00:00:00.001 Started by upstream project "autotest-nightly" build number 3627
00:00:00.001 originally caused by:
00:00:00.002 Started by upstream project "nightly-trigger" build number 3009
00:00:00.002 originally caused by:
00:00:00.002 Started by timer
00:00:00.094 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.094 The recommended git tool is: git
00:00:00.094 using credential 00000000-0000-0000-0000-000000000002
00:00:00.098 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.130 Fetching changes from the remote Git repository
00:00:00.132 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.166 Using shallow fetch with depth 1
00:00:00.166 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.166 > git --version # timeout=10
00:00:00.208 > git --version # 'git version 2.39.2'
00:00:00.208 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.209 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.209 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.598 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.610 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.622 Checking out Revision 6e1fadd1eee50389429f9abb33dde5face8ca717 (FETCH_HEAD)
00:00:04.622 > git config core.sparsecheckout # timeout=10
00:00:04.635 > git read-tree -mu HEAD # timeout=10
00:00:04.651 > git checkout -f 6e1fadd1eee50389429f9abb33dde5face8ca717 # timeout=5
00:00:04.672 Commit message: "pool: attach build logs for failed merge builds"
00:00:04.672 > git rev-list --no-walk 6e1fadd1eee50389429f9abb33dde5face8ca717 # timeout=10
00:00:04.771 [Pipeline] Start of Pipeline
00:00:04.782 [Pipeline] library
00:00:04.783 Loading library shm_lib@master
00:00:04.784 Library shm_lib@master is cached. Copying from home.
00:00:04.800 [Pipeline] node
00:00:04.809 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:04.811 [Pipeline] {
00:00:04.820 [Pipeline] catchError
00:00:04.821 [Pipeline] {
00:00:04.834 [Pipeline] wrap
00:00:04.842 [Pipeline] {
00:00:04.848 [Pipeline] stage
00:00:04.849 [Pipeline] { (Prologue)
00:00:05.018 [Pipeline] sh
00:00:05.298 + logger -p user.info -t JENKINS-CI
00:00:05.317 [Pipeline] echo
00:00:05.318 Node: GP11
00:00:05.325 [Pipeline] sh
00:00:05.624 [Pipeline] setCustomBuildProperty
00:00:05.635 [Pipeline] echo
00:00:05.636 Cleanup processes
00:00:05.641 [Pipeline] sh
00:00:05.927 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.927 1286255 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.944 [Pipeline] sh
00:00:06.231 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.231 ++ grep -v 'sudo pgrep'
00:00:06.231 ++ awk '{print $1}'
00:00:06.231 + sudo kill -9
00:00:06.231 + true
00:00:06.244 [Pipeline] cleanWs
00:00:06.254 [WS-CLEANUP] Deleting project workspace...
00:00:06.254 [WS-CLEANUP] Deferred wipeout is used...
00:00:06.260 [WS-CLEANUP] done
00:00:06.263 [Pipeline] setCustomBuildProperty
00:00:06.276 [Pipeline] sh
00:00:06.557 + sudo git config --global --replace-all safe.directory '*'
00:00:06.619 [Pipeline] nodesByLabel
00:00:06.620 Found a total of 1 nodes with the 'sorcerer' label
00:00:06.628 [Pipeline] httpRequest
00:00:06.632 HttpMethod: GET
00:00:06.633 URL: http://10.211.164.96/packages/jbp_6e1fadd1eee50389429f9abb33dde5face8ca717.tar.gz
00:00:06.638 Sending request to url: http://10.211.164.96/packages/jbp_6e1fadd1eee50389429f9abb33dde5face8ca717.tar.gz
00:00:06.651 Response Code: HTTP/1.1 200 OK
00:00:06.651 Success: Status code 200 is in the accepted range: 200,404
00:00:06.652 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_6e1fadd1eee50389429f9abb33dde5face8ca717.tar.gz
00:00:08.747 [Pipeline] sh
00:00:09.027 + tar --no-same-owner -xf jbp_6e1fadd1eee50389429f9abb33dde5face8ca717.tar.gz
00:00:09.045 [Pipeline] httpRequest
00:00:09.050 HttpMethod: GET
00:00:09.050 URL: http://10.211.164.96/packages/spdk_abd932d6f95cedabe6e56e5390ec197c8d7806e9.tar.gz
00:00:09.052 Sending request to url: http://10.211.164.96/packages/spdk_abd932d6f95cedabe6e56e5390ec197c8d7806e9.tar.gz
00:00:09.074 Response Code: HTTP/1.1 200 OK
00:00:09.074 Success: Status code 200 is in the accepted range: 200,404
00:00:09.075 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_abd932d6f95cedabe6e56e5390ec197c8d7806e9.tar.gz
00:01:23.218 [Pipeline] sh
00:01:23.509 + tar --no-same-owner -xf spdk_abd932d6f95cedabe6e56e5390ec197c8d7806e9.tar.gz
00:01:26.842 [Pipeline] sh
00:01:27.128 + git -C spdk log --oneline -n5
00:01:27.128 abd932d6f Revert "event: switch reactors to poll mode before stopping"
00:01:27.128 3f2c89791 event: switch reactors to poll mode before stopping
00:01:27.128 443e1ea31 setup.sh: emit command line to /dev/kmsg on Linux
00:01:27.128 a1264177c pkgdep/git: Adjust ICE driver to kernel >= 6.8.x
00:01:27.128 af95268b1 pkgdep/git: Adjust QAT driver to kernel >= 6.8.x
00:01:27.140 [Pipeline] }
00:01:27.157 [Pipeline] // stage
00:01:27.166 [Pipeline] stage
00:01:27.168 [Pipeline] { (Prepare)
00:01:27.187 [Pipeline] writeFile
00:01:27.204 [Pipeline] sh
00:01:27.487 + logger -p user.info -t JENKINS-CI
00:01:27.501 [Pipeline] sh
00:01:27.785 + logger -p user.info -t JENKINS-CI
00:01:27.797 [Pipeline] sh
00:01:28.079 + cat autorun-spdk.conf
00:01:28.079 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:28.079 SPDK_TEST_NVMF=1
00:01:28.079 SPDK_TEST_NVME_CLI=1
00:01:28.079 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:28.079 SPDK_TEST_NVMF_NICS=e810
00:01:28.079 SPDK_RUN_UBSAN=1
00:01:28.079 NET_TYPE=phy
00:01:28.087 RUN_NIGHTLY=1
00:01:28.091 [Pipeline] readFile
00:01:28.116 [Pipeline] withEnv
00:01:28.118 [Pipeline] {
00:01:28.132 [Pipeline] sh
00:01:28.415 + set -ex
00:01:28.415 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:28.415 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:28.415 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:28.415 ++ SPDK_TEST_NVMF=1
00:01:28.415 ++ SPDK_TEST_NVME_CLI=1
00:01:28.415 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:28.415 ++ SPDK_TEST_NVMF_NICS=e810
00:01:28.415 ++ SPDK_RUN_UBSAN=1
00:01:28.415 ++ NET_TYPE=phy
00:01:28.415 ++ RUN_NIGHTLY=1
00:01:28.415 + case $SPDK_TEST_NVMF_NICS in
00:01:28.415 + DRIVERS=ice
00:01:28.415 + [[ tcp == \r\d\m\a ]]
00:01:28.415 + [[ -n ice ]]
00:01:28.415 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:28.415 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:28.415 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:28.415 rmmod: ERROR: Module irdma is not currently loaded
00:01:28.415 rmmod: ERROR: Module i40iw is not currently loaded
00:01:28.415 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:28.415 + true
00:01:28.415 + for D in $DRIVERS
00:01:28.415 + sudo modprobe ice
00:01:28.415 + exit 0
00:01:28.425 [Pipeline] }
00:01:28.444 [Pipeline] // withEnv
00:01:28.449 [Pipeline] }
00:01:28.465 [Pipeline] // stage
00:01:28.475 [Pipeline] catchError
00:01:28.476 [Pipeline] {
00:01:28.489 [Pipeline] timeout
00:01:28.489 Timeout set to expire in 40 min
00:01:28.491 [Pipeline] {
00:01:28.505 [Pipeline] stage
00:01:28.507 [Pipeline] { (Tests)
00:01:28.521 [Pipeline] sh
00:01:28.805 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:28.805 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:28.805 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:28.805 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:28.805 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:28.805 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:28.805 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:28.805 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:28.805 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:28.805 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:28.805 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:28.805 + source /etc/os-release
00:01:28.805 ++ NAME='Fedora Linux'
00:01:28.805 ++ VERSION='38 (Cloud Edition)'
00:01:28.805 ++ ID=fedora
00:01:28.805 ++ VERSION_ID=38
00:01:28.805 ++ VERSION_CODENAME=
00:01:28.805 ++ PLATFORM_ID=platform:f38
00:01:28.805 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:01:28.805 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:28.805 ++ LOGO=fedora-logo-icon
00:01:28.805 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:01:28.805 ++ HOME_URL=https://fedoraproject.org/
00:01:28.805 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:01:28.805 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:28.805 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:28.805 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:28.805 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:01:28.805 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:28.805 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:01:28.805 ++ SUPPORT_END=2024-05-14
00:01:28.805 ++ VARIANT='Cloud Edition'
00:01:28.805 ++ VARIANT_ID=cloud
00:01:28.805 + uname -a
00:01:28.805 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:01:28.805 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:29.741 Hugepages
00:01:29.741 node hugesize free / total
00:01:29.741 node0 1048576kB 0 / 0
00:01:29.741 node0 2048kB 0 / 0
00:01:29.741 node1 1048576kB 0 / 0
00:01:29.741 node1 2048kB 0 / 0
00:01:29.741 
00:01:29.741 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:29.741 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:01:29.741 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:01:29.741 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:01:29.741 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:01:29.741 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:01:29.741 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:01:29.741 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:01:29.741 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:01:29.741 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:01:29.741 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:01:29.741 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:01:29.741 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:01:29.741 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:01:29.741 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:01:29.741 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:01:29.741 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:01:29.741 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:01:29.999 + rm -f /tmp/spdk-ld-path
00:01:29.999 + source autorun-spdk.conf
00:01:29.999 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:29.999 ++ SPDK_TEST_NVMF=1
00:01:29.999 ++ SPDK_TEST_NVME_CLI=1
00:01:29.999 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:29.999 ++ SPDK_TEST_NVMF_NICS=e810
00:01:29.999 ++ SPDK_RUN_UBSAN=1
00:01:29.999 ++ NET_TYPE=phy
00:01:29.999 ++ RUN_NIGHTLY=1
00:01:29.999 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:29.999 + [[ -n '' ]]
00:01:29.999 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:29.999 + for M in /var/spdk/build-*-manifest.txt
00:01:29.999 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:29.999 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:29.999 + for M in /var/spdk/build-*-manifest.txt
00:01:29.999 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:29.999 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:29.999 ++ uname
00:01:29.999 + [[ Linux == \L\i\n\u\x ]]
00:01:29.999 + sudo dmesg -T
00:01:29.999 + sudo dmesg --clear
00:01:29.999 + dmesg_pid=1286927
00:01:29.999 + [[ Fedora Linux == FreeBSD ]]
00:01:29.999 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:29.999 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:29.999 + sudo dmesg -Tw
00:01:29.999 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:29.999 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:01:29.999 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:01:29.999 + [[ -x /usr/src/fio-static/fio ]]
00:01:29.999 + export FIO_BIN=/usr/src/fio-static/fio
00:01:29.999 + FIO_BIN=/usr/src/fio-static/fio
00:01:29.999 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:29.999 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:29.999 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:29.999 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:29.999 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:29.999 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:29.999 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:29.999 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:29.999 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:29.999 Test configuration:
00:01:29.999 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:29.999 SPDK_TEST_NVMF=1
00:01:29.999 SPDK_TEST_NVME_CLI=1
00:01:29.999 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:29.999 SPDK_TEST_NVMF_NICS=e810
00:01:29.999 SPDK_RUN_UBSAN=1
00:01:29.999 NET_TYPE=phy
00:01:29.999 RUN_NIGHTLY=1
03:02:04 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:01:29.999 03:02:04 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:29.999 03:02:04 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:29.999 03:02:04 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:29.999 03:02:04 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:29.999 03:02:04 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:30.000 03:02:04 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:30.000 03:02:04 -- paths/export.sh@5 -- $ export PATH
00:01:30.000 03:02:04 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:30.000 03:02:04 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:01:30.000 03:02:04 -- common/autobuild_common.sh@435 -- $ date +%s
00:01:30.000 03:02:04 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1714006924.XXXXXX
00:01:30.000 03:02:04 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1714006924.ztVkZB
00:01:30.000 03:02:04 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]]
00:01:30.000 03:02:04 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']'
00:01:30.000 03:02:04 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:01:30.000 03:02:04 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:01:30.000 03:02:04 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:01:30.000 03:02:04 -- common/autobuild_common.sh@451 -- $ get_config_params
00:01:30.000 03:02:04 -- common/autotest_common.sh@385 -- $ xtrace_disable
00:01:30.000 03:02:04 -- common/autotest_common.sh@10 -- $ set +x
00:01:30.000 03:02:04 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk'
00:01:30.000 03:02:04 -- common/autobuild_common.sh@453 -- $ start_monitor_resources
00:01:30.000 03:02:04 -- pm/common@17 -- $ local monitor
00:01:30.000 03:02:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:30.000 03:02:04 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=1286961
00:01:30.000 03:02:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:30.000 03:02:04 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=1286963
00:01:30.000 03:02:04 -- pm/common@21 -- $ date +%s
00:01:30.000 03:02:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:30.000 03:02:04 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=1286965
00:01:30.000 03:02:04 -- pm/common@21 -- $ date +%s
00:01:30.000 03:02:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:30.000 03:02:04 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=1286969
00:01:30.000 03:02:04 -- pm/common@21 -- $ date +%s
00:01:30.000 03:02:04 -- pm/common@26 -- $ sleep 1
00:01:30.000 03:02:04 -- pm/common@21 -- $ date +%s
00:01:30.000 03:02:04 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1714006924
00:01:30.000 03:02:04 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1714006924
00:01:30.000 03:02:04 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1714006924
00:01:30.000 03:02:04 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1714006924
00:01:30.000 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1714006924_collect-vmstat.pm.log
00:01:30.000 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1714006924_collect-bmc-pm.bmc.pm.log
00:01:30.000 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1714006924_collect-cpu-temp.pm.log
00:01:30.000 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1714006924_collect-cpu-load.pm.log
00:01:30.939 03:02:05 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT
00:01:30.940 03:02:05 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:30.940 03:02:05 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:30.940 03:02:05 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:30.940 03:02:05 -- spdk/autobuild.sh@16 -- $ date -u
00:01:30.940 Thu Apr 25 01:02:05 AM UTC 2024
00:01:30.940 03:02:05 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:30.940 v24.05-pre-438-gabd932d6f
00:01:30.940 03:02:05 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:01:30.940 03:02:05 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:30.940 03:02:05 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:30.940 03:02:05 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']'
00:01:30.940 03:02:05 -- common/autotest_common.sh@1093 -- $ xtrace_disable
00:01:30.940 03:02:05 -- common/autotest_common.sh@10 -- $ set +x
00:01:31.198 ************************************
00:01:31.198 START TEST ubsan
00:01:31.198 ************************************
00:01:31.198 03:02:05 -- common/autotest_common.sh@1111 -- $ echo 'using ubsan'
00:01:31.198 using ubsan
00:01:31.198 
00:01:31.198 real 0m0.000s
00:01:31.198 user 0m0.000s
00:01:31.198 sys 0m0.000s
00:01:31.198 03:02:05 -- common/autotest_common.sh@1112 -- $ xtrace_disable
00:01:31.198 03:02:05 -- common/autotest_common.sh@10 -- $ set +x
00:01:31.198 ************************************
00:01:31.198 END TEST ubsan
00:01:31.198 ************************************
00:01:31.198 03:02:05 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:31.198 03:02:05 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:31.198 03:02:05 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:31.198 03:02:05 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:31.198 03:02:05 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:31.198 03:02:05 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:31.198 03:02:05 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:31.198 03:02:05 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:31.198 03:02:05 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-shared
00:01:31.198 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:01:31.198 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:31.457 Using 'verbs' RDMA provider
00:01:42.026 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:52.004 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:52.004 Creating mk/config.mk...done.
00:01:52.004 Creating mk/cc.flags.mk...done.
00:01:52.004 Type 'make' to build.
00:01:52.004 03:02:26 -- spdk/autobuild.sh@69 -- $ run_test make make -j48
00:01:52.004 03:02:26 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']'
00:01:52.004 03:02:26 -- common/autotest_common.sh@1093 -- $ xtrace_disable
00:01:52.004 03:02:26 -- common/autotest_common.sh@10 -- $ set +x
00:01:52.004 ************************************
00:01:52.004 START TEST make
00:01:52.004 ************************************
00:01:52.004 03:02:26 -- common/autotest_common.sh@1111 -- $ make -j48
00:01:52.004 make[1]: Nothing to be done for 'all'.
00:02:02.019 The Meson build system
00:02:02.019 Version: 1.3.1
00:02:02.019 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:02:02.019 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:02:02.019 Build type: native build
00:02:02.019 Program cat found: YES (/usr/bin/cat)
00:02:02.019 Project name: DPDK
00:02:02.019 Project version: 23.11.0
00:02:02.019 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:02:02.019 C linker for the host machine: cc ld.bfd 2.39-16
00:02:02.019 Host machine cpu family: x86_64
00:02:02.019 Host machine cpu: x86_64
00:02:02.019 Message: ## Building in Developer Mode ##
00:02:02.019 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:02.019 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:02:02.019 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:02.020 Program python3 found: YES (/usr/bin/python3)
00:02:02.020 Program cat found: YES (/usr/bin/cat)
00:02:02.020 Compiler for C supports arguments -march=native: YES
00:02:02.020 Checking for size of "void *" : 8
00:02:02.020 Checking for size of "void *" : 8 (cached)
00:02:02.020 Library m found: YES
00:02:02.020 Library numa found: YES
00:02:02.020 Has header "numaif.h" : YES
00:02:02.020 Library fdt found: NO
00:02:02.020 Library execinfo found: NO
00:02:02.020 Has header "execinfo.h" : YES
00:02:02.020 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:02:02.020 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:02.020 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:02.020 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:02.020 Run-time dependency openssl found: YES 3.0.9
00:02:02.020 Run-time dependency libpcap found: YES 1.10.4
00:02:02.020 Has header "pcap.h" with dependency libpcap: YES
00:02:02.020 Compiler for C supports arguments -Wcast-qual: YES
00:02:02.020 Compiler for C supports arguments -Wdeprecated: YES
00:02:02.020 Compiler for C supports arguments -Wformat: YES
00:02:02.020 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:02.020 Compiler for C supports arguments -Wformat-security: NO
00:02:02.020 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:02.020 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:02.020 Compiler for C supports arguments -Wnested-externs: YES
00:02:02.020 Compiler for C supports arguments -Wold-style-definition: YES
00:02:02.020 Compiler for C supports arguments -Wpointer-arith: YES
00:02:02.020 Compiler for C supports arguments -Wsign-compare: YES
00:02:02.020 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:02.020 Compiler for C supports arguments -Wundef: YES
00:02:02.020 Compiler for C supports arguments -Wwrite-strings: YES
00:02:02.020 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:02.020 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:02.020 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:02.020 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:02.020 Program objdump found: YES (/usr/bin/objdump)
00:02:02.020 Compiler for C supports arguments -mavx512f: YES
00:02:02.020 Checking if "AVX512 checking" compiles: YES
00:02:02.020 Fetching value of define "__SSE4_2__" : 1
00:02:02.020 Fetching value of define "__AES__" : 1
00:02:02.020 Fetching value of define "__AVX__" : 1
00:02:02.020 Fetching value of define "__AVX2__" : (undefined)
00:02:02.020 Fetching value of define "__AVX512BW__" : (undefined)
00:02:02.020 Fetching value of define "__AVX512CD__" : (undefined)
00:02:02.020 Fetching value of define "__AVX512DQ__" : (undefined)
00:02:02.020 Fetching value of define "__AVX512F__" : (undefined)
00:02:02.020 Fetching value of define "__AVX512VL__" : (undefined)
00:02:02.020 Fetching value of define "__PCLMUL__" : 1
00:02:02.020 Fetching value of define "__RDRND__" : 1
00:02:02.020 Fetching value of define "__RDSEED__" : (undefined)
00:02:02.020 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:02:02.020 Fetching value of define "__znver1__" : (undefined)
00:02:02.020 Fetching value of define "__znver2__" : (undefined)
00:02:02.020 Fetching value of define "__znver3__" : (undefined)
00:02:02.020 Fetching value of define "__znver4__" : (undefined)
00:02:02.020 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:02.020 Message: lib/log: Defining dependency "log"
00:02:02.020 Message: lib/kvargs: Defining dependency "kvargs"
00:02:02.020 Message: lib/telemetry: Defining dependency "telemetry"
00:02:02.020 Checking for function "getentropy" : NO
00:02:02.020 Message: lib/eal: Defining dependency "eal"
00:02:02.020 Message: lib/ring: Defining dependency "ring"
00:02:02.020 Message: lib/rcu: Defining dependency "rcu"
00:02:02.020 Message: lib/mempool: Defining dependency "mempool"
00:02:02.020 Message: lib/mbuf: Defining dependency "mbuf"
00:02:02.020 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:02.020 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:02:02.020 Compiler for C supports arguments -mpclmul: YES
00:02:02.020 Compiler for C supports arguments -maes: YES
00:02:02.020 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:02.020 Compiler for C supports arguments -mavx512bw: YES
00:02:02.020 Compiler for C supports arguments -mavx512dq: YES
00:02:02.020 Compiler for C supports arguments -mavx512vl: YES
00:02:02.020 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:02.020 Compiler for C supports arguments -mavx2: YES
00:02:02.020 Compiler for C supports arguments -mavx: YES
00:02:02.020 Message: lib/net: Defining dependency "net"
00:02:02.020 Message: lib/meter: Defining dependency "meter"
00:02:02.020 Message: lib/ethdev: Defining dependency "ethdev"
00:02:02.020 Message: lib/pci: Defining dependency "pci"
00:02:02.020 Message: lib/cmdline: Defining dependency "cmdline"
00:02:02.020 Message: lib/hash: Defining dependency "hash"
00:02:02.020 Message: lib/timer: Defining dependency "timer"
00:02:02.020 Message: lib/compressdev: Defining dependency "compressdev"
00:02:02.020 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:02.020 Message: lib/dmadev: Defining dependency "dmadev"
00:02:02.020 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:02.020 Message: lib/power: Defining dependency "power"
00:02:02.020 Message: lib/reorder: Defining dependency "reorder"
00:02:02.020 Message: lib/security: Defining dependency "security"
00:02:02.020 Has header "linux/userfaultfd.h" : YES
00:02:02.020 Has header "linux/vduse.h" : YES
00:02:02.020 Message: lib/vhost: Defining dependency "vhost"
00:02:02.020 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:02.020 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:02.020 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:02.020 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:02.020 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:02.020 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:02.020 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:02.020 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:02.020 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:02.020 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:02.020 Program doxygen found: YES (/usr/bin/doxygen)
00:02:02.020 Configuring doxy-api-html.conf using configuration
00:02:02.020 Configuring doxy-api-man.conf using configuration
00:02:02.020 Program mandb found: YES (/usr/bin/mandb)
00:02:02.020 Program sphinx-build found: NO
00:02:02.020 Configuring rte_build_config.h using configuration
00:02:02.020 Message:
00:02:02.020 =================
00:02:02.020 Applications Enabled
00:02:02.020 =================
00:02:02.020 
00:02:02.020 apps:
00:02:02.020 
00:02:02.020 
00:02:02.020 Message:
00:02:02.020 =================
00:02:02.020 Libraries Enabled
00:02:02.020 =================
00:02:02.020 
00:02:02.020 libs:
00:02:02.020 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:02.020 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:02.020 cryptodev, dmadev, power, reorder, security, vhost,
00:02:02.020 
00:02:02.020 Message:
00:02:02.020 ===============
00:02:02.020 Drivers Enabled
00:02:02.020 ===============
00:02:02.020 
00:02:02.020 common:
00:02:02.020 
00:02:02.020 bus:
00:02:02.020 pci, vdev,
00:02:02.020 mempool:
00:02:02.020 ring,
00:02:02.020 dma:
00:02:02.020 
00:02:02.020 net:
00:02:02.020 
00:02:02.020 crypto:
00:02:02.020 
00:02:02.020 compress:
00:02:02.020 
00:02:02.020 vdpa:
00:02:02.020 
00:02:02.020 
00:02:02.020 Message:
00:02:02.020 =================
00:02:02.020 Content Skipped
00:02:02.020 =================
00:02:02.020 
00:02:02.020 apps:
00:02:02.020 dumpcap: explicitly disabled via build config
00:02:02.020 graph: explicitly disabled via build config
00:02:02.020 pdump: explicitly disabled via build config
00:02:02.020 proc-info: explicitly disabled via build config
00:02:02.020 test-acl: explicitly disabled via build config
00:02:02.020 test-bbdev: explicitly disabled via build config
00:02:02.020 test-cmdline: explicitly disabled via build config
00:02:02.021 test-compress-perf: explicitly disabled via build config
00:02:02.021 test-crypto-perf: explicitly disabled via build config
00:02:02.021 test-dma-perf: explicitly disabled via build config
00:02:02.021 test-eventdev: explicitly disabled via build config
00:02:02.021 test-fib: explicitly disabled via build config
00:02:02.021 test-flow-perf: explicitly disabled via build config
00:02:02.021 test-gpudev: explicitly disabled via build config
00:02:02.021 test-mldev: explicitly disabled via build config
00:02:02.021 test-pipeline: explicitly disabled via build config
00:02:02.021 test-pmd: explicitly disabled via build config
00:02:02.021 test-regex: explicitly disabled via build config
00:02:02.021 test-sad: explicitly disabled via build config
00:02:02.021 test-security-perf: explicitly disabled via build config
00:02:02.021 
00:02:02.021 libs:
00:02:02.021 metrics: explicitly disabled via build config
00:02:02.021 acl: explicitly disabled via build config
00:02:02.021 bbdev: explicitly disabled via build config
00:02:02.021 bitratestats: explicitly disabled via build config
00:02:02.021 bpf: explicitly disabled via build config
00:02:02.021 cfgfile: explicitly disabled via build config
00:02:02.021 distributor: explicitly disabled via build config
00:02:02.021 efd: explicitly disabled via build config
00:02:02.021 eventdev: explicitly disabled via build config
00:02:02.021 dispatcher: explicitly disabled via build config
00:02:02.021 gpudev: explicitly disabled via build config
00:02:02.021 gro: explicitly disabled via build config
00:02:02.021 gso: explicitly disabled via build config
00:02:02.021 ip_frag: explicitly disabled via build config
00:02:02.021 jobstats: explicitly disabled via build config
00:02:02.021 latencystats: explicitly disabled via build config
00:02:02.021 lpm: explicitly disabled via build config
00:02:02.021 member: explicitly disabled via build config
00:02:02.021 pcapng: explicitly disabled via build config
00:02:02.021 rawdev: explicitly disabled via build config
00:02:02.021 regexdev: explicitly disabled via build config
00:02:02.021 mldev: explicitly disabled via build config
00:02:02.021 rib: explicitly disabled via build config
00:02:02.021 sched: explicitly disabled via build config
00:02:02.021 stack: explicitly disabled via build config
00:02:02.021 ipsec: explicitly disabled via build config
00:02:02.021 pdcp: explicitly disabled via build config
00:02:02.021 fib: explicitly disabled via build config
00:02:02.021 port: explicitly disabled via build config
00:02:02.021 pdump: explicitly disabled via build config
00:02:02.021 table: explicitly disabled via build config
00:02:02.021 pipeline: explicitly disabled via build config
00:02:02.021 graph: explicitly disabled via build config
00:02:02.021 node: explicitly disabled via build config
00:02:02.021 
00:02:02.021 drivers:
00:02:02.021 common/cpt: not in enabled drivers build config
00:02:02.021 common/dpaax: not in enabled drivers build config
00:02:02.021 common/iavf: not in enabled drivers build config
00:02:02.021 common/idpf: not in enabled drivers build config
00:02:02.021 common/mvep: not in enabled drivers build config
00:02:02.021 common/octeontx: not in enabled drivers build config
00:02:02.021 bus/auxiliary: not in enabled drivers build config
00:02:02.021 bus/cdx: not in enabled drivers build config
00:02:02.021 bus/dpaa: not in enabled drivers build config
00:02:02.021 bus/fslmc: not in enabled drivers build config
00:02:02.021 bus/ifpga: not in enabled drivers build config
00:02:02.021 bus/platform: not in enabled drivers build config
00:02:02.021 bus/vmbus: not in enabled drivers build config
00:02:02.021 common/cnxk: not in enabled drivers build config
00:02:02.021 common/mlx5: not in enabled drivers build config
00:02:02.021 common/nfp: not in enabled drivers build config
00:02:02.021 common/qat: not in enabled drivers build config
00:02:02.021 common/sfc_efx: not in enabled drivers build config
00:02:02.021 mempool/bucket: not in enabled drivers build config
00:02:02.021 mempool/cnxk: not in enabled drivers build config
00:02:02.021 mempool/dpaa: not in enabled drivers build config
00:02:02.021 mempool/dpaa2: not in enabled drivers build config
00:02:02.021 mempool/octeontx: not in enabled drivers build config
00:02:02.021 mempool/stack: not in enabled drivers build config
00:02:02.021 dma/cnxk: not in
enabled drivers build config 00:02:02.021 dma/dpaa: not in enabled drivers build config 00:02:02.021 dma/dpaa2: not in enabled drivers build config 00:02:02.021 dma/hisilicon: not in enabled drivers build config 00:02:02.021 dma/idxd: not in enabled drivers build config 00:02:02.021 dma/ioat: not in enabled drivers build config 00:02:02.021 dma/skeleton: not in enabled drivers build config 00:02:02.021 net/af_packet: not in enabled drivers build config 00:02:02.021 net/af_xdp: not in enabled drivers build config 00:02:02.021 net/ark: not in enabled drivers build config 00:02:02.021 net/atlantic: not in enabled drivers build config 00:02:02.021 net/avp: not in enabled drivers build config 00:02:02.021 net/axgbe: not in enabled drivers build config 00:02:02.021 net/bnx2x: not in enabled drivers build config 00:02:02.021 net/bnxt: not in enabled drivers build config 00:02:02.021 net/bonding: not in enabled drivers build config 00:02:02.021 net/cnxk: not in enabled drivers build config 00:02:02.021 net/cpfl: not in enabled drivers build config 00:02:02.021 net/cxgbe: not in enabled drivers build config 00:02:02.021 net/dpaa: not in enabled drivers build config 00:02:02.021 net/dpaa2: not in enabled drivers build config 00:02:02.021 net/e1000: not in enabled drivers build config 00:02:02.021 net/ena: not in enabled drivers build config 00:02:02.021 net/enetc: not in enabled drivers build config 00:02:02.021 net/enetfec: not in enabled drivers build config 00:02:02.021 net/enic: not in enabled drivers build config 00:02:02.021 net/failsafe: not in enabled drivers build config 00:02:02.021 net/fm10k: not in enabled drivers build config 00:02:02.021 net/gve: not in enabled drivers build config 00:02:02.021 net/hinic: not in enabled drivers build config 00:02:02.021 net/hns3: not in enabled drivers build config 00:02:02.021 net/i40e: not in enabled drivers build config 00:02:02.021 net/iavf: not in enabled drivers build config 00:02:02.021 net/ice: not in enabled drivers 
build config 00:02:02.021 net/idpf: not in enabled drivers build config 00:02:02.021 net/igc: not in enabled drivers build config 00:02:02.021 net/ionic: not in enabled drivers build config 00:02:02.021 net/ipn3ke: not in enabled drivers build config 00:02:02.021 net/ixgbe: not in enabled drivers build config 00:02:02.021 net/mana: not in enabled drivers build config 00:02:02.021 net/memif: not in enabled drivers build config 00:02:02.021 net/mlx4: not in enabled drivers build config 00:02:02.021 net/mlx5: not in enabled drivers build config 00:02:02.021 net/mvneta: not in enabled drivers build config 00:02:02.021 net/mvpp2: not in enabled drivers build config 00:02:02.021 net/netvsc: not in enabled drivers build config 00:02:02.021 net/nfb: not in enabled drivers build config 00:02:02.021 net/nfp: not in enabled drivers build config 00:02:02.021 net/ngbe: not in enabled drivers build config 00:02:02.021 net/null: not in enabled drivers build config 00:02:02.021 net/octeontx: not in enabled drivers build config 00:02:02.021 net/octeon_ep: not in enabled drivers build config 00:02:02.021 net/pcap: not in enabled drivers build config 00:02:02.021 net/pfe: not in enabled drivers build config 00:02:02.021 net/qede: not in enabled drivers build config 00:02:02.021 net/ring: not in enabled drivers build config 00:02:02.021 net/sfc: not in enabled drivers build config 00:02:02.021 net/softnic: not in enabled drivers build config 00:02:02.021 net/tap: not in enabled drivers build config 00:02:02.021 net/thunderx: not in enabled drivers build config 00:02:02.021 net/txgbe: not in enabled drivers build config 00:02:02.021 net/vdev_netvsc: not in enabled drivers build config 00:02:02.021 net/vhost: not in enabled drivers build config 00:02:02.021 net/virtio: not in enabled drivers build config 00:02:02.021 net/vmxnet3: not in enabled drivers build config 00:02:02.021 raw/*: missing internal dependency, "rawdev" 00:02:02.021 crypto/armv8: not in enabled drivers build config 
00:02:02.021 crypto/bcmfs: not in enabled drivers build config 00:02:02.021 crypto/caam_jr: not in enabled drivers build config 00:02:02.021 crypto/ccp: not in enabled drivers build config 00:02:02.021 crypto/cnxk: not in enabled drivers build config 00:02:02.021 crypto/dpaa_sec: not in enabled drivers build config 00:02:02.021 crypto/dpaa2_sec: not in enabled drivers build config 00:02:02.021 crypto/ipsec_mb: not in enabled drivers build config 00:02:02.021 crypto/mlx5: not in enabled drivers build config 00:02:02.021 crypto/mvsam: not in enabled drivers build config 00:02:02.021 crypto/nitrox: not in enabled drivers build config 00:02:02.021 crypto/null: not in enabled drivers build config 00:02:02.021 crypto/octeontx: not in enabled drivers build config 00:02:02.022 crypto/openssl: not in enabled drivers build config 00:02:02.022 crypto/scheduler: not in enabled drivers build config 00:02:02.022 crypto/uadk: not in enabled drivers build config 00:02:02.022 crypto/virtio: not in enabled drivers build config 00:02:02.022 compress/isal: not in enabled drivers build config 00:02:02.022 compress/mlx5: not in enabled drivers build config 00:02:02.022 compress/octeontx: not in enabled drivers build config 00:02:02.022 compress/zlib: not in enabled drivers build config 00:02:02.022 regex/*: missing internal dependency, "regexdev" 00:02:02.022 ml/*: missing internal dependency, "mldev" 00:02:02.022 vdpa/ifc: not in enabled drivers build config 00:02:02.022 vdpa/mlx5: not in enabled drivers build config 00:02:02.022 vdpa/nfp: not in enabled drivers build config 00:02:02.022 vdpa/sfc: not in enabled drivers build config 00:02:02.022 event/*: missing internal dependency, "eventdev" 00:02:02.022 baseband/*: missing internal dependency, "bbdev" 00:02:02.022 gpu/*: missing internal dependency, "gpudev" 00:02:02.022 00:02:02.022 00:02:02.022 Build targets in project: 85 00:02:02.022 00:02:02.022 DPDK 23.11.0 00:02:02.022 00:02:02.022 User defined options 00:02:02.022 buildtype 
: debug 00:02:02.022 default_library : shared 00:02:02.022 libdir : lib 00:02:02.022 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:02.022 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:02.022 c_link_args : 00:02:02.022 cpu_instruction_set: native 00:02:02.022 disable_apps : test-acl,test-bbdev,test-crypto-perf,test-fib,test-pipeline,test-gpudev,test-flow-perf,pdump,dumpcap,test-sad,test-cmdline,test-eventdev,proc-info,test,test-dma-perf,test-pmd,test-mldev,test-compress-perf,test-security-perf,graph,test-regex 00:02:02.022 disable_libs : pipeline,member,eventdev,efd,bbdev,cfgfile,rib,sched,mldev,metrics,lpm,latencystats,pdump,pdcp,bpf,ipsec,fib,ip_frag,table,port,stack,gro,jobstats,regexdev,rawdev,pcapng,dispatcher,node,bitratestats,acl,gpudev,distributor,graph,gso 00:02:02.022 enable_docs : false 00:02:02.022 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:02.022 enable_kmods : false 00:02:02.022 tests : false 00:02:02.022 00:02:02.022 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:02.022 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:02:02.022 [1/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:02.022 [2/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:02.022 [3/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:02.022 [4/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:02.022 [5/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:02.022 [6/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:02.022 [7/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:02.022 [8/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:02.022 [9/265] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:02.022 [10/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:02.022 [11/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:02.022 [12/265] Linking static target lib/librte_kvargs.a 00:02:02.022 [13/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:02.022 [14/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:02.022 [15/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:02.022 [16/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:02.022 [17/265] Linking static target lib/librte_log.a 00:02:02.022 [18/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:02.022 [19/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:02.022 [20/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:02.022 [21/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:02.022 [22/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.022 [23/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:02.022 [24/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:02.022 [25/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:02.022 [26/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:02.022 [27/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:02.022 [28/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:02.022 [29/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:02.022 [30/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:02.022 [31/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:02.022 [32/265] Compiling 
C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:02.022 [33/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:02.022 [34/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:02.022 [35/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:02.022 [36/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:02.022 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:02.022 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:02.022 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:02.022 [40/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:02.022 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:02.022 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:02.022 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:02.022 [44/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:02.022 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:02.022 [46/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:02.022 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:02.022 [48/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:02.022 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:02.022 [50/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:02.022 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:02.022 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:02.022 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:02.283 [54/265] Compiling C 
object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:02.283 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:02.283 [56/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:02.283 [57/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:02.283 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:02.283 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:02.283 [60/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:02.283 [61/265] Linking static target lib/librte_telemetry.a 00:02:02.283 [62/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:02.283 [63/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:02.283 [64/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:02.283 [65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:02.283 [66/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:02.283 [67/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.283 [68/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:02.283 [69/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:02.543 [70/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:02.543 [71/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:02.543 [72/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:02.543 [73/265] Linking static target lib/librte_pci.a 00:02:02.543 [74/265] Linking target lib/librte_log.so.24.0 00:02:02.543 [75/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:02.543 [76/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:02.543 [77/265] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:02.543 [78/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:02.543 [79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:02.543 [80/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:02.802 [81/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:02.802 [82/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:02.802 [83/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:02.802 [84/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:02.802 [85/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:02.802 [86/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:02.802 [87/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:02.802 [88/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:02.802 [89/265] Linking target lib/librte_kvargs.so.24.0 00:02:02.802 [90/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:03.065 [91/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:03.065 [92/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:03.065 [93/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:03.065 [94/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:03.065 [95/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:03.065 [96/265] Linking static target lib/librte_ring.a 00:02:03.065 [97/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:03.065 [98/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:03.065 [99/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.065 
[100/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:03.065 [101/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:03.065 [102/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:03.065 [103/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:03.065 [104/265] Linking static target lib/librte_meter.a 00:02:03.065 [105/265] Linking static target lib/librte_eal.a 00:02:03.065 [106/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:03.065 [107/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:03.065 [108/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:03.065 [109/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:03.065 [110/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:03.065 [111/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:03.326 [112/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:03.326 [113/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:03.326 [114/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:03.326 [115/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:03.326 [116/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:03.326 [117/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:03.326 [118/265] Linking static target lib/librte_rcu.a 00:02:03.326 [119/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:03.326 [120/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:03.326 [121/265] Linking static target lib/librte_mempool.a 00:02:03.326 [122/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:03.326 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:03.326 
[124/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:03.326 [125/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:03.326 [126/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:03.326 [127/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:03.326 [128/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.327 [129/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:03.327 [130/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:03.587 [131/265] Linking static target lib/librte_cmdline.a 00:02:03.587 [132/265] Linking target lib/librte_telemetry.so.24.0 00:02:03.587 [133/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:03.587 [134/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.587 [135/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:03.587 [136/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:03.587 [137/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:03.587 [138/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:03.587 [139/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.587 [140/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:03.850 [141/265] Linking static target lib/librte_net.a 00:02:03.850 [142/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:03.850 [143/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:03.850 [144/265] Linking static target lib/librte_timer.a 00:02:03.850 [145/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:03.850 [146/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 
00:02:03.850 [147/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:03.850 [148/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.850 [149/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:03.850 [150/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:03.850 [151/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:03.850 [152/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:04.120 [153/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:04.120 [154/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:04.120 [155/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.120 [156/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:04.120 [157/265] Linking static target lib/librte_dmadev.a 00:02:04.120 [158/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:04.120 [159/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:04.120 [160/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:04.120 [161/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:04.403 [162/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.403 [163/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:04.403 [164/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.403 [165/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:04.403 [166/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:04.403 [167/265] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:04.403 [168/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:04.403 [169/265] Linking static target lib/librte_compressdev.a 00:02:04.403 [170/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:04.403 [171/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:04.403 [172/265] Linking static target lib/librte_hash.a 00:02:04.403 [173/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:04.403 [174/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:04.403 [175/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:04.403 [176/265] Linking static target lib/librte_power.a 00:02:04.403 [177/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:04.403 [178/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:04.403 [179/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:04.404 [180/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:04.404 [181/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.662 [182/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:04.662 [183/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.662 [184/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:04.662 [185/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:04.662 [186/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:04.662 [187/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:04.662 [188/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:04.662 [189/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:04.662 
[190/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:04.662 [191/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:04.662 [192/265] Linking static target lib/librte_reorder.a 00:02:04.662 [193/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:04.662 [194/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:04.662 [195/265] Linking static target lib/librte_security.a 00:02:04.662 [196/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:04.662 [197/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.921 [198/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:04.921 [199/265] Linking static target lib/librte_mbuf.a 00:02:04.921 [200/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:04.921 [201/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:04.921 [202/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:04.921 [203/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.921 [204/265] Linking static target drivers/librte_bus_vdev.a 00:02:04.921 [205/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:04.921 [206/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:04.921 [207/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:04.921 [208/265] Linking static target drivers/librte_bus_pci.a 00:02:04.921 [209/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.921 [210/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:04.921 [211/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:04.921 
[212/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:04.921 [213/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:04.921 [214/265] Linking static target drivers/librte_mempool_ring.a 00:02:04.921 [215/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.180 [216/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.180 [217/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:05.180 [218/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.180 [219/265] Linking static target lib/librte_ethdev.a 00:02:05.180 [220/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:05.180 [221/265] Linking static target lib/librte_cryptodev.a 00:02:05.180 [222/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.438 [223/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.374 [224/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.751 [225/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:09.128 [226/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.385 [227/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.385 [228/265] Linking target lib/librte_eal.so.24.0 00:02:09.643 [229/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:09.643 [230/265] Linking target lib/librte_ring.so.24.0 00:02:09.643 [231/265] Linking target lib/librte_timer.so.24.0 00:02:09.643 [232/265] Linking target lib/librte_meter.so.24.0 00:02:09.643 [233/265] Linking 
target lib/librte_pci.so.24.0 00:02:09.643 [234/265] Linking target drivers/librte_bus_vdev.so.24.0 00:02:09.643 [235/265] Linking target lib/librte_dmadev.so.24.0 00:02:09.643 [236/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:09.644 [237/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:09.644 [238/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:09.644 [239/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:09.644 [240/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:09.644 [241/265] Linking target lib/librte_rcu.so.24.0 00:02:09.644 [242/265] Linking target lib/librte_mempool.so.24.0 00:02:09.644 [243/265] Linking target drivers/librte_bus_pci.so.24.0 00:02:09.902 [244/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:09.902 [245/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:09.902 [246/265] Linking target drivers/librte_mempool_ring.so.24.0 00:02:09.902 [247/265] Linking target lib/librte_mbuf.so.24.0 00:02:10.160 [248/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:10.160 [249/265] Linking target lib/librte_net.so.24.0 00:02:10.160 [250/265] Linking target lib/librte_compressdev.so.24.0 00:02:10.160 [251/265] Linking target lib/librte_reorder.so.24.0 00:02:10.160 [252/265] Linking target lib/librte_cryptodev.so.24.0 00:02:10.160 [253/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:10.160 [254/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:10.160 [255/265] Linking target lib/librte_hash.so.24.0 00:02:10.160 [256/265] Linking target lib/librte_security.so.24.0 00:02:10.160 [257/265] Linking target lib/librte_cmdline.so.24.0 00:02:10.160 [258/265] Linking 
target lib/librte_ethdev.so.24.0 00:02:10.419 [259/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:10.419 [260/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:10.419 [261/265] Linking target lib/librte_power.so.24.0 00:02:12.951 [262/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:12.951 [263/265] Linking static target lib/librte_vhost.a 00:02:13.886 [264/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.886 [265/265] Linking target lib/librte_vhost.so.24.0 00:02:13.886 INFO: autodetecting backend as ninja 00:02:13.886 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:02:14.822 CC lib/ut/ut.o 00:02:14.822 CC lib/log/log.o 00:02:14.822 CC lib/log/log_flags.o 00:02:14.822 CC lib/log/log_deprecated.o 00:02:14.822 CC lib/ut_mock/mock.o 00:02:14.822 LIB libspdk_ut_mock.a 00:02:14.822 LIB libspdk_ut.a 00:02:14.822 LIB libspdk_log.a 00:02:14.822 SO libspdk_ut_mock.so.6.0 00:02:15.081 SO libspdk_ut.so.2.0 00:02:15.081 SO libspdk_log.so.7.0 00:02:15.081 SYMLINK libspdk_ut_mock.so 00:02:15.081 SYMLINK libspdk_ut.so 00:02:15.081 SYMLINK libspdk_log.so 00:02:15.081 CC lib/dma/dma.o 00:02:15.081 CC lib/util/base64.o 00:02:15.081 CXX lib/trace_parser/trace.o 00:02:15.081 CC lib/ioat/ioat.o 00:02:15.081 CC lib/util/bit_array.o 00:02:15.081 CC lib/util/cpuset.o 00:02:15.081 CC lib/util/crc16.o 00:02:15.081 CC lib/util/crc32.o 00:02:15.081 CC lib/util/crc32c.o 00:02:15.081 CC lib/util/crc32_ieee.o 00:02:15.081 CC lib/util/crc64.o 00:02:15.081 CC lib/util/dif.o 00:02:15.081 CC lib/util/fd.o 00:02:15.081 CC lib/util/file.o 00:02:15.081 CC lib/util/hexlify.o 00:02:15.081 CC lib/util/iov.o 00:02:15.081 CC lib/util/math.o 00:02:15.081 CC lib/util/pipe.o 00:02:15.081 CC lib/util/strerror_tls.o 00:02:15.081 CC lib/util/string.o 00:02:15.081 CC 
lib/util/uuid.o 00:02:15.081 CC lib/util/fd_group.o 00:02:15.081 CC lib/util/xor.o 00:02:15.081 CC lib/util/zipf.o 00:02:15.339 CC lib/vfio_user/host/vfio_user_pci.o 00:02:15.339 CC lib/vfio_user/host/vfio_user.o 00:02:15.339 LIB libspdk_dma.a 00:02:15.339 SO libspdk_dma.so.4.0 00:02:15.598 SYMLINK libspdk_dma.so 00:02:15.598 LIB libspdk_vfio_user.a 00:02:15.598 LIB libspdk_ioat.a 00:02:15.598 SO libspdk_vfio_user.so.5.0 00:02:15.598 SO libspdk_ioat.so.7.0 00:02:15.598 SYMLINK libspdk_vfio_user.so 00:02:15.598 SYMLINK libspdk_ioat.so 00:02:15.856 LIB libspdk_util.a 00:02:15.856 SO libspdk_util.so.9.0 00:02:15.856 SYMLINK libspdk_util.so 00:02:16.115 CC lib/json/json_parse.o 00:02:16.115 CC lib/env_dpdk/env.o 00:02:16.115 CC lib/idxd/idxd.o 00:02:16.115 CC lib/json/json_util.o 00:02:16.115 CC lib/vmd/vmd.o 00:02:16.115 CC lib/env_dpdk/memory.o 00:02:16.115 CC lib/idxd/idxd_user.o 00:02:16.115 CC lib/json/json_write.o 00:02:16.115 CC lib/env_dpdk/pci.o 00:02:16.115 CC lib/vmd/led.o 00:02:16.115 CC lib/env_dpdk/init.o 00:02:16.115 CC lib/env_dpdk/threads.o 00:02:16.115 CC lib/rdma/common.o 00:02:16.115 CC lib/env_dpdk/pci_ioat.o 00:02:16.115 CC lib/conf/conf.o 00:02:16.115 CC lib/rdma/rdma_verbs.o 00:02:16.115 CC lib/env_dpdk/pci_virtio.o 00:02:16.115 CC lib/env_dpdk/pci_vmd.o 00:02:16.115 CC lib/env_dpdk/pci_idxd.o 00:02:16.115 CC lib/env_dpdk/pci_event.o 00:02:16.115 CC lib/env_dpdk/sigbus_handler.o 00:02:16.115 CC lib/env_dpdk/pci_dpdk.o 00:02:16.115 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:16.115 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:16.115 LIB libspdk_trace_parser.a 00:02:16.115 SO libspdk_trace_parser.so.5.0 00:02:16.374 SYMLINK libspdk_trace_parser.so 00:02:16.374 LIB libspdk_conf.a 00:02:16.374 SO libspdk_conf.so.6.0 00:02:16.374 LIB libspdk_json.a 00:02:16.374 SYMLINK libspdk_conf.so 00:02:16.374 LIB libspdk_rdma.a 00:02:16.374 SO libspdk_rdma.so.6.0 00:02:16.374 SO libspdk_json.so.6.0 00:02:16.632 SYMLINK libspdk_rdma.so 00:02:16.632 SYMLINK libspdk_json.so 
00:02:16.632 CC lib/jsonrpc/jsonrpc_server.o 00:02:16.632 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:16.632 CC lib/jsonrpc/jsonrpc_client.o 00:02:16.632 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:16.632 LIB libspdk_idxd.a 00:02:16.632 SO libspdk_idxd.so.12.0 00:02:16.891 SYMLINK libspdk_idxd.so 00:02:16.891 LIB libspdk_vmd.a 00:02:16.891 SO libspdk_vmd.so.6.0 00:02:16.892 SYMLINK libspdk_vmd.so 00:02:16.892 LIB libspdk_jsonrpc.a 00:02:16.892 SO libspdk_jsonrpc.so.6.0 00:02:17.150 SYMLINK libspdk_jsonrpc.so 00:02:17.150 CC lib/rpc/rpc.o 00:02:17.409 LIB libspdk_rpc.a 00:02:17.409 SO libspdk_rpc.so.6.0 00:02:17.409 SYMLINK libspdk_rpc.so 00:02:17.670 CC lib/trace/trace.o 00:02:17.670 CC lib/trace/trace_flags.o 00:02:17.670 CC lib/keyring/keyring.o 00:02:17.670 CC lib/trace/trace_rpc.o 00:02:17.670 CC lib/keyring/keyring_rpc.o 00:02:17.670 CC lib/notify/notify.o 00:02:17.670 CC lib/notify/notify_rpc.o 00:02:17.977 LIB libspdk_notify.a 00:02:17.977 SO libspdk_notify.so.6.0 00:02:17.977 LIB libspdk_keyring.a 00:02:17.977 LIB libspdk_trace.a 00:02:17.977 SYMLINK libspdk_notify.so 00:02:17.977 SO libspdk_keyring.so.1.0 00:02:17.977 SO libspdk_trace.so.10.0 00:02:17.977 SYMLINK libspdk_keyring.so 00:02:17.977 SYMLINK libspdk_trace.so 00:02:18.236 LIB libspdk_env_dpdk.a 00:02:18.236 SO libspdk_env_dpdk.so.14.0 00:02:18.236 CC lib/sock/sock.o 00:02:18.236 CC lib/sock/sock_rpc.o 00:02:18.236 CC lib/thread/thread.o 00:02:18.236 CC lib/thread/iobuf.o 00:02:18.236 SYMLINK libspdk_env_dpdk.so 00:02:18.495 LIB libspdk_sock.a 00:02:18.495 SO libspdk_sock.so.9.0 00:02:18.752 SYMLINK libspdk_sock.so 00:02:18.752 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:18.752 CC lib/nvme/nvme_ctrlr.o 00:02:18.752 CC lib/nvme/nvme_fabric.o 00:02:18.752 CC lib/nvme/nvme_ns_cmd.o 00:02:18.752 CC lib/nvme/nvme_ns.o 00:02:18.752 CC lib/nvme/nvme_pcie_common.o 00:02:18.752 CC lib/nvme/nvme_pcie.o 00:02:18.752 CC lib/nvme/nvme_qpair.o 00:02:18.752 CC lib/nvme/nvme.o 00:02:18.752 CC lib/nvme/nvme_quirks.o 
00:02:18.752 CC lib/nvme/nvme_transport.o 00:02:18.752 CC lib/nvme/nvme_discovery.o 00:02:18.752 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:18.752 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:18.752 CC lib/nvme/nvme_tcp.o 00:02:18.752 CC lib/nvme/nvme_opal.o 00:02:18.752 CC lib/nvme/nvme_io_msg.o 00:02:18.752 CC lib/nvme/nvme_poll_group.o 00:02:18.752 CC lib/nvme/nvme_zns.o 00:02:18.752 CC lib/nvme/nvme_stubs.o 00:02:18.752 CC lib/nvme/nvme_auth.o 00:02:18.752 CC lib/nvme/nvme_cuse.o 00:02:18.752 CC lib/nvme/nvme_rdma.o 00:02:19.686 LIB libspdk_thread.a 00:02:19.944 SO libspdk_thread.so.10.0 00:02:19.944 SYMLINK libspdk_thread.so 00:02:19.944 CC lib/accel/accel.o 00:02:19.944 CC lib/init/json_config.o 00:02:19.944 CC lib/virtio/virtio.o 00:02:19.944 CC lib/accel/accel_rpc.o 00:02:19.944 CC lib/virtio/virtio_vhost_user.o 00:02:19.944 CC lib/blob/blobstore.o 00:02:19.944 CC lib/accel/accel_sw.o 00:02:19.944 CC lib/virtio/virtio_vfio_user.o 00:02:19.944 CC lib/blob/request.o 00:02:19.944 CC lib/init/subsystem.o 00:02:19.944 CC lib/virtio/virtio_pci.o 00:02:19.944 CC lib/blob/zeroes.o 00:02:19.944 CC lib/init/subsystem_rpc.o 00:02:19.944 CC lib/blob/blob_bs_dev.o 00:02:19.944 CC lib/init/rpc.o 00:02:20.202 LIB libspdk_init.a 00:02:20.461 SO libspdk_init.so.5.0 00:02:20.461 LIB libspdk_virtio.a 00:02:20.461 SYMLINK libspdk_init.so 00:02:20.461 SO libspdk_virtio.so.7.0 00:02:20.461 SYMLINK libspdk_virtio.so 00:02:20.461 CC lib/event/app.o 00:02:20.461 CC lib/event/reactor.o 00:02:20.461 CC lib/event/log_rpc.o 00:02:20.461 CC lib/event/app_rpc.o 00:02:20.461 CC lib/event/scheduler_static.o 00:02:21.028 LIB libspdk_event.a 00:02:21.028 SO libspdk_event.so.13.0 00:02:21.028 SYMLINK libspdk_event.so 00:02:21.028 LIB libspdk_accel.a 00:02:21.028 SO libspdk_accel.so.15.0 00:02:21.286 SYMLINK libspdk_accel.so 00:02:21.286 LIB libspdk_nvme.a 00:02:21.286 SO libspdk_nvme.so.13.0 00:02:21.286 CC lib/bdev/bdev.o 00:02:21.286 CC lib/bdev/bdev_rpc.o 00:02:21.286 CC lib/bdev/bdev_zone.o 
00:02:21.286 CC lib/bdev/part.o 00:02:21.286 CC lib/bdev/scsi_nvme.o 00:02:21.544 SYMLINK libspdk_nvme.so 00:02:22.921 LIB libspdk_blob.a 00:02:22.921 SO libspdk_blob.so.11.0 00:02:23.180 SYMLINK libspdk_blob.so 00:02:23.180 CC lib/lvol/lvol.o 00:02:23.180 CC lib/blobfs/blobfs.o 00:02:23.180 CC lib/blobfs/tree.o 00:02:23.747 LIB libspdk_bdev.a 00:02:23.747 SO libspdk_bdev.so.15.0 00:02:24.013 SYMLINK libspdk_bdev.so 00:02:24.014 LIB libspdk_blobfs.a 00:02:24.014 SO libspdk_blobfs.so.10.0 00:02:24.014 LIB libspdk_lvol.a 00:02:24.014 SO libspdk_lvol.so.10.0 00:02:24.014 CC lib/ublk/ublk.o 00:02:24.014 CC lib/scsi/dev.o 00:02:24.014 CC lib/nbd/nbd.o 00:02:24.014 CC lib/nvmf/ctrlr.o 00:02:24.014 CC lib/scsi/lun.o 00:02:24.014 CC lib/nvmf/ctrlr_discovery.o 00:02:24.014 CC lib/ublk/ublk_rpc.o 00:02:24.014 CC lib/ftl/ftl_core.o 00:02:24.014 CC lib/nbd/nbd_rpc.o 00:02:24.014 CC lib/nvmf/ctrlr_bdev.o 00:02:24.014 CC lib/ftl/ftl_init.o 00:02:24.014 CC lib/scsi/port.o 00:02:24.014 CC lib/scsi/scsi.o 00:02:24.014 CC lib/ftl/ftl_layout.o 00:02:24.014 CC lib/scsi/scsi_bdev.o 00:02:24.014 CC lib/nvmf/subsystem.o 00:02:24.014 CC lib/nvmf/nvmf.o 00:02:24.014 CC lib/ftl/ftl_debug.o 00:02:24.014 CC lib/scsi/scsi_pr.o 00:02:24.014 CC lib/nvmf/nvmf_rpc.o 00:02:24.014 CC lib/ftl/ftl_io.o 00:02:24.014 CC lib/ftl/ftl_sb.o 00:02:24.014 CC lib/scsi/scsi_rpc.o 00:02:24.014 CC lib/nvmf/transport.o 00:02:24.014 CC lib/nvmf/tcp.o 00:02:24.014 CC lib/ftl/ftl_l2p.o 00:02:24.014 CC lib/scsi/task.o 00:02:24.014 CC lib/nvmf/rdma.o 00:02:24.014 CC lib/ftl/ftl_l2p_flat.o 00:02:24.014 CC lib/ftl/ftl_nv_cache.o 00:02:24.014 CC lib/ftl/ftl_band.o 00:02:24.014 CC lib/ftl/ftl_band_ops.o 00:02:24.014 CC lib/ftl/ftl_writer.o 00:02:24.014 CC lib/ftl/ftl_rq.o 00:02:24.014 CC lib/ftl/ftl_reloc.o 00:02:24.014 CC lib/ftl/ftl_l2p_cache.o 00:02:24.014 CC lib/ftl/ftl_p2l.o 00:02:24.014 CC lib/ftl/mngt/ftl_mngt.o 00:02:24.014 SYMLINK libspdk_blobfs.so 00:02:24.014 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:24.014 CC 
lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:24.014 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:24.014 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:24.014 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:24.014 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:24.014 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:24.014 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:24.014 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:24.273 SYMLINK libspdk_lvol.so 00:02:24.273 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:24.540 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:24.540 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:24.540 CC lib/ftl/utils/ftl_conf.o 00:02:24.540 CC lib/ftl/utils/ftl_md.o 00:02:24.540 CC lib/ftl/utils/ftl_mempool.o 00:02:24.540 CC lib/ftl/utils/ftl_bitmap.o 00:02:24.540 CC lib/ftl/utils/ftl_property.o 00:02:24.540 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:24.540 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:24.540 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:24.540 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:24.540 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:24.540 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:24.540 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:24.540 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:24.540 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:24.540 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:24.540 CC lib/ftl/base/ftl_base_dev.o 00:02:24.540 CC lib/ftl/base/ftl_base_bdev.o 00:02:24.540 CC lib/ftl/ftl_trace.o 00:02:24.798 LIB libspdk_nbd.a 00:02:24.799 SO libspdk_nbd.so.7.0 00:02:24.799 SYMLINK libspdk_nbd.so 00:02:25.057 LIB libspdk_scsi.a 00:02:25.057 SO libspdk_scsi.so.9.0 00:02:25.057 SYMLINK libspdk_scsi.so 00:02:25.057 LIB libspdk_ublk.a 00:02:25.057 SO libspdk_ublk.so.3.0 00:02:25.057 SYMLINK libspdk_ublk.so 00:02:25.315 CC lib/iscsi/conn.o 00:02:25.315 CC lib/vhost/vhost.o 00:02:25.315 CC lib/iscsi/init_grp.o 00:02:25.316 CC lib/vhost/vhost_rpc.o 00:02:25.316 CC lib/iscsi/iscsi.o 00:02:25.316 CC lib/iscsi/md5.o 00:02:25.316 CC lib/vhost/vhost_scsi.o 00:02:25.316 CC lib/iscsi/param.o 00:02:25.316 CC lib/vhost/vhost_blk.o 00:02:25.316 
CC lib/iscsi/portal_grp.o 00:02:25.316 CC lib/vhost/rte_vhost_user.o 00:02:25.316 CC lib/iscsi/tgt_node.o 00:02:25.316 CC lib/iscsi/iscsi_subsystem.o 00:02:25.316 CC lib/iscsi/iscsi_rpc.o 00:02:25.316 CC lib/iscsi/task.o 00:02:25.574 LIB libspdk_ftl.a 00:02:25.574 SO libspdk_ftl.so.9.0 00:02:26.141 SYMLINK libspdk_ftl.so 00:02:26.400 LIB libspdk_vhost.a 00:02:26.400 SO libspdk_vhost.so.8.0 00:02:26.659 SYMLINK libspdk_vhost.so 00:02:26.659 LIB libspdk_nvmf.a 00:02:26.659 SO libspdk_nvmf.so.18.0 00:02:26.659 LIB libspdk_iscsi.a 00:02:26.659 SO libspdk_iscsi.so.8.0 00:02:26.917 SYMLINK libspdk_nvmf.so 00:02:26.917 SYMLINK libspdk_iscsi.so 00:02:27.175 CC module/env_dpdk/env_dpdk_rpc.o 00:02:27.175 CC module/sock/posix/posix.o 00:02:27.175 CC module/scheduler/gscheduler/gscheduler.o 00:02:27.175 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:27.175 CC module/blob/bdev/blob_bdev.o 00:02:27.175 CC module/keyring/file/keyring.o 00:02:27.175 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:27.175 CC module/accel/error/accel_error.o 00:02:27.175 CC module/keyring/file/keyring_rpc.o 00:02:27.175 CC module/accel/ioat/accel_ioat.o 00:02:27.175 CC module/accel/iaa/accel_iaa.o 00:02:27.175 CC module/accel/error/accel_error_rpc.o 00:02:27.175 CC module/accel/ioat/accel_ioat_rpc.o 00:02:27.175 CC module/accel/dsa/accel_dsa.o 00:02:27.175 CC module/accel/iaa/accel_iaa_rpc.o 00:02:27.175 CC module/accel/dsa/accel_dsa_rpc.o 00:02:27.175 LIB libspdk_env_dpdk_rpc.a 00:02:27.434 SO libspdk_env_dpdk_rpc.so.6.0 00:02:27.434 SYMLINK libspdk_env_dpdk_rpc.so 00:02:27.434 LIB libspdk_keyring_file.a 00:02:27.434 LIB libspdk_scheduler_gscheduler.a 00:02:27.434 LIB libspdk_scheduler_dpdk_governor.a 00:02:27.434 SO libspdk_keyring_file.so.1.0 00:02:27.434 SO libspdk_scheduler_gscheduler.so.4.0 00:02:27.434 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:27.434 LIB libspdk_accel_error.a 00:02:27.434 LIB libspdk_accel_ioat.a 00:02:27.434 LIB libspdk_scheduler_dynamic.a 00:02:27.434 
LIB libspdk_accel_iaa.a 00:02:27.434 SO libspdk_accel_error.so.2.0 00:02:27.434 SO libspdk_scheduler_dynamic.so.4.0 00:02:27.434 SO libspdk_accel_ioat.so.6.0 00:02:27.434 SYMLINK libspdk_scheduler_gscheduler.so 00:02:27.434 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:27.434 SYMLINK libspdk_keyring_file.so 00:02:27.434 SO libspdk_accel_iaa.so.3.0 00:02:27.434 LIB libspdk_accel_dsa.a 00:02:27.434 LIB libspdk_blob_bdev.a 00:02:27.434 SYMLINK libspdk_scheduler_dynamic.so 00:02:27.434 SYMLINK libspdk_accel_error.so 00:02:27.434 SYMLINK libspdk_accel_ioat.so 00:02:27.434 SO libspdk_accel_dsa.so.5.0 00:02:27.434 SO libspdk_blob_bdev.so.11.0 00:02:27.434 SYMLINK libspdk_accel_iaa.so 00:02:27.692 SYMLINK libspdk_accel_dsa.so 00:02:27.692 SYMLINK libspdk_blob_bdev.so 00:02:27.951 CC module/blobfs/bdev/blobfs_bdev.o 00:02:27.951 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:27.951 CC module/bdev/malloc/bdev_malloc.o 00:02:27.951 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:27.951 CC module/bdev/passthru/vbdev_passthru.o 00:02:27.951 CC module/bdev/error/vbdev_error.o 00:02:27.951 CC module/bdev/nvme/bdev_nvme.o 00:02:27.951 CC module/bdev/lvol/vbdev_lvol.o 00:02:27.951 CC module/bdev/gpt/gpt.o 00:02:27.951 CC module/bdev/error/vbdev_error_rpc.o 00:02:27.951 CC module/bdev/delay/vbdev_delay.o 00:02:27.951 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:27.951 CC module/bdev/raid/bdev_raid.o 00:02:27.951 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:27.951 CC module/bdev/null/bdev_null.o 00:02:27.951 CC module/bdev/gpt/vbdev_gpt.o 00:02:27.951 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:27.951 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:27.951 CC module/bdev/nvme/nvme_rpc.o 00:02:27.951 CC module/bdev/raid/bdev_raid_sb.o 00:02:27.952 CC module/bdev/raid/bdev_raid_rpc.o 00:02:27.952 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:27.952 CC module/bdev/nvme/bdev_mdns_client.o 00:02:27.952 CC module/bdev/split/vbdev_split.o 00:02:27.952 CC module/bdev/null/bdev_null_rpc.o 
00:02:27.952 CC module/bdev/split/vbdev_split_rpc.o 00:02:27.952 CC module/bdev/raid/raid0.o 00:02:27.952 CC module/bdev/nvme/vbdev_opal.o 00:02:27.952 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:27.952 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:27.952 CC module/bdev/iscsi/bdev_iscsi.o 00:02:27.952 CC module/bdev/raid/raid1.o 00:02:27.952 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:27.952 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:27.952 CC module/bdev/raid/concat.o 00:02:27.952 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:27.952 CC module/bdev/ftl/bdev_ftl.o 00:02:27.952 CC module/bdev/aio/bdev_aio.o 00:02:27.952 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:27.952 CC module/bdev/aio/bdev_aio_rpc.o 00:02:27.952 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:27.952 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:27.952 LIB libspdk_sock_posix.a 00:02:27.952 SO libspdk_sock_posix.so.6.0 00:02:28.210 SYMLINK libspdk_sock_posix.so 00:02:28.210 LIB libspdk_blobfs_bdev.a 00:02:28.210 SO libspdk_blobfs_bdev.so.6.0 00:02:28.210 LIB libspdk_bdev_split.a 00:02:28.210 LIB libspdk_bdev_aio.a 00:02:28.210 SO libspdk_bdev_split.so.6.0 00:02:28.210 LIB libspdk_bdev_null.a 00:02:28.210 LIB libspdk_bdev_zone_block.a 00:02:28.210 LIB libspdk_bdev_error.a 00:02:28.210 SYMLINK libspdk_blobfs_bdev.so 00:02:28.210 SO libspdk_bdev_aio.so.6.0 00:02:28.210 LIB libspdk_bdev_ftl.a 00:02:28.210 SO libspdk_bdev_null.so.6.0 00:02:28.210 SO libspdk_bdev_zone_block.so.6.0 00:02:28.210 SO libspdk_bdev_error.so.6.0 00:02:28.210 LIB libspdk_bdev_passthru.a 00:02:28.210 LIB libspdk_bdev_iscsi.a 00:02:28.468 LIB libspdk_bdev_gpt.a 00:02:28.468 SO libspdk_bdev_ftl.so.6.0 00:02:28.468 SYMLINK libspdk_bdev_split.so 00:02:28.468 SO libspdk_bdev_passthru.so.6.0 00:02:28.468 SO libspdk_bdev_iscsi.so.6.0 00:02:28.469 LIB libspdk_bdev_malloc.a 00:02:28.469 SO libspdk_bdev_gpt.so.6.0 00:02:28.469 SYMLINK libspdk_bdev_aio.so 00:02:28.469 SYMLINK libspdk_bdev_error.so 00:02:28.469 SYMLINK 
libspdk_bdev_null.so 00:02:28.469 SYMLINK libspdk_bdev_zone_block.so 00:02:28.469 LIB libspdk_bdev_delay.a 00:02:28.469 SO libspdk_bdev_malloc.so.6.0 00:02:28.469 SYMLINK libspdk_bdev_ftl.so 00:02:28.469 SYMLINK libspdk_bdev_passthru.so 00:02:28.469 SO libspdk_bdev_delay.so.6.0 00:02:28.469 SYMLINK libspdk_bdev_iscsi.so 00:02:28.469 SYMLINK libspdk_bdev_gpt.so 00:02:28.469 SYMLINK libspdk_bdev_malloc.so 00:02:28.469 SYMLINK libspdk_bdev_delay.so 00:02:28.469 LIB libspdk_bdev_lvol.a 00:02:28.469 SO libspdk_bdev_lvol.so.6.0 00:02:28.469 LIB libspdk_bdev_virtio.a 00:02:28.726 SO libspdk_bdev_virtio.so.6.0 00:02:28.726 SYMLINK libspdk_bdev_lvol.so 00:02:28.726 SYMLINK libspdk_bdev_virtio.so 00:02:28.985 LIB libspdk_bdev_raid.a 00:02:28.985 SO libspdk_bdev_raid.so.6.0 00:02:28.985 SYMLINK libspdk_bdev_raid.so 00:02:30.387 LIB libspdk_bdev_nvme.a 00:02:30.387 SO libspdk_bdev_nvme.so.7.0 00:02:30.387 SYMLINK libspdk_bdev_nvme.so 00:02:30.646 CC module/event/subsystems/keyring/keyring.o 00:02:30.646 CC module/event/subsystems/vmd/vmd.o 00:02:30.646 CC module/event/subsystems/scheduler/scheduler.o 00:02:30.646 CC module/event/subsystems/sock/sock.o 00:02:30.646 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:30.646 CC module/event/subsystems/iobuf/iobuf.o 00:02:30.646 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:30.646 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:30.646 LIB libspdk_event_keyring.a 00:02:30.646 LIB libspdk_event_sock.a 00:02:30.646 LIB libspdk_event_scheduler.a 00:02:30.646 LIB libspdk_event_vhost_blk.a 00:02:30.646 LIB libspdk_event_vmd.a 00:02:30.646 SO libspdk_event_keyring.so.1.0 00:02:30.646 SO libspdk_event_sock.so.5.0 00:02:30.646 LIB libspdk_event_iobuf.a 00:02:30.646 SO libspdk_event_scheduler.so.4.0 00:02:30.646 SO libspdk_event_vhost_blk.so.3.0 00:02:30.905 SO libspdk_event_vmd.so.6.0 00:02:30.905 SO libspdk_event_iobuf.so.3.0 00:02:30.905 SYMLINK libspdk_event_keyring.so 00:02:30.905 SYMLINK libspdk_event_sock.so 00:02:30.905 
SYMLINK libspdk_event_vhost_blk.so 00:02:30.905 SYMLINK libspdk_event_scheduler.so 00:02:30.905 SYMLINK libspdk_event_vmd.so 00:02:30.905 SYMLINK libspdk_event_iobuf.so 00:02:30.905 CC module/event/subsystems/accel/accel.o 00:02:31.163 LIB libspdk_event_accel.a 00:02:31.163 SO libspdk_event_accel.so.6.0 00:02:31.163 SYMLINK libspdk_event_accel.so 00:02:31.421 CC module/event/subsystems/bdev/bdev.o 00:02:31.679 LIB libspdk_event_bdev.a 00:02:31.679 SO libspdk_event_bdev.so.6.0 00:02:31.679 SYMLINK libspdk_event_bdev.so 00:02:31.938 CC module/event/subsystems/scsi/scsi.o 00:02:31.938 CC module/event/subsystems/ublk/ublk.o 00:02:31.938 CC module/event/subsystems/nbd/nbd.o 00:02:31.938 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:31.938 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:31.938 LIB libspdk_event_ublk.a 00:02:31.938 LIB libspdk_event_nbd.a 00:02:31.938 LIB libspdk_event_scsi.a 00:02:31.938 SO libspdk_event_ublk.so.3.0 00:02:31.938 SO libspdk_event_nbd.so.6.0 00:02:31.938 SO libspdk_event_scsi.so.6.0 00:02:31.938 SYMLINK libspdk_event_ublk.so 00:02:31.938 SYMLINK libspdk_event_nbd.so 00:02:32.196 SYMLINK libspdk_event_scsi.so 00:02:32.196 LIB libspdk_event_nvmf.a 00:02:32.196 SO libspdk_event_nvmf.so.6.0 00:02:32.196 SYMLINK libspdk_event_nvmf.so 00:02:32.196 CC module/event/subsystems/iscsi/iscsi.o 00:02:32.196 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:32.455 LIB libspdk_event_vhost_scsi.a 00:02:32.455 LIB libspdk_event_iscsi.a 00:02:32.455 SO libspdk_event_vhost_scsi.so.3.0 00:02:32.455 SO libspdk_event_iscsi.so.6.0 00:02:32.455 SYMLINK libspdk_event_vhost_scsi.so 00:02:32.455 SYMLINK libspdk_event_iscsi.so 00:02:32.721 SO libspdk.so.6.0 00:02:32.721 SYMLINK libspdk.so 00:02:32.721 TEST_HEADER include/spdk/accel.h 00:02:32.721 TEST_HEADER include/spdk/accel_module.h 00:02:32.721 CXX app/trace/trace.o 00:02:32.721 CC app/spdk_nvme_discover/discovery_aer.o 00:02:32.721 TEST_HEADER include/spdk/assert.h 00:02:32.721 TEST_HEADER 
include/spdk/barrier.h 00:02:32.721 CC app/trace_record/trace_record.o 00:02:32.721 TEST_HEADER include/spdk/base64.h 00:02:32.721 CC app/spdk_nvme_identify/identify.o 00:02:32.721 CC app/spdk_nvme_perf/perf.o 00:02:32.721 TEST_HEADER include/spdk/bdev.h 00:02:32.721 CC test/rpc_client/rpc_client_test.o 00:02:32.721 CC app/spdk_top/spdk_top.o 00:02:32.721 TEST_HEADER include/spdk/bdev_module.h 00:02:32.721 TEST_HEADER include/spdk/bdev_zone.h 00:02:32.721 TEST_HEADER include/spdk/bit_array.h 00:02:32.721 CC app/spdk_lspci/spdk_lspci.o 00:02:32.721 TEST_HEADER include/spdk/bit_pool.h 00:02:32.721 TEST_HEADER include/spdk/blob_bdev.h 00:02:32.721 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:32.982 TEST_HEADER include/spdk/blobfs.h 00:02:32.982 TEST_HEADER include/spdk/blob.h 00:02:32.982 TEST_HEADER include/spdk/conf.h 00:02:32.982 TEST_HEADER include/spdk/config.h 00:02:32.982 TEST_HEADER include/spdk/cpuset.h 00:02:32.982 TEST_HEADER include/spdk/crc16.h 00:02:32.982 TEST_HEADER include/spdk/crc32.h 00:02:32.982 TEST_HEADER include/spdk/crc64.h 00:02:32.982 TEST_HEADER include/spdk/dif.h 00:02:32.982 CC app/spdk_dd/spdk_dd.o 00:02:32.982 TEST_HEADER include/spdk/dma.h 00:02:32.982 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:32.982 TEST_HEADER include/spdk/endian.h 00:02:32.982 TEST_HEADER include/spdk/env_dpdk.h 00:02:32.982 TEST_HEADER include/spdk/env.h 00:02:32.982 TEST_HEADER include/spdk/event.h 00:02:32.982 CC app/iscsi_tgt/iscsi_tgt.o 00:02:32.982 TEST_HEADER include/spdk/fd_group.h 00:02:32.982 TEST_HEADER include/spdk/fd.h 00:02:32.982 CC app/nvmf_tgt/nvmf_main.o 00:02:32.982 TEST_HEADER include/spdk/file.h 00:02:32.982 TEST_HEADER include/spdk/ftl.h 00:02:32.982 TEST_HEADER include/spdk/gpt_spec.h 00:02:32.982 CC app/vhost/vhost.o 00:02:32.982 TEST_HEADER include/spdk/hexlify.h 00:02:32.982 TEST_HEADER include/spdk/histogram_data.h 00:02:32.982 TEST_HEADER include/spdk/idxd.h 00:02:32.982 TEST_HEADER include/spdk/idxd_spec.h 00:02:32.982 TEST_HEADER 
include/spdk/init.h 00:02:32.982 TEST_HEADER include/spdk/ioat.h 00:02:32.982 TEST_HEADER include/spdk/ioat_spec.h 00:02:32.982 TEST_HEADER include/spdk/iscsi_spec.h 00:02:32.982 CC examples/nvme/arbitration/arbitration.o 00:02:32.982 CC examples/ioat/verify/verify.o 00:02:32.982 CC app/spdk_tgt/spdk_tgt.o 00:02:32.982 CC examples/nvme/hello_world/hello_world.o 00:02:32.982 TEST_HEADER include/spdk/json.h 00:02:32.982 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:32.982 CC examples/vmd/lsvmd/lsvmd.o 00:02:32.982 CC test/app/histogram_perf/histogram_perf.o 00:02:32.982 TEST_HEADER include/spdk/jsonrpc.h 00:02:32.982 CC examples/idxd/perf/perf.o 00:02:32.982 CC examples/nvme/reconnect/reconnect.o 00:02:32.982 CC examples/nvme/hotplug/hotplug.o 00:02:32.982 TEST_HEADER include/spdk/keyring.h 00:02:32.982 CC test/app/jsoncat/jsoncat.o 00:02:32.982 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:32.982 CC examples/ioat/perf/perf.o 00:02:32.982 TEST_HEADER include/spdk/keyring_module.h 00:02:32.982 CC app/fio/nvme/fio_plugin.o 00:02:32.982 CC test/thread/poller_perf/poller_perf.o 00:02:32.982 CC examples/util/zipf/zipf.o 00:02:32.982 TEST_HEADER include/spdk/likely.h 00:02:32.982 CC examples/accel/perf/accel_perf.o 00:02:32.982 CC test/event/event_perf/event_perf.o 00:02:32.982 CC examples/sock/hello_world/hello_sock.o 00:02:32.982 CC examples/nvme/abort/abort.o 00:02:32.982 TEST_HEADER include/spdk/log.h 00:02:32.982 TEST_HEADER include/spdk/lvol.h 00:02:32.982 TEST_HEADER include/spdk/memory.h 00:02:32.982 TEST_HEADER include/spdk/mmio.h 00:02:32.982 TEST_HEADER include/spdk/nbd.h 00:02:32.983 TEST_HEADER include/spdk/notify.h 00:02:32.983 TEST_HEADER include/spdk/nvme.h 00:02:32.983 TEST_HEADER include/spdk/nvme_intel.h 00:02:32.983 CC test/nvme/aer/aer.o 00:02:32.983 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:32.983 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:32.983 CC examples/thread/thread/thread_ex.o 00:02:32.983 TEST_HEADER include/spdk/nvme_spec.h 
00:02:32.983 CC examples/bdev/hello_world/hello_bdev.o 00:02:32.983 TEST_HEADER include/spdk/nvme_zns.h 00:02:32.983 CC test/accel/dif/dif.o 00:02:32.983 CC test/bdev/bdevio/bdevio.o 00:02:32.983 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:32.983 CC test/blobfs/mkfs/mkfs.o 00:02:32.983 CC examples/blob/hello_world/hello_blob.o 00:02:32.983 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:32.983 CC test/dma/test_dma/test_dma.o 00:02:32.983 TEST_HEADER include/spdk/nvmf.h 00:02:32.983 CC examples/bdev/bdevperf/bdevperf.o 00:02:32.983 CC test/app/bdev_svc/bdev_svc.o 00:02:32.983 TEST_HEADER include/spdk/nvmf_spec.h 00:02:32.983 TEST_HEADER include/spdk/nvmf_transport.h 00:02:32.983 CC examples/nvmf/nvmf/nvmf.o 00:02:32.983 TEST_HEADER include/spdk/opal.h 00:02:32.983 TEST_HEADER include/spdk/opal_spec.h 00:02:32.983 TEST_HEADER include/spdk/pci_ids.h 00:02:32.983 TEST_HEADER include/spdk/pipe.h 00:02:32.983 TEST_HEADER include/spdk/queue.h 00:02:32.983 TEST_HEADER include/spdk/reduce.h 00:02:33.249 TEST_HEADER include/spdk/rpc.h 00:02:33.249 TEST_HEADER include/spdk/scheduler.h 00:02:33.249 TEST_HEADER include/spdk/scsi.h 00:02:33.249 LINK spdk_lspci 00:02:33.249 TEST_HEADER include/spdk/scsi_spec.h 00:02:33.249 TEST_HEADER include/spdk/sock.h 00:02:33.249 TEST_HEADER include/spdk/stdinc.h 00:02:33.249 CC test/lvol/esnap/esnap.o 00:02:33.249 CC test/env/mem_callbacks/mem_callbacks.o 00:02:33.249 TEST_HEADER include/spdk/string.h 00:02:33.249 TEST_HEADER include/spdk/thread.h 00:02:33.249 TEST_HEADER include/spdk/trace.h 00:02:33.249 TEST_HEADER include/spdk/trace_parser.h 00:02:33.249 TEST_HEADER include/spdk/tree.h 00:02:33.249 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:33.249 TEST_HEADER include/spdk/ublk.h 00:02:33.249 TEST_HEADER include/spdk/util.h 00:02:33.249 TEST_HEADER include/spdk/uuid.h 00:02:33.249 TEST_HEADER include/spdk/version.h 00:02:33.249 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:33.249 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:33.249 
TEST_HEADER include/spdk/vhost.h 00:02:33.249 TEST_HEADER include/spdk/vmd.h 00:02:33.249 TEST_HEADER include/spdk/xor.h 00:02:33.249 TEST_HEADER include/spdk/zipf.h 00:02:33.249 CXX test/cpp_headers/accel.o 00:02:33.249 LINK rpc_client_test 00:02:33.249 LINK spdk_nvme_discover 00:02:33.249 LINK lsvmd 00:02:33.249 LINK jsoncat 00:02:33.249 LINK interrupt_tgt 00:02:33.249 LINK histogram_perf 00:02:33.249 LINK nvmf_tgt 00:02:33.249 LINK poller_perf 00:02:33.249 LINK event_perf 00:02:33.249 LINK zipf 00:02:33.249 LINK spdk_trace_record 00:02:33.249 LINK cmb_copy 00:02:33.249 LINK vhost 00:02:33.249 LINK iscsi_tgt 00:02:33.523 LINK verify 00:02:33.523 LINK spdk_tgt 00:02:33.523 LINK hello_world 00:02:33.523 LINK ioat_perf 00:02:33.523 LINK hello_sock 00:02:33.523 LINK bdev_svc 00:02:33.523 LINK mkfs 00:02:33.523 LINK hotplug 00:02:33.524 LINK hello_blob 00:02:33.524 LINK hello_bdev 00:02:33.524 LINK thread 00:02:33.524 CXX test/cpp_headers/accel_module.o 00:02:33.524 LINK arbitration 00:02:33.524 LINK spdk_dd 00:02:33.524 LINK aer 00:02:33.524 CC test/event/reactor/reactor.o 00:02:33.524 LINK reconnect 00:02:33.524 CXX test/cpp_headers/assert.o 00:02:33.524 LINK nvmf 00:02:33.524 LINK idxd_perf 00:02:33.787 CC test/event/reactor_perf/reactor_perf.o 00:02:33.787 LINK abort 00:02:33.787 CC test/app/stub/stub.o 00:02:33.787 LINK spdk_trace 00:02:33.787 CC examples/vmd/led/led.o 00:02:33.787 CXX test/cpp_headers/barrier.o 00:02:33.787 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:33.787 CXX test/cpp_headers/base64.o 00:02:33.787 CXX test/cpp_headers/bdev.o 00:02:33.787 LINK dif 00:02:33.787 LINK bdevio 00:02:33.787 CC examples/blob/cli/blobcli.o 00:02:33.787 LINK test_dma 00:02:33.787 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:33.787 CC app/fio/bdev/fio_plugin.o 00:02:33.787 CC test/event/app_repeat/app_repeat.o 00:02:33.787 CC test/nvme/reset/reset.o 00:02:33.787 CC test/env/vtophys/vtophys.o 00:02:33.787 CC test/nvme/sgl/sgl.o 00:02:33.787 LINK accel_perf 
00:02:33.787 LINK nvme_manage 00:02:33.787 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:33.787 LINK reactor 00:02:34.053 CXX test/cpp_headers/bdev_module.o 00:02:34.053 CXX test/cpp_headers/bdev_zone.o 00:02:34.053 CC test/nvme/e2edp/nvme_dp.o 00:02:34.053 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:34.053 CXX test/cpp_headers/bit_array.o 00:02:34.053 CC test/event/scheduler/scheduler.o 00:02:34.053 CXX test/cpp_headers/bit_pool.o 00:02:34.053 LINK nvme_fuzz 00:02:34.053 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:34.053 CC test/nvme/overhead/overhead.o 00:02:34.053 CXX test/cpp_headers/blob_bdev.o 00:02:34.053 LINK reactor_perf 00:02:34.053 CXX test/cpp_headers/blobfs_bdev.o 00:02:34.053 LINK spdk_nvme 00:02:34.053 CC test/env/memory/memory_ut.o 00:02:34.053 LINK led 00:02:34.053 CC test/env/pci/pci_ut.o 00:02:34.053 LINK stub 00:02:34.053 CC test/nvme/err_injection/err_injection.o 00:02:34.053 LINK app_repeat 00:02:34.053 LINK vtophys 00:02:34.053 LINK pmr_persistence 00:02:34.053 CXX test/cpp_headers/blobfs.o 00:02:34.053 CXX test/cpp_headers/blob.o 00:02:34.317 CC test/nvme/startup/startup.o 00:02:34.317 CXX test/cpp_headers/conf.o 00:02:34.317 CC test/nvme/reserve/reserve.o 00:02:34.317 CC test/nvme/connect_stress/connect_stress.o 00:02:34.317 CXX test/cpp_headers/config.o 00:02:34.317 CXX test/cpp_headers/cpuset.o 00:02:34.317 CC test/nvme/simple_copy/simple_copy.o 00:02:34.317 CXX test/cpp_headers/crc16.o 00:02:34.317 CXX test/cpp_headers/crc32.o 00:02:34.317 CC test/nvme/boot_partition/boot_partition.o 00:02:34.317 CC test/nvme/compliance/nvme_compliance.o 00:02:34.317 CXX test/cpp_headers/crc64.o 00:02:34.317 CXX test/cpp_headers/dif.o 00:02:34.317 CXX test/cpp_headers/dma.o 00:02:34.317 CC test/nvme/fused_ordering/fused_ordering.o 00:02:34.317 CXX test/cpp_headers/endian.o 00:02:34.317 LINK reset 00:02:34.317 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:34.317 CC test/nvme/fdp/fdp.o 00:02:34.317 CXX test/cpp_headers/env_dpdk.o 
00:02:34.317 LINK env_dpdk_post_init 00:02:34.317 LINK spdk_nvme_perf 00:02:34.317 LINK mem_callbacks 00:02:34.317 CC test/nvme/cuse/cuse.o 00:02:34.317 CXX test/cpp_headers/env.o 00:02:34.317 CXX test/cpp_headers/event.o 00:02:34.317 LINK sgl 00:02:34.317 CXX test/cpp_headers/fd_group.o 00:02:34.317 CXX test/cpp_headers/fd.o 00:02:34.317 LINK scheduler 00:02:34.317 LINK spdk_nvme_identify 00:02:34.577 CXX test/cpp_headers/file.o 00:02:34.577 CXX test/cpp_headers/ftl.o 00:02:34.577 CXX test/cpp_headers/gpt_spec.o 00:02:34.577 LINK nvme_dp 00:02:34.577 CXX test/cpp_headers/hexlify.o 00:02:34.577 LINK spdk_top 00:02:34.577 LINK startup 00:02:34.577 LINK err_injection 00:02:34.577 CXX test/cpp_headers/histogram_data.o 00:02:34.577 LINK connect_stress 00:02:34.577 LINK overhead 00:02:34.577 CXX test/cpp_headers/idxd.o 00:02:34.577 CXX test/cpp_headers/idxd_spec.o 00:02:34.577 CXX test/cpp_headers/init.o 00:02:34.577 LINK bdevperf 00:02:34.577 CXX test/cpp_headers/ioat.o 00:02:34.577 LINK boot_partition 00:02:34.577 LINK reserve 00:02:34.577 CXX test/cpp_headers/ioat_spec.o 00:02:34.577 CXX test/cpp_headers/iscsi_spec.o 00:02:34.577 CXX test/cpp_headers/json.o 00:02:34.577 CXX test/cpp_headers/jsonrpc.o 00:02:34.577 CXX test/cpp_headers/keyring.o 00:02:34.577 CXX test/cpp_headers/keyring_module.o 00:02:34.577 LINK simple_copy 00:02:34.577 LINK blobcli 00:02:34.577 CXX test/cpp_headers/likely.o 00:02:34.577 CXX test/cpp_headers/log.o 00:02:34.843 CXX test/cpp_headers/lvol.o 00:02:34.843 CXX test/cpp_headers/memory.o 00:02:34.843 LINK fused_ordering 00:02:34.843 LINK doorbell_aers 00:02:34.843 LINK spdk_bdev 00:02:34.843 CXX test/cpp_headers/mmio.o 00:02:34.843 CXX test/cpp_headers/nbd.o 00:02:34.843 LINK pci_ut 00:02:34.843 CXX test/cpp_headers/notify.o 00:02:34.843 CXX test/cpp_headers/nvme.o 00:02:34.843 CXX test/cpp_headers/nvme_intel.o 00:02:34.843 CXX test/cpp_headers/nvme_ocssd.o 00:02:34.843 LINK vhost_fuzz 00:02:34.843 CXX test/cpp_headers/nvme_ocssd_spec.o 
00:02:34.843 CXX test/cpp_headers/nvme_spec.o
00:02:34.843 CXX test/cpp_headers/nvme_zns.o
00:02:34.843 CXX test/cpp_headers/nvmf_cmd.o
00:02:34.843 CXX test/cpp_headers/nvmf_fc_spec.o
00:02:34.843 CXX test/cpp_headers/nvmf.o
00:02:34.843 CXX test/cpp_headers/nvmf_spec.o
00:02:34.843 CXX test/cpp_headers/nvmf_transport.o
00:02:34.843 CXX test/cpp_headers/opal.o
00:02:34.843 CXX test/cpp_headers/opal_spec.o
00:02:34.843 CXX test/cpp_headers/pci_ids.o
00:02:34.843 CXX test/cpp_headers/pipe.o
00:02:34.843 CXX test/cpp_headers/queue.o
00:02:34.843 CXX test/cpp_headers/reduce.o
00:02:34.843 CXX test/cpp_headers/rpc.o
00:02:34.843 CXX test/cpp_headers/scheduler.o
00:02:34.843 CXX test/cpp_headers/scsi.o
00:02:34.843 CXX test/cpp_headers/scsi_spec.o
00:02:34.843 LINK nvme_compliance
00:02:35.102 CXX test/cpp_headers/sock.o
00:02:35.102 CXX test/cpp_headers/stdinc.o
00:02:35.102 CXX test/cpp_headers/string.o
00:02:35.102 LINK fdp
00:02:35.102 CXX test/cpp_headers/thread.o
00:02:35.102 CXX test/cpp_headers/trace.o
00:02:35.102 CXX test/cpp_headers/trace_parser.o
00:02:35.102 CXX test/cpp_headers/tree.o
00:02:35.102 CXX test/cpp_headers/ublk.o
00:02:35.102 CXX test/cpp_headers/util.o
00:02:35.102 CXX test/cpp_headers/uuid.o
00:02:35.102 CXX test/cpp_headers/version.o
00:02:35.102 CXX test/cpp_headers/vfio_user_pci.o
00:02:35.102 CXX test/cpp_headers/vfio_user_spec.o
00:02:35.102 CXX test/cpp_headers/vhost.o
00:02:35.102 CXX test/cpp_headers/vmd.o
00:02:35.102 CXX test/cpp_headers/xor.o
00:02:35.102 CXX test/cpp_headers/zipf.o
00:02:35.668 LINK memory_ut
00:02:35.925 LINK cuse
00:02:36.183 LINK iscsi_fuzz
00:02:38.712 LINK esnap
00:02:38.970 
00:02:38.970 real 0m47.200s
00:02:38.970 user 9m46.538s
00:02:38.970 sys 2m19.374s
00:02:38.970 03:03:13 -- common/autotest_common.sh@1112 -- $ xtrace_disable
00:02:38.970 03:03:13 -- common/autotest_common.sh@10 -- $ set +x
00:02:38.970 ************************************
00:02:38.970 END TEST make
00:02:38.970 ************************************
00:02:38.970 03:03:13 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources
00:02:38.970 03:03:13 -- pm/common@30 -- $ signal_monitor_resources TERM
00:02:38.970 03:03:13 -- pm/common@41 -- $ local monitor pid pids signal=TERM
00:02:38.970 03:03:13 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:38.970 03:03:13 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:02:38.970 03:03:13 -- pm/common@45 -- $ pid=1286982
00:02:38.970 03:03:13 -- pm/common@52 -- $ sudo kill -TERM 1286982
00:02:39.229 03:03:13 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:39.229 03:03:13 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:02:39.229 03:03:13 -- pm/common@45 -- $ pid=1286981
00:02:39.229 03:03:13 -- pm/common@52 -- $ sudo kill -TERM 1286981
00:02:39.229 03:03:13 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:39.229 03:03:13 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:02:39.229 03:03:13 -- pm/common@45 -- $ pid=1286979
00:02:39.229 03:03:13 -- pm/common@52 -- $ sudo kill -TERM 1286979
00:02:39.229 03:03:13 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:39.229 03:03:13 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:02:39.229 03:03:13 -- pm/common@45 -- $ pid=1286980
00:02:39.229 03:03:13 -- pm/common@52 -- $ sudo kill -TERM 1286980
00:02:39.229 03:03:13 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:02:39.229 03:03:13 -- nvmf/common.sh@7 -- # uname -s
00:02:39.229 03:03:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:02:39.229 03:03:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:02:39.229 03:03:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:02:39.229 03:03:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:02:39.229 03:03:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:02:39.229 03:03:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:02:39.229 03:03:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:02:39.229 03:03:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:02:39.229 03:03:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:02:39.229 03:03:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:02:39.229 03:03:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:02:39.229 03:03:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:02:39.229 03:03:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:02:39.229 03:03:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:02:39.229 03:03:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:02:39.229 03:03:13 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:02:39.229 03:03:13 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:02:39.229 03:03:13 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:02:39.229 03:03:13 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:39.229 03:03:13 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:39.229 03:03:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:39.229 03:03:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:39.229 03:03:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:39.229 03:03:13 -- paths/export.sh@5 -- # export PATH
00:02:39.229 03:03:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:39.229 03:03:13 -- nvmf/common.sh@47 -- # : 0
00:02:39.229 03:03:13 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:02:39.229 03:03:13 -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:02:39.229 03:03:13 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:02:39.229 03:03:13 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:02:39.229 03:03:13 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:02:39.229 03:03:13 -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:02:39.229 03:03:13 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:02:39.229 03:03:13 -- nvmf/common.sh@51 -- # have_pci_nics=0
00:02:39.229 03:03:13 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']'
00:02:39.229 03:03:13 -- spdk/autotest.sh@32 -- # uname -s
00:02:39.229 03:03:13 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']'
00:02:39.229 03:03:13 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h'
00:02:39.229 03:03:13 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps
00:02:39.229 03:03:13 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t'
00:02:39.229 03:03:13 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps
00:02:39.229 03:03:13 -- spdk/autotest.sh@44 -- # modprobe nbd
00:02:39.229 03:03:13 -- spdk/autotest.sh@46 -- # type -P udevadm
00:02:39.229 03:03:13 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm
00:02:39.229 03:03:13 -- spdk/autotest.sh@48 -- # udevadm_pid=1341825
00:02:39.229 03:03:13 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property
00:02:39.229 03:03:13 -- spdk/autotest.sh@53 -- # start_monitor_resources
00:02:39.229 03:03:13 -- pm/common@17 -- # local monitor
00:02:39.229 03:03:13 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:02:39.229 03:03:13 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=1341828
00:02:39.229 03:03:13 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:02:39.229 03:03:13 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=1341830
00:02:39.229 03:03:13 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:02:39.229 03:03:13 -- pm/common@21 -- # date +%s
00:02:39.229 03:03:13 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=1341832
00:02:39.229 03:03:13 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:02:39.229 03:03:13 -- pm/common@21 -- # date +%s
00:02:39.229 03:03:13 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=1341836
00:02:39.229 03:03:13 -- pm/common@26 -- # sleep 1
00:02:39.229 03:03:13 -- pm/common@21 -- # date +%s
00:02:39.229 03:03:13 -- pm/common@21 -- # date +%s
00:02:39.229 03:03:13 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1714006993
00:02:39.229 03:03:13 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1714006993
00:02:39.229 03:03:13 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1714006993
00:02:39.229 03:03:13 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1714006993
00:02:39.229 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1714006993_collect-vmstat.pm.log
00:02:39.229 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1714006993_collect-bmc-pm.bmc.pm.log
00:02:39.229 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1714006993_collect-cpu-load.pm.log
00:02:39.229 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1714006993_collect-cpu-temp.pm.log
00:02:40.166 03:03:14 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT
00:02:40.166 03:03:14 -- spdk/autotest.sh@57 -- # timing_enter autotest
00:02:40.166 03:03:14 -- common/autotest_common.sh@710 -- # xtrace_disable
00:02:40.166 03:03:14 -- common/autotest_common.sh@10 -- # set +x
00:02:40.166 03:03:14 -- spdk/autotest.sh@59 -- # create_test_list
00:02:40.166 03:03:14 -- common/autotest_common.sh@734 -- # xtrace_disable
00:02:40.166 03:03:14 -- common/autotest_common.sh@10 -- # set +x
00:02:40.425 03:03:14 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh
00:02:40.425 03:03:14 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:40.425 03:03:14 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:40.425 03:03:14 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:02:40.425 03:03:14 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:40.425 03:03:14 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod
00:02:40.425 03:03:14 -- common/autotest_common.sh@1441 -- # uname
00:02:40.425 03:03:14 -- common/autotest_common.sh@1441 -- # '[' Linux = FreeBSD ']'
00:02:40.425 03:03:14 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf
00:02:40.425 03:03:14 -- common/autotest_common.sh@1461 -- # uname
00:02:40.425 03:03:14 -- common/autotest_common.sh@1461 -- # [[ Linux = FreeBSD ]]
00:02:40.425 03:03:14 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk
00:02:40.425 03:03:14 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc
00:02:40.425 03:03:14 -- spdk/autotest.sh@72 -- # hash lcov
00:02:40.425 03:03:14 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:02:40.425 03:03:14 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS=
00:02:40.425 --rc lcov_branch_coverage=1
00:02:40.425 --rc lcov_function_coverage=1
00:02:40.425 --rc genhtml_branch_coverage=1
00:02:40.425 --rc genhtml_function_coverage=1
00:02:40.425 --rc genhtml_legend=1
00:02:40.425 --rc geninfo_all_blocks=1
00:02:40.425 '
00:02:40.425 03:03:14 -- spdk/autotest.sh@80 -- # LCOV_OPTS='
00:02:40.425 --rc lcov_branch_coverage=1
00:02:40.425 --rc lcov_function_coverage=1
00:02:40.425 --rc genhtml_branch_coverage=1
00:02:40.425 --rc genhtml_function_coverage=1
00:02:40.425 --rc genhtml_legend=1
00:02:40.425 --rc geninfo_all_blocks=1
00:02:40.425 '
00:02:40.425 03:03:14 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov
00:02:40.425 --rc lcov_branch_coverage=1
00:02:40.425 --rc lcov_function_coverage=1
00:02:40.425 --rc genhtml_branch_coverage=1
00:02:40.425 --rc genhtml_function_coverage=1
00:02:40.425 --rc genhtml_legend=1
00:02:40.425 --rc geninfo_all_blocks=1
00:02:40.425 --no-external'
00:02:40.425 03:03:14 -- spdk/autotest.sh@81 -- # LCOV='lcov
00:02:40.425 --rc lcov_branch_coverage=1
00:02:40.425 --rc lcov_function_coverage=1
00:02:40.425 --rc genhtml_branch_coverage=1
00:02:40.425 --rc genhtml_function_coverage=1
00:02:40.425 --rc genhtml_legend=1
00:02:40.425 --rc geninfo_all_blocks=1
00:02:40.425 --no-external'
00:02:40.425 03:03:14 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v
00:02:40.425 lcov: LCOV version 1.14
00:02:40.425 03:03:14 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info
00:02:50.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found
00:02:50.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno
00:02:50.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found
00:02:50.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno
00:02:50.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found
00:02:50.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno
00:02:50.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found
00:02:50.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno
00:02:50.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found
00:02:50.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno
00:02:50.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found
00:02:50.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno
00:02:50.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found
00:02:50.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno
00:02:50.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found
00:02:50.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno
00:02:50.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found
00:02:50.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno
00:02:50.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found
00:02:50.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno
00:02:50.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found
00:02:50.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno
00:02:50.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found
00:02:50.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno
00:02:50.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found
00:02:50.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno
00:02:50.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found
00:02:50.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno
00:02:50.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found
00:02:50.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno
00:02:50.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found
00:02:50.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno
00:02:50.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found
00:02:50.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno
00:02:50.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found
00:02:50.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno
00:02:50.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found
00:02:50.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno
00:02:50.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found
00:02:50.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno
00:02:50.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found
00:02:50.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno
00:02:50.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found
00:02:50.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno
00:02:50.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found
00:02:50.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno
00:02:50.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found
00:02:50.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno
00:02:50.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found
00:02:50.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno
00:02:50.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found
00:02:50.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno
00:02:50.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found
00:02:50.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:50.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:50.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:50.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:50.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:50.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:50.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:50.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:50.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:50.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:50.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:50.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:50.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:50.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:50.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:50.403 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:50.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:50.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:50.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:50.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:50.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:50.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:50.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:50.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:50.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:50.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:50.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:50.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:50.404 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:50.404 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:50.404 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:50.404 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:50.404 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:50.404 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:50.404 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:50.404 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:50.404 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:50.404 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:50.404 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:50.404 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:50.404 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:50.404 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:50.404 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:50.404 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:50.404 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:50.404 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:50.404 geninfo: WARNING: 
GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:50.404 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:50.404 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:50.404 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:50.404 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:50.404 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:50.404 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:50.404 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:50.404 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:50.404 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:50.404 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:50.404 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:50.404 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:50.404 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:50.404 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:50.404 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:50.404 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:50.404 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:50.404 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:50.404 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:50.404 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:50.404 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:50.404 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:50.404 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:50.404 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:50.404 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:50.404 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:50.404 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:50.404 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:50.404 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:50.404 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:50.404 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:50.404 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:50.404 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:50.404 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:50.404 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:50.404 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:50.404 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:50.404 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:50.404 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:50.404 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:50.404 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:50.404 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:50.404 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:50.404 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:50.404 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:50.404 geninfo: WARNING: 
GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:50.404 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:50.404 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:50.404 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:50.404 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:50.404 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:50.404 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:50.404 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:50.404 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:50.404 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:50.404 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:50.404 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:50.404 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:50.404 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:50.404 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:50.404 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no 
functions found 00:02:50.404 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:50.404 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:50.404 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:50.404 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:50.404 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:50.404 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:50.404 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:50.404 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:50.404 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:50.404 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:50.404 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:50.404 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:50.404 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:50.404 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:50.404 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:55.667 
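The "no functions found" warnings above are expected for the cpp_headers units: each public header is compiled in isolation, so its .gcno file contains no instrumented functions and geninfo has nothing to record. A minimal sketch (hypothetical paths, not the actual autotest layout) of pruning .gcno files that never produced runtime .gcda data, which is one way to silence these warnings before invoking geninfo:

```shell
#!/usr/bin/env bash
# Sketch: keep only .gcno files whose matching .gcda exists, i.e. the
# compile unit actually executed instrumented functions. Header-only
# objects (like the cpp_headers units) have no .gcda and are dropped.
set -euo pipefail

workdir=$(mktemp -d)
trap 'rm -rf "$workdir"' EXIT

# Simulate a coverage tree: one real unit plus two header-only units.
touch "$workdir/nvme.gcno" "$workdir/nvme.gcda"
touch "$workdir/queue.gcno" "$workdir/zipf.gcno"

kept=()
while IFS= read -r -d '' gcno; do
    if [[ -e "${gcno%.gcno}.gcda" ]]; then
        kept+=("$gcno")
    else
        rm -f "$gcno"   # geninfo would only warn about these files
    fi
done < <(find "$workdir" -name '*.gcno' -print0)

printf '%s\n' "${kept[@]}"
```

This is a sketch of the general gcov workflow, not a documented autotest option; the log shows the warnings are harmless and the run proceeds normally.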
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:55.667 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:07.868 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:03:07.868 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:03:07.868 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:03:07.868 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:03:07.868 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:03:07.868 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:03:17.857 03:03:51 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:17.857 03:03:51 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:17.857 03:03:51 -- common/autotest_common.sh@10 -- # set +x 00:03:17.857 03:03:51 -- spdk/autotest.sh@91 -- # rm -f 00:03:17.857 03:03:51 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:18.794 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:03:18.794 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:03:18.795 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:03:18.795 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:03:18.795 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:03:18.795 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:03:18.795 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:03:18.795 0000:00:04.1 (8086 0e21): Already 
using the ioatdma driver 00:03:18.795 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:03:18.795 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:03:18.795 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:03:18.795 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:03:18.795 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:03:18.795 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:03:18.795 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:03:18.795 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:03:18.795 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:03:18.795 03:03:53 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:18.795 03:03:53 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:18.795 03:03:53 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:18.795 03:03:53 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:18.795 03:03:53 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:18.795 03:03:53 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:18.795 03:03:53 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:18.795 03:03:53 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:18.795 03:03:53 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:18.795 03:03:53 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:18.795 03:03:53 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:18.795 03:03:53 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:18.795 03:03:53 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:18.795 03:03:53 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:18.795 03:03:53 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:19.054 No valid GPT data, bailing 00:03:19.055 03:03:53 -- scripts/common.sh@391 -- # 
blkid -s PTTYPE -o value /dev/nvme0n1 00:03:19.055 03:03:53 -- scripts/common.sh@391 -- # pt= 00:03:19.055 03:03:53 -- scripts/common.sh@392 -- # return 1 00:03:19.055 03:03:53 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:19.055 1+0 records in 00:03:19.055 1+0 records out 00:03:19.055 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00473608 s, 221 MB/s 00:03:19.055 03:03:53 -- spdk/autotest.sh@118 -- # sync 00:03:19.055 03:03:53 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:19.055 03:03:53 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:19.055 03:03:53 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:20.997 03:03:55 -- spdk/autotest.sh@124 -- # uname -s 00:03:20.997 03:03:55 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:20.997 03:03:55 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:20.997 03:03:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:20.997 03:03:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:20.997 03:03:55 -- common/autotest_common.sh@10 -- # set +x 00:03:20.997 ************************************ 00:03:20.997 START TEST setup.sh 00:03:20.997 ************************************ 00:03:20.997 03:03:55 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:20.997 * Looking for test storage... 
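The pre-cleanup trace above follows a probe-then-wipe pattern: spdk-gpt.py and `blkid -s PTTYPE -o value` check the namespace for a partition table, and when nothing is found ("No valid GPT data, bailing", empty `pt=`), the first MiB is zeroed with dd so stale metadata cannot leak into later tests. A hedged sketch of the same decision against a plain file standing in for /dev/nvme0n1 (`probe_pttype` is a stub, since blkid needs root and a real block device):

```shell
#!/usr/bin/env bash
# Sketch of the "block_in_use -> wipe" flow seen in scripts/common.sh.
set -euo pipefail

dev=$(mktemp)              # stand-in for /dev/nvme0n1
trap 'rm -f "$dev"' EXIT

probe_pttype() {
    # A real implementation would run: blkid -s PTTYPE -o value "$1"
    # Empty output means no partition table was detected.
    printf ''
}

pt=$(probe_pttype "$dev" || true)
if [[ -z "$pt" ]]; then
    # No valid partition table: zero the first MiB, mirroring the log's
    # `dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1`.
    dd if=/dev/zero of="$dev" bs=1M count=1 status=none
fi

size=$(wc -c < "$dev")
echo "wiped ${size} bytes"
```

The wipe only fires when the probe comes back empty, which matches the `pt=` / `return 1` branch in the trace.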
00:03:20.997 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:20.997 03:03:55 -- setup/test-setup.sh@10 -- # uname -s 00:03:20.997 03:03:55 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:20.997 03:03:55 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:20.997 03:03:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:20.997 03:03:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:20.997 03:03:55 -- common/autotest_common.sh@10 -- # set +x 00:03:20.997 ************************************ 00:03:20.997 START TEST acl 00:03:20.997 ************************************ 00:03:20.997 03:03:55 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:20.997 * Looking for test storage... 00:03:20.997 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:20.997 03:03:55 -- setup/acl.sh@10 -- # get_zoned_devs 00:03:20.997 03:03:55 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:20.997 03:03:55 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:20.997 03:03:55 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:20.997 03:03:55 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:20.997 03:03:55 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:20.997 03:03:55 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:20.997 03:03:55 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:20.997 03:03:55 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:20.997 03:03:55 -- setup/acl.sh@12 -- # devs=() 00:03:20.997 03:03:55 -- setup/acl.sh@12 -- # declare -a devs 00:03:20.997 03:03:55 -- setup/acl.sh@13 -- # drivers=() 00:03:20.997 03:03:55 -- setup/acl.sh@13 -- # declare -A drivers 00:03:20.997 03:03:55 -- setup/acl.sh@51 -- # 
setup reset 00:03:20.997 03:03:55 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:20.997 03:03:55 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:22.375 03:03:56 -- setup/acl.sh@52 -- # collect_setup_devs 00:03:22.375 03:03:56 -- setup/acl.sh@16 -- # local dev driver 00:03:22.375 03:03:56 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.375 03:03:56 -- setup/acl.sh@15 -- # setup output status 00:03:22.375 03:03:56 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:22.375 03:03:56 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:23.754 Hugepages 00:03:23.754 node hugesize free / total 00:03:23.754 03:03:57 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:23.754 03:03:57 -- setup/acl.sh@19 -- # continue 00:03:23.754 03:03:57 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:23.754 03:03:57 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:23.754 03:03:57 -- setup/acl.sh@19 -- # continue 00:03:23.754 03:03:57 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:23.754 03:03:57 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:23.754 03:03:57 -- setup/acl.sh@19 -- # continue 00:03:23.754 03:03:57 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:23.754 00:03:23.754 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:23.754 03:03:57 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:23.754 03:03:57 -- setup/acl.sh@19 -- # continue 00:03:23.754 03:03:57 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:23.755 03:03:57 -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:23.755 03:03:57 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:23.755 03:03:57 -- setup/acl.sh@20 -- # continue 00:03:23.755 03:03:57 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:23.755 03:03:57 -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:23.755 03:03:57 -- 
setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:23.755 03:03:57 -- setup/acl.sh@20 -- # continue 00:03:23.755 03:03:57 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:23.755 03:03:57 -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:23.755 03:03:57 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:23.755 03:03:57 -- setup/acl.sh@20 -- # continue 00:03:23.755 03:03:57 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:23.755 03:03:57 -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:23.755 03:03:57 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:23.755 03:03:57 -- setup/acl.sh@20 -- # continue 00:03:23.755 03:03:57 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:23.755 03:03:57 -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:23.755 03:03:57 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:23.755 03:03:57 -- setup/acl.sh@20 -- # continue 00:03:23.755 03:03:57 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:23.755 03:03:57 -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:23.755 03:03:57 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:23.755 03:03:57 -- setup/acl.sh@20 -- # continue 00:03:23.755 03:03:57 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:23.755 03:03:57 -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:23.755 03:03:57 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:23.755 03:03:57 -- setup/acl.sh@20 -- # continue 00:03:23.755 03:03:57 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:23.755 03:03:57 -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:23.755 03:03:57 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:23.755 03:03:57 -- setup/acl.sh@20 -- # continue 00:03:23.755 03:03:57 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:23.755 03:03:57 -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:23.755 03:03:57 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:23.755 03:03:57 -- 
setup/acl.sh@20 -- # continue 00:03:23.755 03:03:57 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:23.755 03:03:57 -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:23.755 03:03:57 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:23.755 03:03:57 -- setup/acl.sh@20 -- # continue 00:03:23.755 03:03:57 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:23.755 03:03:57 -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:23.755 03:03:57 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:23.755 03:03:57 -- setup/acl.sh@20 -- # continue 00:03:23.755 03:03:57 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:23.755 03:03:57 -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:23.755 03:03:57 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:23.755 03:03:57 -- setup/acl.sh@20 -- # continue 00:03:23.755 03:03:57 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:23.755 03:03:57 -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:23.755 03:03:57 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:23.755 03:03:57 -- setup/acl.sh@20 -- # continue 00:03:23.755 03:03:57 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:23.755 03:03:57 -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:23.755 03:03:57 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:23.755 03:03:57 -- setup/acl.sh@20 -- # continue 00:03:23.755 03:03:57 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:23.755 03:03:57 -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:23.755 03:03:57 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:23.755 03:03:57 -- setup/acl.sh@20 -- # continue 00:03:23.755 03:03:57 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:23.755 03:03:57 -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:23.755 03:03:57 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:23.755 03:03:57 -- setup/acl.sh@20 -- # continue 00:03:23.755 03:03:57 -- setup/acl.sh@18 -- # read 
-r _ dev _ _ _ driver _ 00:03:23.755 03:03:58 -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]] 00:03:23.755 03:03:58 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:23.755 03:03:58 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:03:23.755 03:03:58 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:23.755 03:03:58 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:23.755 03:03:58 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:23.755 03:03:58 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:23.755 03:03:58 -- setup/acl.sh@54 -- # run_test denied denied 00:03:23.755 03:03:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:23.755 03:03:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:23.755 03:03:58 -- common/autotest_common.sh@10 -- # set +x 00:03:23.755 ************************************ 00:03:23.755 START TEST denied 00:03:23.755 ************************************ 00:03:23.755 03:03:58 -- common/autotest_common.sh@1111 -- # denied 00:03:23.755 03:03:58 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0' 00:03:23.755 03:03:58 -- setup/acl.sh@38 -- # setup output config 00:03:23.755 03:03:58 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:23.755 03:03:58 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0' 00:03:23.755 03:03:58 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:25.666 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0 00:03:25.666 03:03:59 -- setup/acl.sh@40 -- # verify 0000:88:00.0 00:03:25.666 03:03:59 -- setup/acl.sh@28 -- # local dev driver 00:03:25.666 03:03:59 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:25.666 03:03:59 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]] 00:03:25.666 03:03:59 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver 00:03:25.666 03:03:59 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:25.666 03:03:59 -- 
setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:25.666 03:03:59 -- setup/acl.sh@41 -- # setup reset 00:03:25.666 03:03:59 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:25.666 03:03:59 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:28.207 00:03:28.207 real 0m3.990s 00:03:28.207 user 0m1.183s 00:03:28.207 sys 0m1.900s 00:03:28.207 03:04:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:28.207 03:04:02 -- common/autotest_common.sh@10 -- # set +x 00:03:28.207 ************************************ 00:03:28.207 END TEST denied 00:03:28.207 ************************************ 00:03:28.207 03:04:02 -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:28.207 03:04:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:28.207 03:04:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:28.207 03:04:02 -- common/autotest_common.sh@10 -- # set +x 00:03:28.207 ************************************ 00:03:28.207 START TEST allowed 00:03:28.207 ************************************ 00:03:28.207 03:04:02 -- common/autotest_common.sh@1111 -- # allowed 00:03:28.207 03:04:02 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0 00:03:28.207 03:04:02 -- setup/acl.sh@45 -- # setup output config 00:03:28.207 03:04:02 -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*' 00:03:28.207 03:04:02 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:28.207 03:04:02 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:30.741 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:30.741 03:04:04 -- setup/acl.sh@47 -- # verify 00:03:30.741 03:04:04 -- setup/acl.sh@28 -- # local dev driver 00:03:30.741 03:04:04 -- setup/acl.sh@48 -- # setup reset 00:03:30.741 03:04:04 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:30.741 03:04:04 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:32.119 
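Both the denied and allowed tests above hinge on the same `verify` step: resolve the device's `driver` symlink under /sys/bus/pci/devices and compare its basename against the expected driver (nvme when blocked, vfio-pci after rebinding). A minimal sketch of that check against a simulated sysfs tree in a temp dir (the real paths and the PCI_BLOCKED/PCI_ALLOWED plumbing are not reproduced here):

```shell
#!/usr/bin/env bash
# Sketch of acl.sh's verify: readlink the driver symlink for a BDF and
# take the basename. The sysfs layout is simulated; on a real system the
# path would be /sys/bus/pci/devices/$bdf/driver.
set -euo pipefail

sysfs=$(mktemp -d)
trap 'rm -rf "$sysfs"' EXIT

bdf="0000:88:00.0"
mkdir -p "$sysfs/devices/$bdf" "$sysfs/drivers/vfio-pci"
ln -s "$sysfs/drivers/vfio-pci" "$sysfs/devices/$bdf/driver"

driver=$(readlink -f "$sysfs/devices/$bdf/driver")
driver=${driver##*/}

echo "$bdf -> $driver"
```

In the log this is the `readlink -f .../driver` followed by the `[[ nvme == \n\v\m\e ]]` comparison; the allowed test then sees the controller rebound as `nvme -> vfio-pci`.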
00:03:32.119 real 0m3.976s 00:03:32.119 user 0m1.051s 00:03:32.119 sys 0m1.782s 00:03:32.119 03:04:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:32.119 03:04:06 -- common/autotest_common.sh@10 -- # set +x 00:03:32.119 ************************************ 00:03:32.119 END TEST allowed 00:03:32.119 ************************************ 00:03:32.119 00:03:32.119 real 0m10.954s 00:03:32.119 user 0m3.416s 00:03:32.119 sys 0m5.530s 00:03:32.119 03:04:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:32.119 03:04:06 -- common/autotest_common.sh@10 -- # set +x 00:03:32.119 ************************************ 00:03:32.119 END TEST acl 00:03:32.119 ************************************ 00:03:32.119 03:04:06 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:32.119 03:04:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:32.119 03:04:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:32.119 03:04:06 -- common/autotest_common.sh@10 -- # set +x 00:03:32.119 ************************************ 00:03:32.119 START TEST hugepages 00:03:32.119 ************************************ 00:03:32.119 03:04:06 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:32.119 * Looking for test storage... 
00:03:32.119 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:32.119 03:04:06 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:32.119 03:04:06 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:32.119 03:04:06 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:32.119 03:04:06 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:32.119 03:04:06 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:32.119 03:04:06 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:32.119 03:04:06 -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:32.119 03:04:06 -- setup/common.sh@18 -- # local node= 00:03:32.119 03:04:06 -- setup/common.sh@19 -- # local var val 00:03:32.119 03:04:06 -- setup/common.sh@20 -- # local mem_f mem 00:03:32.119 03:04:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.119 03:04:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.119 03:04:06 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.119 03:04:06 -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.119 03:04:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.119 03:04:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.119 03:04:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.119 03:04:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 34744328 kB' 'MemAvailable: 39915584 kB' 'Buffers: 3728 kB' 'Cached: 18761552 kB' 'SwapCached: 0 kB' 'Active: 14666784 kB' 'Inactive: 4649000 kB' 'Active(anon): 14052664 kB' 'Inactive(anon): 0 kB' 'Active(file): 614120 kB' 'Inactive(file): 4649000 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 553904 kB' 'Mapped: 228896 kB' 'Shmem: 13502160 kB' 'KReclaimable: 552572 kB' 'Slab: 952788 kB' 'SReclaimable: 552572 kB' 'SUnreclaim: 400216 kB' 'KernelStack: 12960 kB' 'PageTables: 
9464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562304 kB' 'Committed_AS: 15233212 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197084 kB' 'VmallocChunk: 0 kB' 'Percpu: 44928 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 2207324 kB' 'DirectMap2M: 28121088 kB' 'DirectMap1G: 38797312 kB' 00:03:32.119 03:04:06 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.120 03:04:06 -- setup/common.sh@32 -- # continue 00:03:32.120 03:04:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.120 03:04:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.120 03:04:06 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.120 03:04:06 -- setup/common.sh@32 -- # continue 00:03:32.120 03:04:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.120 03:04:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.120 03:04:06 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.120 03:04:06 -- setup/common.sh@32 -- # continue 00:03:32.120 03:04:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.120 03:04:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.120 03:04:06 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.120 03:04:06 -- setup/common.sh@32 -- # continue 00:03:32.120 03:04:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.120 03:04:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.120 03:04:06 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.120 03:04:06 -- setup/common.sh@32 -- # continue 00:03:32.120 03:04:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.120 03:04:06 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:32.120 03:04:06 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.120 03:04:06 -- setup/common.sh@32 -- # continue 00:03:32.120 03:04:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.120 03:04:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.120 03:04:06 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.120 03:04:06 -- setup/common.sh@32 -- # continue 00:03:32.120 03:04:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.120 03:04:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.120 03:04:06 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.120 03:04:06 -- setup/common.sh@32 -- # continue 00:03:32.120 03:04:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.120 03:04:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.120 03:04:06 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.120 03:04:06 -- setup/common.sh@32 -- # continue 00:03:32.120 03:04:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.120 03:04:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.120 03:04:06 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.120 03:04:06 -- setup/common.sh@32 -- # continue 00:03:32.120 03:04:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.120 03:04:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.120 03:04:06 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.120 03:04:06 -- setup/common.sh@32 -- # continue 00:03:32.120 03:04:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.120 03:04:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.120 03:04:06 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.120 03:04:06 -- setup/common.sh@32 -- # continue 00:03:32.120 03:04:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.120 03:04:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.120 03:04:06 -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.120 03:04:06 -- setup/common.sh@32 -- # continue 00:03:32.120 03:04:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.120 03:04:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.121 03:04:06 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.121 03:04:06 -- setup/common.sh@33 -- # echo 2048 00:03:32.121 03:04:06 -- setup/common.sh@33 -- # return 0 00:03:32.121 03:04:06 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:32.121 03:04:06 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:32.121 03:04:06 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:32.121 03:04:06 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:32.121 03:04:06 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:32.121 03:04:06 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:32.121 03:04:06 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:32.121 03:04:06 -- setup/hugepages.sh@207 -- # get_nodes 00:03:32.121 03:04:06 -- setup/hugepages.sh@27
-- # local node 00:03:32.121 03:04:06 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:32.121 03:04:06 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:32.121 03:04:06 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:32.121 03:04:06 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:32.121 03:04:06 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:32.121 03:04:06 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:32.121 03:04:06 -- setup/hugepages.sh@208 -- # clear_hp 00:03:32.121 03:04:06 -- setup/hugepages.sh@37 -- # local node hp 00:03:32.121 03:04:06 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:32.121 03:04:06 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:32.121 03:04:06 -- setup/hugepages.sh@41 -- # echo 0 00:03:32.121 03:04:06 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:32.121 03:04:06 -- setup/hugepages.sh@41 -- # echo 0 00:03:32.121 03:04:06 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:32.121 03:04:06 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:32.121 03:04:06 -- setup/hugepages.sh@41 -- # echo 0 00:03:32.121 03:04:06 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:32.121 03:04:06 -- setup/hugepages.sh@41 -- # echo 0 00:03:32.121 03:04:06 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:32.121 03:04:06 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:32.121 03:04:06 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:32.121 03:04:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:32.121 03:04:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:32.121 03:04:06 -- common/autotest_common.sh@10 -- # set +x 00:03:32.121 
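The xtrace above is setup/common.sh's get_meminfo helper walking /proc/meminfo with an `IFS=': ' read -r var val _` loop until the requested key matches, then echoing the value. A minimal standalone sketch of that parsing pattern (the function name and the sample values below are illustrative, not taken from SPDK):

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo-style loop traced above: split each
# "Key: value kB" line on IFS=': ' and print the value for the
# requested key.
get_meminfo_value() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
}

# Illustrative /proc/meminfo excerpt (values made up for the example).
sample='MemTotal: 60541708 kB
Hugepagesize: 2048 kB
HugePages_Total: 1024'

get_meminfo_value Hugepagesize <<< "$sample"   # prints 2048
```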
************************************ 00:03:32.121 START TEST default_setup 00:03:32.121 ************************************ 00:03:32.121 03:04:06 -- common/autotest_common.sh@1111 -- # default_setup 00:03:32.121 03:04:06 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:32.121 03:04:06 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:32.121 03:04:06 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:32.121 03:04:06 -- setup/hugepages.sh@51 -- # shift 00:03:32.380 03:04:06 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:32.380 03:04:06 -- setup/hugepages.sh@52 -- # local node_ids 00:03:32.380 03:04:06 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:32.380 03:04:06 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:32.380 03:04:06 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:32.380 03:04:06 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:32.380 03:04:06 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:32.380 03:04:06 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:32.380 03:04:06 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:32.380 03:04:06 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:32.380 03:04:06 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:32.380 03:04:06 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:32.380 03:04:06 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:32.380 03:04:06 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:32.380 03:04:06 -- setup/hugepages.sh@73 -- # return 0 00:03:32.380 03:04:06 -- setup/hugepages.sh@137 -- # setup output 00:03:32.380 03:04:06 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:32.380 03:04:06 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:33.318 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:33.318 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:33.318 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 
00:03:33.578 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:33.578 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:33.578 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:33.578 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:33.578 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:33.578 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:33.578 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:33.578 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:33.578 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:33.578 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:33.578 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:33.578 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:33.578 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:34.522 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:34.522 03:04:08 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:34.522 03:04:08 -- setup/hugepages.sh@89 -- # local node 00:03:34.522 03:04:08 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:34.522 03:04:08 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:34.522 03:04:08 -- setup/hugepages.sh@92 -- # local surp 00:03:34.522 03:04:08 -- setup/hugepages.sh@93 -- # local resv 00:03:34.522 03:04:08 -- setup/hugepages.sh@94 -- # local anon 00:03:34.522 03:04:08 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:34.522 03:04:08 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:34.522 03:04:08 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:34.522 03:04:08 -- setup/common.sh@18 -- # local node= 00:03:34.522 03:04:08 -- setup/common.sh@19 -- # local var val 00:03:34.522 03:04:08 -- setup/common.sh@20 -- # local mem_f mem 00:03:34.522 03:04:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.522 03:04:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.522 03:04:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.522 03:04:08 -- setup/common.sh@28 -- # 
mapfile -t mem 00:03:34.522 03:04:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.522 03:04:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.522 03:04:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.522 03:04:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 36890036 kB' 'MemAvailable: 42061260 kB' 'Buffers: 3728 kB' 'Cached: 18761640 kB' 'SwapCached: 0 kB' 'Active: 14682640 kB' 'Inactive: 4649000 kB' 'Active(anon): 14068520 kB' 'Inactive(anon): 0 kB' 'Active(file): 614120 kB' 'Inactive(file): 4649000 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 569580 kB' 'Mapped: 229092 kB' 'Shmem: 13502248 kB' 'KReclaimable: 552540 kB' 'Slab: 952492 kB' 'SReclaimable: 552540 kB' 'SUnreclaim: 399952 kB' 'KernelStack: 12864 kB' 'PageTables: 9236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 15248720 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197180 kB' 'VmallocChunk: 0 kB' 'Percpu: 44928 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2207324 kB' 'DirectMap2M: 28121088 kB' 'DirectMap1G: 38797312 kB' 00:03:34.522 03:04:08 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.522 03:04:08 -- setup/common.sh@32 -- # continue 00:03:34.522 03:04:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.522 03:04:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.522 03:04:08 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.522 03:04:08 -- setup/common.sh@32 -- # continue 00:03:34.522 
03:04:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.522 03:04:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.523 03:04:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.523 03:04:09 -- setup/common.sh@33 -- # echo 0 00:03:34.523 03:04:09 -- setup/common.sh@33 -- # return 0 00:03:34.523 03:04:09 -- setup/hugepages.sh@97 -- # anon=0 00:03:34.523 03:04:09 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:34.523 03:04:09 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:34.523 03:04:09 -- setup/common.sh@18 -- # local node= 00:03:34.523 03:04:09 -- setup/common.sh@19 -- # local var val 00:03:34.523 03:04:09 -- setup/common.sh@20 -- # local mem_f mem 00:03:34.523 03:04:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.523 03:04:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.523 03:04:09 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.523 03:04:09 -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.523 03:04:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.523 03:04:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.523 03:04:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.523 03:04:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 36890072 kB' 'MemAvailable: 42061296 kB'
'Buffers: 3728 kB' 'Cached: 18761640 kB' 'SwapCached: 0 kB' 'Active: 14682304 kB' 'Inactive: 4649000 kB' 'Active(anon): 14068184 kB' 'Inactive(anon): 0 kB' 'Active(file): 614120 kB' 'Inactive(file): 4649000 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 569272 kB' 'Mapped: 229064 kB' 'Shmem: 13502248 kB' 'KReclaimable: 552540 kB' 'Slab: 952492 kB' 'SReclaimable: 552540 kB' 'SUnreclaim: 399952 kB' 'KernelStack: 12880 kB' 'PageTables: 9228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 15248732 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197148 kB' 'VmallocChunk: 0 kB' 'Percpu: 44928 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2207324 kB' 'DirectMap2M: 28121088 kB' 'DirectMap1G: 38797312 kB' 00:03:34.523 03:04:09 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.523 03:04:09 -- setup/common.sh@32 -- # continue 00:03:34.523 03:04:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.523 03:04:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.523 03:04:09 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.523 03:04:09 -- setup/common.sh@32 -- # continue 00:03:34.523 03:04:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.523 03:04:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.523 03:04:09 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.523 03:04:09 -- setup/common.sh@32 -- # continue 00:03:34.523 03:04:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.523 03:04:09 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:34.523
]] 00:03:34.787 03:04:09 -- setup/common.sh@32 -- # continue 00:03:34.787 03:04:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.787 03:04:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.787 03:04:09 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.787 03:04:09 -- setup/common.sh@32 -- # continue 00:03:34.787 03:04:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.787 03:04:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.787 03:04:09 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.787 03:04:09 -- setup/common.sh@32 -- # continue 00:03:34.787 03:04:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.787 03:04:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.787 03:04:09 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.787 03:04:09 -- setup/common.sh@32 -- # continue 00:03:34.788 03:04:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.788 03:04:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.788 03:04:09 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.788 03:04:09 -- setup/common.sh@32 -- # continue 00:03:34.788 03:04:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.788 03:04:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.788 03:04:09 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.788 03:04:09 -- setup/common.sh@32 -- # continue 00:03:34.788 03:04:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.788 03:04:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.788 03:04:09 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.788 03:04:09 -- setup/common.sh@32 -- # continue 00:03:34.788 03:04:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.788 03:04:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.788 03:04:09 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.788 03:04:09 
-- setup/common.sh@32 -- # continue 00:03:34.788 03:04:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.788 03:04:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.788 03:04:09 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.788 03:04:09 -- setup/common.sh@32 -- # continue 00:03:34.788 03:04:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.788 03:04:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.788 03:04:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.788 03:04:09 -- setup/common.sh@32 -- # continue 00:03:34.788 03:04:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.788 03:04:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.788 03:04:09 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.788 03:04:09 -- setup/common.sh@32 -- # continue 00:03:34.788 03:04:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.788 03:04:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.788 03:04:09 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.788 03:04:09 -- setup/common.sh@32 -- # continue 00:03:34.788 03:04:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.788 03:04:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.788 03:04:09 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.788 03:04:09 -- setup/common.sh@32 -- # continue 00:03:34.788 03:04:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.788 03:04:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.788 03:04:09 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.788 03:04:09 -- setup/common.sh@32 -- # continue 00:03:34.788 03:04:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.788 03:04:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.788 03:04:09 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.788 03:04:09 -- 
setup/common.sh@32 -- # continue 00:03:34.788 03:04:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.788 03:04:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.788 03:04:09 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.788 03:04:09 -- setup/common.sh@32 -- # continue 00:03:34.788 03:04:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.788 03:04:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.788 03:04:09 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.788 03:04:09 -- setup/common.sh@32 -- # continue 00:03:34.788 03:04:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.788 03:04:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.788 03:04:09 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.788 03:04:09 -- setup/common.sh@32 -- # continue 00:03:34.788 03:04:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.788 03:04:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.788 03:04:09 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.788 03:04:09 -- setup/common.sh@32 -- # continue 00:03:34.788 03:04:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.788 03:04:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.788 03:04:09 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.788 03:04:09 -- setup/common.sh@32 -- # continue 00:03:34.788 03:04:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.788 03:04:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.788 03:04:09 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.788 03:04:09 -- setup/common.sh@33 -- # echo 0 00:03:34.788 03:04:09 -- setup/common.sh@33 -- # return 0 00:03:34.788 03:04:09 -- setup/hugepages.sh@99 -- # surp=0 00:03:34.788 03:04:09 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:34.788 03:04:09 -- setup/common.sh@17 -- # local 
get=HugePages_Rsvd 00:03:34.788 03:04:09 -- setup/common.sh@18 -- # local node= 00:03:34.788 03:04:09 -- setup/common.sh@19 -- # local var val 00:03:34.788 03:04:09 -- setup/common.sh@20 -- # local mem_f mem 00:03:34.788 03:04:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.788 03:04:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.788 03:04:09 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.788 03:04:09 -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.788 03:04:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.788 03:04:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.788 03:04:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.788 03:04:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 36888892 kB' 'MemAvailable: 42060116 kB' 'Buffers: 3728 kB' 'Cached: 18761652 kB' 'SwapCached: 0 kB' 'Active: 14682336 kB' 'Inactive: 4649000 kB' 'Active(anon): 14068216 kB' 'Inactive(anon): 0 kB' 'Active(file): 614120 kB' 'Inactive(file): 4649000 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 569228 kB' 'Mapped: 228984 kB' 'Shmem: 13502260 kB' 'KReclaimable: 552540 kB' 'Slab: 952488 kB' 'SReclaimable: 552540 kB' 'SUnreclaim: 399948 kB' 'KernelStack: 12896 kB' 'PageTables: 9212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 15248744 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197164 kB' 'VmallocChunk: 0 kB' 'Percpu: 44928 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2207324 kB' 'DirectMap2M: 
28121088 kB' 'DirectMap1G: 38797312 kB' 00:03:34.788 03:04:09 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.788 03:04:09 -- setup/common.sh@32 -- # continue 00:03:34.788 03:04:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.788 03:04:09 -- setup/common.sh@31 -- # read -r var val _ [...] 00:03:34.789 03:04:09 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.789 03:04:09 -- setup/common.sh@33 -- # echo 0 00:03:34.789 03:04:09 -- setup/common.sh@33 -- # return 0 00:03:34.789 03:04:09 -- setup/hugepages.sh@100 -- # resv=0 00:03:34.789 03:04:09 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:34.789 nr_hugepages=1024 00:03:34.789 03:04:09 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:34.789 resv_hugepages=0 00:03:34.789 03:04:09 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:34.789 surplus_hugepages=0 00:03:34.789 03:04:09 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:34.789 anon_hugepages=0 00:03:34.789 03:04:09 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:34.789 03:04:09 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:34.789 03:04:09 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:34.789 03:04:09 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:34.789 03:04:09 -- setup/common.sh@18 -- # local node= 00:03:34.789 03:04:09 -- setup/common.sh@19 -- # local var val 00:03:34.789 03:04:09 -- setup/common.sh@20 -- # local mem_f mem 00:03:34.789 03:04:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.789 03:04:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.789 03:04:09 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.789 03:04:09 -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.789 03:04:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.789 03:04:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.789 03:04:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.790 03:04:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 36888136 kB' 'MemAvailable: 42059360 kB'
'Buffers: 3728 kB' 'Cached: 18761668 kB' 'SwapCached: 0 kB' 'Active: 14682008 kB' 'Inactive: 4649000 kB' 'Active(anon): 14067888 kB' 'Inactive(anon): 0 kB' 'Active(file): 614120 kB' 'Inactive(file): 4649000 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 568868 kB' 'Mapped: 228984 kB' 'Shmem: 13502276 kB' 'KReclaimable: 552540 kB' 'Slab: 952488 kB' 'SReclaimable: 552540 kB' 'SUnreclaim: 399948 kB' 'KernelStack: 12864 kB' 'PageTables: 9116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 15248760 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197164 kB' 'VmallocChunk: 0 kB' 'Percpu: 44928 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2207324 kB' 'DirectMap2M: 28121088 kB' 'DirectMap1G: 38797312 kB' 00:03:34.790 03:04:09 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.790 03:04:09 -- setup/common.sh@32 -- # continue 00:03:34.790 03:04:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.790 03:04:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.790 03:04:09 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.790 03:04:09 -- setup/common.sh@32 -- # continue 00:03:34.790 03:04:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.790 03:04:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.790 03:04:09 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.790 03:04:09 -- setup/common.sh@32 -- # continue 00:03:34.790 03:04:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.790 03:04:09 
-- setup/common.sh@31 -- # read -r var val _ 00:03:34.790 03:04:09 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.790 03:04:09 -- setup/common.sh@32 -- # continue 00:03:34.790 03:04:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.790 03:04:09 -- setup/common.sh@31 -- # read -r var val _ [...] 00:03:34.790 03:04:09 --
setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.790 03:04:09 -- setup/common.sh@32 -- # continue 00:03:34.790 03:04:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.790 03:04:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.790 03:04:09 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.790 03:04:09 -- setup/common.sh@32 -- # continue 00:03:34.790 03:04:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.790 03:04:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.790 03:04:09 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.790 03:04:09 -- setup/common.sh@32 -- # continue 00:03:34.790 03:04:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.790 03:04:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.790 03:04:09 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.790 03:04:09 -- setup/common.sh@32 -- # continue 00:03:34.790 03:04:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.790 03:04:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.790 03:04:09 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.790 03:04:09 -- setup/common.sh@32 -- # continue 00:03:34.790 03:04:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.790 03:04:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.790 03:04:09 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.790 03:04:09 -- setup/common.sh@32 -- # continue 00:03:34.790 03:04:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.790 03:04:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.790 03:04:09 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.790 03:04:09 -- setup/common.sh@32 -- # continue 00:03:34.790 03:04:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.790 03:04:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.790 03:04:09 -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.790 03:04:09 -- setup/common.sh@32 -- # continue 00:03:34.790 03:04:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.790 03:04:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.790 03:04:09 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.790 03:04:09 -- setup/common.sh@32 -- # continue 00:03:34.790 03:04:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.790 03:04:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.790 03:04:09 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.790 03:04:09 -- setup/common.sh@32 -- # continue 00:03:34.790 03:04:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.790 03:04:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.790 03:04:09 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.790 03:04:09 -- setup/common.sh@32 -- # continue 00:03:34.790 03:04:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.790 03:04:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.790 03:04:09 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.790 03:04:09 -- setup/common.sh@32 -- # continue 00:03:34.790 03:04:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.790 03:04:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.790 03:04:09 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.790 03:04:09 -- setup/common.sh@32 -- # continue 00:03:34.790 03:04:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.790 03:04:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.790 03:04:09 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.790 03:04:09 -- setup/common.sh@32 -- # continue 00:03:34.790 03:04:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.790 03:04:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.791 03:04:09 -- 
setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.791 03:04:09 -- setup/common.sh@32 -- # continue 00:03:34.791 03:04:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.791 03:04:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.791 03:04:09 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.791 03:04:09 -- setup/common.sh@32 -- # continue 00:03:34.791 03:04:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.791 03:04:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.791 03:04:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.791 03:04:09 -- setup/common.sh@32 -- # continue 00:03:34.791 03:04:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.791 03:04:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.791 03:04:09 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.791 03:04:09 -- setup/common.sh@32 -- # continue 00:03:34.791 03:04:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.791 03:04:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.791 03:04:09 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.791 03:04:09 -- setup/common.sh@32 -- # continue 00:03:34.791 03:04:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.791 03:04:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.791 03:04:09 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.791 03:04:09 -- setup/common.sh@32 -- # continue 00:03:34.791 03:04:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.791 03:04:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.791 03:04:09 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.791 03:04:09 -- setup/common.sh@32 -- # continue 00:03:34.791 03:04:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.791 03:04:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.791 03:04:09 
-- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.791 03:04:09 -- setup/common.sh@32 -- # continue 00:03:34.791 03:04:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.791 03:04:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.791 03:04:09 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.791 03:04:09 -- setup/common.sh@32 -- # continue 00:03:34.791 03:04:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.791 03:04:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.791 03:04:09 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.791 03:04:09 -- setup/common.sh@32 -- # continue 00:03:34.791 03:04:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.791 03:04:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.791 03:04:09 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.791 03:04:09 -- setup/common.sh@33 -- # echo 1024 00:03:34.791 03:04:09 -- setup/common.sh@33 -- # return 0 00:03:34.791 03:04:09 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:34.791 03:04:09 -- setup/hugepages.sh@112 -- # get_nodes 00:03:34.791 03:04:09 -- setup/hugepages.sh@27 -- # local node 00:03:34.791 03:04:09 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:34.791 03:04:09 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:34.791 03:04:09 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:34.791 03:04:09 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:34.791 03:04:09 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:34.791 03:04:09 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:34.791 03:04:09 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:34.791 03:04:09 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:34.791 03:04:09 -- setup/hugepages.sh@117 -- # get_meminfo 
HugePages_Surp 0 00:03:34.791 03:04:09 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:34.791 03:04:09 -- setup/common.sh@18 -- # local node=0 00:03:34.791 03:04:09 -- setup/common.sh@19 -- # local var val 00:03:34.791 03:04:09 -- setup/common.sh@20 -- # local mem_f mem 00:03:34.791 03:04:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.791 03:04:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:34.791 03:04:09 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:34.791 03:04:09 -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.791 03:04:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.791 03:04:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.791 03:04:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 21477988 kB' 'MemUsed: 11351896 kB' 'SwapCached: 0 kB' 'Active: 7623812 kB' 'Inactive: 271804 kB' 'Active(anon): 7223408 kB' 'Inactive(anon): 0 kB' 'Active(file): 400404 kB' 'Inactive(file): 271804 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7697780 kB' 'Mapped: 54016 kB' 'AnonPages: 200976 kB' 'Shmem: 7025572 kB' 'KernelStack: 7160 kB' 'PageTables: 4496 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 288912 kB' 'Slab: 491728 kB' 'SReclaimable: 288912 kB' 'SUnreclaim: 202816 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:34.791 03:04:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.791 03:04:09 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.791 03:04:09 -- setup/common.sh@32 -- # continue 00:03:34.791 03:04:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.791 03:04:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.791 03:04:09 -- 
setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.791 03:04:09 -- setup/common.sh@32 -- # continue [... identical match/continue xtrace repeated for each remaining node0 meminfo field, MemUsed through HugePages_Free ...] 00:03:34.792 03:04:09 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.792 03:04:09 -- 
setup/common.sh@33 -- # echo 0 00:03:34.792 03:04:09 -- setup/common.sh@33 -- # return 0 00:03:34.792 03:04:09 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:34.792 03:04:09 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:34.792 03:04:09 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:34.792 03:04:09 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:34.792 03:04:09 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:34.792 node0=1024 expecting 1024 00:03:34.792 03:04:09 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:34.792 00:03:34.792 real 0m2.498s 00:03:34.792 user 0m0.665s 00:03:34.792 sys 0m0.918s 00:03:34.792 03:04:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:34.792 03:04:09 -- common/autotest_common.sh@10 -- # set +x 00:03:34.792 ************************************ 00:03:34.792 END TEST default_setup 00:03:34.792 ************************************ 00:03:34.792 03:04:09 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:34.792 03:04:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:34.792 03:04:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:34.792 03:04:09 -- common/autotest_common.sh@10 -- # set +x 00:03:34.792 ************************************ 00:03:34.792 START TEST per_node_1G_alloc 00:03:34.792 ************************************ 00:03:34.792 03:04:09 -- common/autotest_common.sh@1111 -- # per_node_1G_alloc 00:03:34.792 03:04:09 -- setup/hugepages.sh@143 -- # local IFS=, 00:03:34.792 03:04:09 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:34.792 03:04:09 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:34.792 03:04:09 -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:34.792 03:04:09 -- setup/hugepages.sh@51 -- # shift 00:03:34.792 03:04:09 -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:34.792 03:04:09 -- setup/hugepages.sh@52 -- # 
local node_ids 00:03:34.792 03:04:09 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:34.792 03:04:09 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:34.792 03:04:09 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:34.792 03:04:09 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:34.792 03:04:09 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:34.792 03:04:09 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:34.792 03:04:09 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:34.792 03:04:09 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:34.792 03:04:09 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:34.792 03:04:09 -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:34.792 03:04:09 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:34.792 03:04:09 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:34.792 03:04:09 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:34.792 03:04:09 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:34.792 03:04:09 -- setup/hugepages.sh@73 -- # return 0 00:03:34.792 03:04:09 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:34.792 03:04:09 -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:34.792 03:04:09 -- setup/hugepages.sh@146 -- # setup output 00:03:34.792 03:04:09 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:34.792 03:04:09 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:36.180 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:36.180 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:36.180 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:36.180 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:36.180 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:36.180 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:36.180 0000:00:04.2 (8086 0e22): 
Already using the vfio-pci driver 00:03:36.180 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:36.180 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:36.180 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:36.180 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:36.180 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:36.180 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:36.180 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:36.180 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:36.180 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:36.180 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:36.180 03:04:10 -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:36.180 03:04:10 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:36.180 03:04:10 -- setup/hugepages.sh@89 -- # local node 00:03:36.180 03:04:10 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:36.180 03:04:10 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:36.180 03:04:10 -- setup/hugepages.sh@92 -- # local surp 00:03:36.180 03:04:10 -- setup/hugepages.sh@93 -- # local resv 00:03:36.180 03:04:10 -- setup/hugepages.sh@94 -- # local anon 00:03:36.180 03:04:10 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:36.180 03:04:10 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:36.180 03:04:10 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:36.180 03:04:10 -- setup/common.sh@18 -- # local node= 00:03:36.180 03:04:10 -- setup/common.sh@19 -- # local var val 00:03:36.180 03:04:10 -- setup/common.sh@20 -- # local mem_f mem 00:03:36.180 03:04:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.180 03:04:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.180 03:04:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.180 03:04:10 -- 
setup/common.sh@28 -- # mapfile -t mem 00:03:36.180 03:04:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.180 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.180 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.180 03:04:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 36897024 kB' 'MemAvailable: 42068280 kB' 'Buffers: 3728 kB' 'Cached: 18761728 kB' 'SwapCached: 0 kB' 'Active: 14684360 kB' 'Inactive: 4649000 kB' 'Active(anon): 14070240 kB' 'Inactive(anon): 0 kB' 'Active(file): 614120 kB' 'Inactive(file): 4649000 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 571180 kB' 'Mapped: 229020 kB' 'Shmem: 13502336 kB' 'KReclaimable: 552572 kB' 'Slab: 952584 kB' 'SReclaimable: 552572 kB' 'SUnreclaim: 400012 kB' 'KernelStack: 12880 kB' 'PageTables: 9112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 15248940 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197116 kB' 'VmallocChunk: 0 kB' 'Percpu: 44928 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2207324 kB' 'DirectMap2M: 28121088 kB' 'DirectMap1G: 38797312 kB' 00:03:36.180 03:04:10 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.180 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.180 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.180 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.180 03:04:10 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.180 03:04:10 -- setup/common.sh@32 -- # 
continue 00:03:36.180 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.180 03:04:10 -- setup/common.sh@31 -- # read -r var val _ [... identical match/continue xtrace repeated for each meminfo field while scanning for AnonHugePages ...] 00:03:36.181 03:04:10 -- setup/common.sh@32 -- # [[ NFS_Unstable ==
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.181 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.181 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.181 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.181 03:04:10 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.181 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.181 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.181 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.181 03:04:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.181 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.181 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.181 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.181 03:04:10 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.181 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.181 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.181 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.181 03:04:10 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.181 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.181 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.181 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.181 03:04:10 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.181 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.181 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.181 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.181 03:04:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.181 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.181 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.181 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.181 03:04:10 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:36.181 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.181 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.181 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.181 03:04:10 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.181 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.181 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.181 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.181 03:04:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.181 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.181 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.181 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.181 03:04:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.181 03:04:10 -- setup/common.sh@33 -- # echo 0 00:03:36.181 03:04:10 -- setup/common.sh@33 -- # return 0 00:03:36.181 03:04:10 -- setup/hugepages.sh@97 -- # anon=0 00:03:36.181 03:04:10 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:36.181 03:04:10 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:36.181 03:04:10 -- setup/common.sh@18 -- # local node= 00:03:36.181 03:04:10 -- setup/common.sh@19 -- # local var val 00:03:36.181 03:04:10 -- setup/common.sh@20 -- # local mem_f mem 00:03:36.181 03:04:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.181 03:04:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.181 03:04:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.181 03:04:10 -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.181 03:04:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.181 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.181 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.181 03:04:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 36897772 kB' 'MemAvailable: 42069028 kB' 
'Buffers: 3728 kB' 'Cached: 18761732 kB' 'SwapCached: 0 kB' 'Active: 14684204 kB' 'Inactive: 4649000 kB' 'Active(anon): 14070084 kB' 'Inactive(anon): 0 kB' 'Active(file): 614120 kB' 'Inactive(file): 4649000 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 571028 kB' 'Mapped: 229012 kB' 'Shmem: 13502340 kB' 'KReclaimable: 552572 kB' 'Slab: 952576 kB' 'SReclaimable: 552572 kB' 'SUnreclaim: 400004 kB' 'KernelStack: 12912 kB' 'PageTables: 9192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 15248952 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197084 kB' 'VmallocChunk: 0 kB' 'Percpu: 44928 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2207324 kB' 'DirectMap2M: 28121088 kB' 'DirectMap1G: 38797312 kB' 00:03:36.181 03:04:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.181 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.181 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.181 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.181 03:04:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.181 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.181 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.181 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.181 03:04:10 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.181 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.181 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.181 03:04:10 -- 
setup/common.sh@31 -- # read -r var val _ [... repeated compare/continue xtrace for the remaining /proc/meminfo fields (Buffers through HugePages_Rsvd), none matching HugePages_Surp ...] 00:03:36.183 03:04:10 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.183 03:04:10 -- setup/common.sh@33 -- # echo 0 00:03:36.183 03:04:10 -- setup/common.sh@33 -- # return 0 00:03:36.183 03:04:10 -- setup/hugepages.sh@99 -- # surp=0 00:03:36.183 03:04:10 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:36.183 03:04:10 -- setup/common.sh@17 -- # local
get=HugePages_Rsvd 00:03:36.183 03:04:10 -- setup/common.sh@18 -- # local node= 00:03:36.183 03:04:10 -- setup/common.sh@19 -- # local var val 00:03:36.183 03:04:10 -- setup/common.sh@20 -- # local mem_f mem 00:03:36.183 03:04:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.183 03:04:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.183 03:04:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.183 03:04:10 -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.183 03:04:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.183 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.183 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.183 03:04:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 36897196 kB' 'MemAvailable: 42068484 kB' 'Buffers: 3728 kB' 'Cached: 18761744 kB' 'SwapCached: 0 kB' 'Active: 14683948 kB' 'Inactive: 4649000 kB' 'Active(anon): 14069828 kB' 'Inactive(anon): 0 kB' 'Active(file): 614120 kB' 'Inactive(file): 4649000 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 570752 kB' 'Mapped: 229012 kB' 'Shmem: 13502352 kB' 'KReclaimable: 552604 kB' 'Slab: 952672 kB' 'SReclaimable: 552604 kB' 'SUnreclaim: 400068 kB' 'KernelStack: 12896 kB' 'PageTables: 9156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 15248968 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197084 kB' 'VmallocChunk: 0 kB' 'Percpu: 44928 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2207324 kB' 'DirectMap2M: 
28121088 kB' 'DirectMap1G: 38797312 kB' 00:03:36.183 03:04:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.183 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.183 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.183 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.183 03:04:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.183 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.183 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.183 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.183 03:04:10 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.183 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.183 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.183 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.183 03:04:10 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.183 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.183 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.183 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.183 03:04:10 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.183 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.183 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.183 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.183 03:04:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.183 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.183 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.183 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.183 03:04:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.183 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.183 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.183 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 
00:03:36.183 03:04:10 [... repeated compare/continue xtrace for the /proc/meminfo fields Inactive through Committed_AS, none matching HugePages_Rsvd ...] 00:03:36.184 03:04:10 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.184 03:04:10 --
setup/common.sh@32 -- # continue 00:03:36.184 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.184 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.184 03:04:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.184 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.184 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.184 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.184 03:04:10 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.184 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.184 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.184 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.184 03:04:10 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.184 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.184 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.184 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.184 03:04:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.184 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.184 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.184 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.184 03:04:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.184 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.184 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.184 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.184 03:04:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.184 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.184 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.184 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.184 03:04:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.184 03:04:10 -- 
setup/common.sh@32 -- # continue 00:03:36.184 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.184 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.184 03:04:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.184 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.184 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.184 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.184 03:04:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.184 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.184 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.184 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.184 03:04:10 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.184 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.184 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.184 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.184 03:04:10 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.184 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.184 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.184 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.184 03:04:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.184 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.184 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.184 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.184 03:04:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.184 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.184 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.184 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.184 03:04:10 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.184 03:04:10 -- setup/common.sh@32 -- 
# continue 00:03:36.184 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.184 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.184 03:04:10 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.184 03:04:10 -- setup/common.sh@33 -- # echo 0 00:03:36.184 03:04:10 -- setup/common.sh@33 -- # return 0 00:03:36.184 03:04:10 -- setup/hugepages.sh@100 -- # resv=0 00:03:36.184 03:04:10 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:36.184 nr_hugepages=1024 00:03:36.184 03:04:10 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:36.184 resv_hugepages=0 00:03:36.184 03:04:10 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:36.184 surplus_hugepages=0 00:03:36.184 03:04:10 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:36.184 anon_hugepages=0 00:03:36.184 03:04:10 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:36.184 03:04:10 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:36.184 03:04:10 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:36.184 03:04:10 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:36.184 03:04:10 -- setup/common.sh@18 -- # local node= 00:03:36.184 03:04:10 -- setup/common.sh@19 -- # local var val 00:03:36.184 03:04:10 -- setup/common.sh@20 -- # local mem_f mem 00:03:36.184 03:04:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.184 03:04:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.184 03:04:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.184 03:04:10 -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.184 03:04:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.184 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.184 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.184 03:04:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 36896944 kB' 'MemAvailable: 42068232 kB' 
'Buffers: 3728 kB' 'Cached: 18761756 kB' 'SwapCached: 0 kB' 'Active: 14683876 kB' 'Inactive: 4649000 kB' 'Active(anon): 14069756 kB' 'Inactive(anon): 0 kB' 'Active(file): 614120 kB' 'Inactive(file): 4649000 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 570652 kB' 'Mapped: 229012 kB' 'Shmem: 13502364 kB' 'KReclaimable: 552604 kB' 'Slab: 952672 kB' 'SReclaimable: 552604 kB' 'SUnreclaim: 400068 kB' 'KernelStack: 12896 kB' 'PageTables: 9156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 15248984 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197084 kB' 'VmallocChunk: 0 kB' 'Percpu: 44928 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2207324 kB' 'DirectMap2M: 28121088 kB' 'DirectMap1G: 38797312 kB' 00:03:36.184 03:04:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.184 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.184 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.184 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.184 03:04:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.184 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.184 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.184 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.184 03:04:10 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.184 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.184 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.184 03:04:10 
-- setup/common.sh@31 -- # read -r var val _ 00:03:36.184 03:04:10 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.184 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.184 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.184 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.184 03:04:10 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.184 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.184 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.184 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.184 03:04:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.185 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.185 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.185 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.185 03:04:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.185 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.185 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.185 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.185 03:04:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.185 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.185 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.185 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.185 03:04:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.185 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.185 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.185 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.185 03:04:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.185 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.185 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.185 03:04:10 -- setup/common.sh@31 -- 
# read -r var val _ 00:03:36.185 03:04:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.185 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.185 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.185 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.185 03:04:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.185 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.185 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.185 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.185 03:04:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.185 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.185 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.185 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.185 03:04:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.185 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.185 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.185 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.185 03:04:10 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.185 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.185 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.185 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.185 03:04:10 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.185 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.185 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.185 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.185 03:04:10 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.185 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.185 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.185 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 
00:03:36.185 03:04:10 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.185 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.185 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.185 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.185 03:04:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.185 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.185 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.185 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.185 03:04:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.185 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.185 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.185 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.185 03:04:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.185 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.185 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.185 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.185 03:04:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.185 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.185 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.185 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.185 03:04:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.185 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.185 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.185 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.185 03:04:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.185 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.185 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.185 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.185 03:04:10 -- 
setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.185 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.185 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.185 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.185 03:04:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.185 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.185 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.185 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.185 03:04:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.185 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.185 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.185 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.185 03:04:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.185 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.185 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.185 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.185 03:04:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.185 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.185 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.185 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.185 03:04:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.185 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.185 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.185 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.185 03:04:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.185 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.185 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.185 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.185 03:04:10 -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.185 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.185 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.185 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.185 03:04:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.185 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.185 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.185 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.185 03:04:10 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.185 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.185 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.185 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.185 03:04:10 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.185 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.185 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.185 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.185 03:04:10 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.185 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.185 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.185 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.185 03:04:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.185 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.185 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.185 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.185 03:04:10 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.185 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.185 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.185 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.185 03:04:10 -- 
setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.185 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.186 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.186 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.186 03:04:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.186 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.186 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.186 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.186 03:04:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.186 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.186 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.186 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.186 03:04:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.186 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.186 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.186 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.186 03:04:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.186 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.186 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.186 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.186 03:04:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.186 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.186 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.186 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.186 03:04:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.186 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.186 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.186 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.186 03:04:10 
-- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.186 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.186 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.186 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.186 03:04:10 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.186 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.186 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.186 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.186 03:04:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.186 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.186 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.186 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.186 03:04:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.186 03:04:10 -- setup/common.sh@33 -- # echo 1024 00:03:36.186 03:04:10 -- setup/common.sh@33 -- # return 0 00:03:36.186 03:04:10 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:36.186 03:04:10 -- setup/hugepages.sh@112 -- # get_nodes 00:03:36.186 03:04:10 -- setup/hugepages.sh@27 -- # local node 00:03:36.186 03:04:10 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:36.186 03:04:10 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:36.186 03:04:10 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:36.186 03:04:10 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:36.186 03:04:10 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:36.186 03:04:10 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:36.186 03:04:10 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:36.186 03:04:10 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:36.186 03:04:10 -- setup/hugepages.sh@117 -- # get_meminfo 
HugePages_Surp 0 00:03:36.186 03:04:10 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:36.186 03:04:10 -- setup/common.sh@18 -- # local node=0 00:03:36.186 03:04:10 -- setup/common.sh@19 -- # local var val 00:03:36.186 03:04:10 -- setup/common.sh@20 -- # local mem_f mem 00:03:36.186 03:04:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.186 03:04:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:36.186 03:04:10 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:36.186 03:04:10 -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.186 03:04:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.186 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.186 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.186 03:04:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 22538320 kB' 'MemUsed: 10291564 kB' 'SwapCached: 0 kB' 'Active: 7625892 kB' 'Inactive: 271804 kB' 'Active(anon): 7225488 kB' 'Inactive(anon): 0 kB' 'Active(file): 400404 kB' 'Inactive(file): 271804 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7697860 kB' 'Mapped: 54016 kB' 'AnonPages: 203008 kB' 'Shmem: 7025652 kB' 'KernelStack: 7208 kB' 'PageTables: 4640 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 288976 kB' 'Slab: 491800 kB' 'SReclaimable: 288976 kB' 'SUnreclaim: 202824 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:36.186 03:04:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.186 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.186 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.186 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.186 03:04:10 -- 
setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.186 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.186 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.186 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.186 03:04:10 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.186 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.186 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.186 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.186 03:04:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.186 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.186 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.186 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.186 03:04:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.186 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.186 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.186 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.186 03:04:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.186 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.186 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.186 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.186 03:04:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.186 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.186 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.186 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.186 03:04:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.186 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.186 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.186 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.186 03:04:10 -- setup/common.sh@32 -- # [[ 
Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.186 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.186 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.186 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.186 03:04:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.186 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.186 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.186 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.186 03:04:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.186 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.186 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.186 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.186 03:04:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.186 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.186 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.186 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.186 03:04:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.186 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.186 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.186 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.186 03:04:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.186 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.186 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.186 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.186 03:04:10 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.186 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.186 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.186 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.186 03:04:10 -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.186 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.186 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.186 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.186 03:04:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.186 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.186 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.186 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.186 03:04:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.186 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.186 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.186 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.186 03:04:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.186 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.186 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.186 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.186 03:04:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.186 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.186 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.186 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.186 03:04:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.186 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.186 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.186 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.186 03:04:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.186 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.186 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.187 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.187 03:04:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:36.187 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.187 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.187 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.187 03:04:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.187 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.187 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.187 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.187 03:04:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.187 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.187 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.187 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.187 03:04:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.187 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.187 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.187 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.187 03:04:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.187 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.187 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.187 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.187 03:04:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.187 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.187 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.187 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.187 03:04:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.187 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.187 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.187 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.187 03:04:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.187 03:04:10 -- 
setup/common.sh@32 -- # continue 00:03:36.187 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.187 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.187 03:04:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.187 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.187 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.187 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.187 03:04:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.187 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.187 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.187 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.187 03:04:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.187 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.187 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.187 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.187 03:04:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.187 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.187 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.187 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.187 03:04:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.187 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.187 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.187 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.187 03:04:10 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.187 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.187 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.187 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.187 03:04:10 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.187 03:04:10 -- 
setup/common.sh@33 -- # echo 0 00:03:36.187 03:04:10 -- setup/common.sh@33 -- # return 0 00:03:36.187 03:04:10 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:36.187 03:04:10 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:36.187 03:04:10 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:36.187 03:04:10 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:36.187 03:04:10 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:36.187 03:04:10 -- setup/common.sh@18 -- # local node=1 00:03:36.187 03:04:10 -- setup/common.sh@19 -- # local var val 00:03:36.187 03:04:10 -- setup/common.sh@20 -- # local mem_f mem 00:03:36.187 03:04:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.187 03:04:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:36.187 03:04:10 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:36.187 03:04:10 -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.187 03:04:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.187 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.187 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.187 03:04:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 14359104 kB' 'MemUsed: 13352720 kB' 'SwapCached: 0 kB' 'Active: 7057820 kB' 'Inactive: 4377196 kB' 'Active(anon): 6844104 kB' 'Inactive(anon): 0 kB' 'Active(file): 213716 kB' 'Inactive(file): 4377196 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11067640 kB' 'Mapped: 174996 kB' 'AnonPages: 367440 kB' 'Shmem: 6476728 kB' 'KernelStack: 5688 kB' 'PageTables: 4516 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 263628 kB' 'Slab: 460872 kB' 'SReclaimable: 263628 kB' 'SUnreclaim: 197244 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 
kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:36.187 03:04:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.187 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.187 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.187 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.187 03:04:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.187 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.187 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.187 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.187 03:04:10 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.187 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.187 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.187 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.187 03:04:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.187 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.187 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.187 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.187 03:04:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.187 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.187 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.187 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.187 03:04:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.187 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.187 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.187 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.187 03:04:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.187 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.187 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.187 03:04:10 
-- setup/common.sh@31 -- # read -r var val _ 00:03:36.187 03:04:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.187 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.187 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.187 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.187 03:04:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.187 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.187 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.187 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.187 03:04:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.187 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.187 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.187 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.187 03:04:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.187 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.187 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.187 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.187 03:04:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.187 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.187 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.187 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.187 03:04:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.187 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.187 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.187 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.187 03:04:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.187 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.187 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.187 03:04:10 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:36.187 03:04:10 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.187 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.187 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.187 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.187 03:04:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.187 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.187 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.187 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.187 03:04:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.187 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.187 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.187 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.187 03:04:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.187 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.187 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.187 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.187 03:04:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.187 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.187 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.187 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.187 03:04:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.188 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.188 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.188 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.188 03:04:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.188 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.188 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.188 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.188 03:04:10 
-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.188 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.188 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.188 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.188 03:04:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.188 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.188 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.188 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.188 03:04:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.188 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.188 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.188 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.188 03:04:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.188 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.188 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.188 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.188 03:04:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.188 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.188 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.188 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.188 03:04:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.188 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.188 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.188 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.188 03:04:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.188 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.188 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.188 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.188 03:04:10 -- setup/common.sh@32 -- # [[ 
AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.188 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.188 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.188 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.188 03:04:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.188 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.188 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.188 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.188 03:04:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.188 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.188 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.188 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.188 03:04:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.188 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.188 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.188 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.188 03:04:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.188 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.188 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.188 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.188 03:04:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.188 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.188 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.188 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.188 03:04:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.188 03:04:10 -- setup/common.sh@32 -- # continue 00:03:36.188 03:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.188 03:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.188 03:04:10 -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:36.188 03:04:10 -- setup/common.sh@32 -- # continue
00:03:36.188 03:04:10 -- setup/common.sh@31 -- # IFS=': '
00:03:36.188 03:04:10 -- setup/common.sh@31 -- # read -r var val _
00:03:36.188 03:04:10 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:36.188 03:04:10 -- setup/common.sh@33 -- # echo 0
00:03:36.188 03:04:10 -- setup/common.sh@33 -- # return 0
00:03:36.188 03:04:10 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:36.188 03:04:10 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:36.188 03:04:10 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:36.188 03:04:10 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:36.188 03:04:10 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:36.188 node0=512 expecting 512
00:03:36.188 03:04:10 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:36.188 03:04:10 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:36.188 03:04:10 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:36.188 03:04:10 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:36.188 node1=512 expecting 512
00:03:36.188 03:04:10 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:36.188 
00:03:36.188 real 0m1.347s
00:03:36.188 user 0m0.562s
00:03:36.188 sys 0m0.740s
00:03:36.188 03:04:10 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:03:36.188 03:04:10 -- common/autotest_common.sh@10 -- # set +x
00:03:36.188 ************************************
00:03:36.188 END TEST per_node_1G_alloc
00:03:36.188 ************************************
00:03:36.188 03:04:10 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:03:36.188 03:04:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:36.188 03:04:10 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:36.188 03:04:10 -- common/autotest_common.sh@10 -- # set +x
00:03:36.460 ************************************
00:03:36.460 START TEST even_2G_alloc
00:03:36.460 ************************************
00:03:36.460 03:04:10 -- common/autotest_common.sh@1111 -- # even_2G_alloc
00:03:36.460 03:04:10 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:03:36.460 03:04:10 -- setup/hugepages.sh@49 -- # local size=2097152
00:03:36.460 03:04:10 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:36.460 03:04:10 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:36.460 03:04:10 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:36.460 03:04:10 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:36.460 03:04:10 -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:36.460 03:04:10 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:36.460 03:04:10 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:36.460 03:04:10 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:36.460 03:04:10 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:36.460 03:04:10 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:36.460 03:04:10 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:36.460 03:04:10 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:36.460 03:04:10 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:36.460 03:04:10 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:36.460 03:04:10 -- setup/hugepages.sh@83 -- # : 512
00:03:36.460 03:04:10 -- setup/hugepages.sh@84 -- # : 1
00:03:36.460 03:04:10 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:36.460 03:04:10 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:36.460 03:04:10 -- setup/hugepages.sh@83 -- # : 0
00:03:36.460 03:04:10 -- setup/hugepages.sh@84 -- # : 0
00:03:36.460 03:04:10 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:36.460 03:04:10 -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:03:36.460 03:04:10 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:03:36.460 03:04:10 -- setup/hugepages.sh@153 -- # setup output
00:03:36.460 03:04:10 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:36.460 03:04:10 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:37.400 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:37.400 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:37.400 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:37.400 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:37.400 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:37.400 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:37.401 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:37.401 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:37.401 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:37.401 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:37.401 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:37.401 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:37.401 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:37.401 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:37.401 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:37.401 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:37.401 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:37.666 03:04:12 -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:03:37.666 03:04:12 -- setup/hugepages.sh@89 -- # local node
00:03:37.666 03:04:12 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:37.666 03:04:12 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:37.666 03:04:12 -- setup/hugepages.sh@92 -- # local surp
00:03:37.666 03:04:12 -- setup/hugepages.sh@93 -- # local resv
00:03:37.666 03:04:12 -- setup/hugepages.sh@94 -- # local anon
00:03:37.666 03:04:12 -- 
setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:37.666 03:04:12 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:37.666 03:04:12 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:37.666 03:04:12 -- setup/common.sh@18 -- # local node= 00:03:37.666 03:04:12 -- setup/common.sh@19 -- # local var val 00:03:37.666 03:04:12 -- setup/common.sh@20 -- # local mem_f mem 00:03:37.666 03:04:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.666 03:04:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:37.666 03:04:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:37.666 03:04:12 -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.666 03:04:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.666 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.666 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.666 03:04:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 36896208 kB' 'MemAvailable: 42067496 kB' 'Buffers: 3728 kB' 'Cached: 18761820 kB' 'SwapCached: 0 kB' 'Active: 14681696 kB' 'Inactive: 4649000 kB' 'Active(anon): 14067576 kB' 'Inactive(anon): 0 kB' 'Active(file): 614120 kB' 'Inactive(file): 4649000 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 568344 kB' 'Mapped: 229068 kB' 'Shmem: 13502428 kB' 'KReclaimable: 552604 kB' 'Slab: 952492 kB' 'SReclaimable: 552604 kB' 'SUnreclaim: 399888 kB' 'KernelStack: 12880 kB' 'PageTables: 9028 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 15249328 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197228 kB' 'VmallocChunk: 0 kB' 'Percpu: 44928 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 
'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2207324 kB' 'DirectMap2M: 28121088 kB' 'DirectMap1G: 38797312 kB' 00:03:37.666 03:04:12 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.666 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.666 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.666 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.666 03:04:12 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.666 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.666 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.666 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.666 03:04:12 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.666 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.666 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.666 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.666 03:04:12 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.666 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.666 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.667 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.667 03:04:12 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.667 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.667 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.667 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.667 03:04:12 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.667 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.667 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.667 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.667 03:04:12 -- setup/common.sh@32 -- # [[ Active == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.667 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.667 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.667 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.667 03:04:12 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.667 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.667 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.667 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.667 03:04:12 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.667 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.667 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.667 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.667 03:04:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.667 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.667 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.667 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.667 03:04:12 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.667 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.667 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.667 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.667 03:04:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.667 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.667 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.667 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.667 03:04:12 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.667 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.667 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.667 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.667 03:04:12 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
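The even_2G_alloc trace earlier (get_test_nr_hugepages_per_node with _no_nodes=2 and nr_hugepages=1024, yielding 512 per node) distributes the requested hugepages evenly across NUMA nodes. A minimal sketch of that split under illustrative names — not the SPDK implementation itself, just the arithmetic the trace shows:

```shell
#!/usr/bin/env bash
# Sketch of the even per-node split seen in the hugepages.sh@81-84 trace:
# each node gets pages/remaining_nodes, filling from the highest node down
# so any remainder lands on the lower-numbered nodes.
split_hugepages_per_node() {
    local _nr_hugepages=$1 _no_nodes=$2
    local -a nodes_test=()
    while (( _no_nodes > 0 )); do
        # integer share for this node out of what is still unassigned
        nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))
        _nr_hugepages=$(( _nr_hugepages - nodes_test[_no_nodes - 1] ))
        _no_nodes=$(( _no_nodes - 1 ))
    done
    echo "${nodes_test[@]}"
}

split_hugepages_per_node 1024 2   # -> 512 512, matching the trace
```

With a count that does not divide evenly (e.g. 10 pages over 3 nodes) the lower nodes absorb the remainder: 4 3 3.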
00:03:37.667 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.667 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.667 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.667 03:04:12 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.667 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.667 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.667 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.667 03:04:12 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.667 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.667 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.667 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.667 03:04:12 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.667 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.667 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.667 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.667 03:04:12 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.667 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.667 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.667 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.667 03:04:12 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.667 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.667 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.667 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.667 03:04:12 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.667 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.667 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.667 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.667 03:04:12 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.667 03:04:12 -- setup/common.sh@32 -- # continue 
00:03:37.667 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.667 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.667 03:04:12 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.667 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.667 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.667 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.667 03:04:12 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.667 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.667 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.667 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.667 03:04:12 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.667 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.667 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.667 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.667 03:04:12 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.667 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.667 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.667 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.667 03:04:12 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.667 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.667 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.667 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.667 03:04:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.667 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.667 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.667 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.667 03:04:12 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.667 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.667 03:04:12 -- setup/common.sh@31 -- # IFS=': 
' 00:03:37.667 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.667 03:04:12 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.667 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.667 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.667 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.667 03:04:12 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.667 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.667 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.667 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.667 03:04:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.667 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.667 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.667 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.667 03:04:12 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.667 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.667 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.667 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.667 03:04:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.667 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.667 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.667 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.667 03:04:12 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.667 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.667 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.667 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.667 03:04:12 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.667 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.667 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.667 03:04:12 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:37.667 03:04:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.667 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.667 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.667 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.667 03:04:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.667 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.667 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.667 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.667 03:04:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.667 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.667 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.667 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.667 03:04:12 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.667 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.667 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.667 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.667 03:04:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.667 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.667 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.667 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.667 03:04:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.667 03:04:12 -- setup/common.sh@33 -- # echo 0 00:03:37.667 03:04:12 -- setup/common.sh@33 -- # return 0 00:03:37.667 03:04:12 -- setup/hugepages.sh@97 -- # anon=0 00:03:37.667 03:04:12 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:37.667 03:04:12 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:37.667 03:04:12 -- setup/common.sh@18 -- # local node= 00:03:37.667 03:04:12 -- setup/common.sh@19 -- # local var val 
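Every get_meminfo call traced here follows the same pattern: pick /proc/meminfo (or the per-node file under /sys, whose lines carry a "Node N " prefix), strip that prefix, then scan "Key: value" pairs with `IFS=': ' read -r var val _` until the requested key matches. A self-contained sketch of that pattern, with illustrative names rather than a copy of setup/common.sh:

```shell
#!/usr/bin/env bash
shopt -s extglob   # needed for the +([0-9]) pattern below

# get_meminfo KEY [NODE] -- print the kB value of KEY from /proc/meminfo,
# or from the per-node meminfo file when NODE is given and exists.
get_meminfo() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo var val _ line
    # per-node files prefix every line with "Node N "
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # strip the per-node prefix
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

get_meminfo MemTotal   # prints the MemTotal value in kB
```

The trailing `_` in the `read` soaks up the "kB" unit, so only the numeric value is returned — which is why the trace arithmetic can add the results directly.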
00:03:37.667 03:04:12 -- setup/common.sh@20 -- # local mem_f mem
00:03:37.667 03:04:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:37.667 03:04:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:37.667 03:04:12 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:37.667 03:04:12 -- setup/common.sh@28 -- # mapfile -t mem
00:03:37.667 03:04:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:37.668 03:04:12 -- setup/common.sh@31 -- # IFS=': '
00:03:37.668 03:04:12 -- setup/common.sh@31 -- # read -r var val _
00:03:37.668 03:04:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 36896056 kB' 'MemAvailable: 42067344 kB' 'Buffers: 3728 kB' 'Cached: 18761824 kB' 'SwapCached: 0 kB' 'Active: 14681704 kB' 'Inactive: 4649000 kB' 'Active(anon): 14067584 kB' 'Inactive(anon): 0 kB' 'Active(file): 614120 kB' 'Inactive(file): 4649000 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 568396 kB' 'Mapped: 229048 kB' 'Shmem: 13502432 kB' 'KReclaimable: 552604 kB' 'Slab: 952464 kB' 'SReclaimable: 552604 kB' 'SUnreclaim: 399860 kB' 'KernelStack: 12928 kB' 'PageTables: 9124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 15249340 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197196 kB' 'VmallocChunk: 0 kB' 'Percpu: 44928 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2207324 kB' 'DirectMap2M: 28121088 kB' 'DirectMap1G: 38797312 kB'
00:03:37.668 03:04:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:37.668 [... trace: setup/common.sh@31-32 compare each remaining /proc/meminfo field (MemFree through HugePages_Free) against HugePages_Surp; none match, the read loop continues ...]
00:03:37.669 03:04:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:37.669 03:04:12 -- setup/common.sh@33 -- # echo 0
00:03:37.669 03:04:12 -- setup/common.sh@33 -- # return 0
00:03:37.669 03:04:12 -- setup/hugepages.sh@99 -- # surp=0
00:03:37.669 03:04:12 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:37.669 03:04:12 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:37.669 03:04:12 -- setup/common.sh@18 -- # local node=
00:03:37.669 03:04:12 -- setup/common.sh@19 -- # local var val
00:03:37.669 03:04:12 -- setup/common.sh@20 -- # local mem_f mem
00:03:37.669 03:04:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:37.669 03:04:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:37.669 03:04:12 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:37.669 03:04:12 -- setup/common.sh@28 -- # mapfile -t mem
00:03:37.669 03:04:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:37.669 03:04:12 -- setup/common.sh@31 -- # IFS=': '
00:03:37.669 03:04:12 -- setup/common.sh@31 -- # read -r var val _
00:03:37.669 03:04:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 36895804 kB' 'MemAvailable: 42067092 kB' 'Buffers: 3728 kB' 'Cached: 18761836 kB' 'SwapCached: 0 kB' 'Active: 14681652 kB' 'Inactive: 4649000 kB' 'Active(anon): 14067532 kB' 'Inactive(anon): 0 kB' 'Active(file): 614120 kB' 'Inactive(file): 4649000 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 568336 kB' 'Mapped: 229048 kB' 'Shmem: 13502444 kB' 'KReclaimable: 552604 kB' 'Slab: 952528 kB' 'SReclaimable: 552604 kB' 'SUnreclaim: 399924 kB' 'KernelStack: 12912 kB' 'PageTables: 9100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 15249352 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197180 kB' 'VmallocChunk: 0 kB' 'Percpu: 44928 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2207324 kB' 'DirectMap2M: 28121088 kB' 'DirectMap1G: 38797312 kB'
00:03:37.669 03:04:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:37.669 [... trace: setup/common.sh@31-32 compare each remaining /proc/meminfo field against HugePages_Rsvd; none match, the read loop continues ...]
00:03:37.670 03:04:12 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:37.670 03:04:12 -- setup/common.sh@33 -- # echo 0
00:03:37.670 03:04:12 -- setup/common.sh@33 -- # return 0
00:03:37.670 03:04:12 -- setup/hugepages.sh@100 -- # resv=0
00:03:37.670 03:04:12 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:37.670 nr_hugepages=1024
00:03:37.670 03:04:12 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:37.670 resv_hugepages=0
00:03:37.670 03:04:12 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:37.670 surplus_hugepages=0
00:03:37.670 03:04:12 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:37.670 anon_hugepages=0
00:03:37.670 03:04:12 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:37.670 03:04:12 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:37.671 03:04:12 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:37.671 03:04:12 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:37.671 03:04:12 -- setup/common.sh@18 -- # local node=
00:03:37.671 03:04:12 -- setup/common.sh@19 -- # local var val
00:03:37.671 03:04:12 -- setup/common.sh@20 -- # local mem_f mem
00:03:37.671 03:04:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:37.671 03:04:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:37.671 03:04:12 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:37.671 03:04:12 -- setup/common.sh@28 -- # mapfile -t mem
00:03:37.671 03:04:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:37.671 03:04:12 -- setup/common.sh@31 -- # IFS=': '
00:03:37.671 03:04:12 -- setup/common.sh@31 -- # read -r var val _
00:03:37.671 03:04:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 36895804 kB' 'MemAvailable: 42067092 kB' 'Buffers: 3728 kB' 'Cached: 18761852 kB' 'SwapCached: 0 kB' 'Active: 14681628 kB' 'Inactive: 4649000 kB' 'Active(anon): 14067508 kB' 'Inactive(anon): 0 kB' 'Active(file): 614120 kB' 'Inactive(file): 4649000 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 568364 kB' 'Mapped: 229048 kB' 'Shmem: 13502460 kB' 'KReclaimable: 552604 kB' 'Slab: 952520 kB' 'SReclaimable: 552604 kB' 'SUnreclaim: 399916 kB' 'KernelStack: 12928 kB' 'PageTables: 9156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 15249368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197180 kB' 'VmallocChunk: 0 kB' 'Percpu: 44928 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2207324 kB' 'DirectMap2M: 28121088 kB' 'DirectMap1G: 38797312 kB'
00:03:37.671 03:04:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:37.671 [... trace: field-by-field comparison against HugePages_Total continues; log truncated ...]
00:03:37.671 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.671 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.671 03:04:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.671 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.671 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.671 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.671 03:04:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.671 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.671 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.671 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.671 03:04:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.671 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.671 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.671 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.671 03:04:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.671 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.671 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.671 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.671 03:04:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.671 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.671 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.671 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.671 03:04:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.671 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.671 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.671 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.671 03:04:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.671 03:04:12 -- setup/common.sh@32 -- # continue 
00:03:37.671 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.671 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.671 03:04:12 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.671 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.671 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.671 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.671 03:04:12 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.671 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.671 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.671 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.671 03:04:12 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.671 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.671 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.671 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.671 03:04:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.671 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.671 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.671 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.671 03:04:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.671 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.671 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.671 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.672 03:04:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.672 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.672 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.672 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.672 03:04:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.672 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.672 03:04:12 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:37.672 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.672 03:04:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.672 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.672 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.672 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.672 03:04:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.672 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.672 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.672 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.672 03:04:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.672 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.672 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.672 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.672 03:04:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.672 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.672 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.672 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.672 03:04:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.672 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.672 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.672 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.672 03:04:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.672 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.672 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.672 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.672 03:04:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.672 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.672 03:04:12 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:37.672 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.672 03:04:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.672 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.672 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.672 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.672 03:04:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.672 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.672 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.672 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.672 03:04:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.672 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.672 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.672 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.672 03:04:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.672 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.672 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.672 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.672 03:04:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.672 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.672 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.672 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.672 03:04:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.672 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.672 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.672 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.672 03:04:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.672 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.672 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 
00:03:37.672 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.672 03:04:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.672 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.672 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.672 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.672 03:04:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.672 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.672 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.672 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.672 03:04:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.672 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.672 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.672 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.672 03:04:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.672 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.672 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.672 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.672 03:04:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.672 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.672 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.672 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.672 03:04:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.672 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.672 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.672 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.672 03:04:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.672 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.672 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 
00:03:37.672 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.672 03:04:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.672 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.672 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.672 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.672 03:04:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.672 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.672 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.672 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.672 03:04:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.672 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.672 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.672 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.672 03:04:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.672 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.672 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.672 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.672 03:04:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.672 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.672 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.672 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.672 03:04:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.672 03:04:12 -- setup/common.sh@33 -- # echo 1024 00:03:37.672 03:04:12 -- setup/common.sh@33 -- # return 0 00:03:37.672 03:04:12 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:37.672 03:04:12 -- setup/hugepages.sh@112 -- # get_nodes 00:03:37.672 03:04:12 -- setup/hugepages.sh@27 -- # local node 00:03:37.672 03:04:12 -- setup/hugepages.sh@29 -- # for node in 
/sys/devices/system/node/node+([0-9]) 00:03:37.672 03:04:12 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:37.672 03:04:12 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:37.672 03:04:12 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:37.672 03:04:12 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:37.672 03:04:12 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:37.672 03:04:12 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:37.672 03:04:12 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:37.672 03:04:12 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:37.672 03:04:12 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:37.672 03:04:12 -- setup/common.sh@18 -- # local node=0 00:03:37.672 03:04:12 -- setup/common.sh@19 -- # local var val 00:03:37.672 03:04:12 -- setup/common.sh@20 -- # local mem_f mem 00:03:37.672 03:04:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.672 03:04:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:37.672 03:04:12 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:37.672 03:04:12 -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.672 03:04:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.672 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.672 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.672 03:04:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 22534808 kB' 'MemUsed: 10295076 kB' 'SwapCached: 0 kB' 'Active: 7624288 kB' 'Inactive: 271804 kB' 'Active(anon): 7223884 kB' 'Inactive(anon): 0 kB' 'Active(file): 400404 kB' 'Inactive(file): 271804 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7697952 kB' 'Mapped: 54020 kB' 'AnonPages: 201376 kB' 'Shmem: 7025744 kB' 'KernelStack: 7224 kB' 'PageTables: 4684 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 288976 kB' 'Slab: 491728 kB' 'SReclaimable: 288976 kB' 'SUnreclaim: 202752 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:37.672 03:04:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.672 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.672 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.672 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.672 03:04:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.672 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.672 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.672 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.673 03:04:12 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.673 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.673 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.673 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.673 03:04:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.673 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.673 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.673 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.673 03:04:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.673 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.673 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.934 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.934 03:04:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.934 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.934 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.934 03:04:12 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:37.934 03:04:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.934 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.934 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.934 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.934 03:04:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.934 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.934 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.934 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.934 03:04:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.934 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.934 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.934 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.934 03:04:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.934 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.934 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.934 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.934 03:04:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.934 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.934 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.934 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.934 03:04:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.934 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.934 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.934 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.934 03:04:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.934 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.934 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.934 03:04:12 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:37.934 03:04:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.934 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.934 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.934 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.934 03:04:12 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.934 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.934 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.934 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.934 03:04:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.934 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.934 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.934 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.934 03:04:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.934 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.934 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.934 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.934 03:04:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.934 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.934 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.934 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.934 03:04:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.934 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.934 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.934 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.934 03:04:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.934 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.934 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.934 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.934 03:04:12 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.934 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.934 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.934 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.934 03:04:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.934 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.934 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.934 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.934 03:04:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.934 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.934 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.934 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.934 03:04:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.934 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.934 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.934 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.934 03:04:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.934 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.934 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.934 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.934 03:04:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.934 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.934 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.934 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.934 03:04:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.934 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.934 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.934 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.934 03:04:12 -- setup/common.sh@32 -- # [[ 
SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.934 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.934 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.934 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.934 03:04:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.934 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.934 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.934 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.934 03:04:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.934 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.934 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.934 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.934 03:04:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.934 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.934 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.934 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.934 03:04:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.934 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.934 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.934 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.934 03:04:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.934 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.934 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.934 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.934 03:04:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.934 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.934 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.934 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.934 03:04:12 -- setup/common.sh@32 -- # [[ HugePages_Total 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.934 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.934 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.934 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.934 03:04:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.934 03:04:12 -- setup/common.sh@32 -- # continue 00:03:37.934 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.934 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.934 03:04:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.934 03:04:12 -- setup/common.sh@33 -- # echo 0 00:03:37.934 03:04:12 -- setup/common.sh@33 -- # return 0 00:03:37.934 03:04:12 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:37.934 03:04:12 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:37.934 03:04:12 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:37.934 03:04:12 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:37.934 03:04:12 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:37.934 03:04:12 -- setup/common.sh@18 -- # local node=1 00:03:37.934 03:04:12 -- setup/common.sh@19 -- # local var val 00:03:37.934 03:04:12 -- setup/common.sh@20 -- # local mem_f mem 00:03:37.935 03:04:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.935 03:04:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:37.935 03:04:12 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:37.935 03:04:12 -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.935 03:04:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.935 03:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.935 03:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.935 03:04:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 14361284 kB' 'MemUsed: 13350540 kB' 'SwapCached: 0 kB' 
'Active: 7057552 kB' 'Inactive: 4377196 kB' 'Active(anon): 6843836 kB' 'Inactive(anon): 0 kB' 'Active(file): 213716 kB' 'Inactive(file): 4377196 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11067640 kB' 'Mapped: 175028 kB' 'AnonPages: 367132 kB' 'Shmem: 6476728 kB' 'KernelStack: 5704 kB' 'PageTables: 4472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 263628 kB' 'Slab: 460792 kB' 'SReclaimable: 263628 kB' 'SUnreclaim: 197164 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:37.935 03:04:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:37.935 03:04:12 -- setup/common.sh@32 -- # continue
[... identical '@31 IFS'/'@31 read'/'@32 [[ ... ]]'/'@32 continue' xtrace repeated for each remaining meminfo field, condensed ...]
00:03:37.936 03:04:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:37.936 03:04:12 -- setup/common.sh@33 -- # echo 0
00:03:37.936 03:04:12 -- setup/common.sh@33 -- # return 0
00:03:37.936 03:04:12 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:37.936 03:04:12 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:37.936 03:04:12 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:37.936 03:04:12 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:37.936 03:04:12 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:37.936 node0=512 expecting 512
00:03:37.936 03:04:12 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:37.936 03:04:12 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:37.936 03:04:12 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:37.936 03:04:12 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:37.936 node1=512 expecting 512
00:03:37.936 03:04:12 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:37.936
00:03:37.936 real
0m1.512s
00:03:37.936 user 0m0.630s
00:03:37.936 sys 0m0.833s
00:03:37.936 03:04:12 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:03:37.936 03:04:12 -- common/autotest_common.sh@10 -- # set +x
00:03:37.936 ************************************
00:03:37.936 END TEST even_2G_alloc
************************************
00:03:37.936 03:04:12 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:03:37.936 03:04:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:37.936 03:04:12 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:37.936 03:04:12 -- common/autotest_common.sh@10 -- # set +x
00:03:37.936 ************************************
00:03:37.936 START TEST odd_alloc
************************************
00:03:37.936 03:04:12 -- common/autotest_common.sh@1111 -- # odd_alloc
00:03:37.936 03:04:12 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:03:37.936 03:04:12 -- setup/hugepages.sh@49 -- # local size=2098176
00:03:37.936 03:04:12 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:37.936 03:04:12 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:37.936 03:04:12 -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:03:37.936 03:04:12 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:37.936 03:04:12 -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:37.936 03:04:12 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:37.936 03:04:12 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:03:37.936 03:04:12 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:37.936 03:04:12 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:37.936 03:04:12 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:37.936 03:04:12 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:37.936 03:04:12 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:37.936 03:04:12 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:37.936 03:04:12 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:37.936 03:04:12 -- setup/hugepages.sh@83 -- # : 513
00:03:37.936 03:04:12 -- setup/hugepages.sh@84 -- # : 1
00:03:37.936 03:04:12 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:37.936 03:04:12 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:03:37.936 03:04:12 -- setup/hugepages.sh@83 -- # : 0
00:03:37.936 03:04:12 -- setup/hugepages.sh@84 -- # : 0
00:03:37.936 03:04:12 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:37.936 03:04:12 -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:03:37.936 03:04:12 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:03:37.936 03:04:12 -- setup/hugepages.sh@160 -- # setup output
00:03:37.936 03:04:12 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:37.936 03:04:12 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:38.875 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:38.875 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:38.875 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:38.875 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:38.875 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:39.136 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:39.136 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:39.136 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:39.136 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:39.136 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:39.136 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:39.136 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:39.136 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:39.136 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:39.137 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:39.137 0000:80:04.1 (8086 0e21): Already using
the vfio-pci driver
00:03:39.137 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:39.137 03:04:13 -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:03:39.137 03:04:13 -- setup/hugepages.sh@89 -- # local node
00:03:39.137 03:04:13 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:39.137 03:04:13 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:39.137 03:04:13 -- setup/hugepages.sh@92 -- # local surp
00:03:39.137 03:04:13 -- setup/hugepages.sh@93 -- # local resv
00:03:39.137 03:04:13 -- setup/hugepages.sh@94 -- # local anon
00:03:39.137 03:04:13 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:39.137 03:04:13 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:39.137 03:04:13 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:39.137 03:04:13 -- setup/common.sh@18 -- # local node=
00:03:39.137 03:04:13 -- setup/common.sh@19 -- # local var val
00:03:39.137 03:04:13 -- setup/common.sh@20 -- # local mem_f mem
00:03:39.137 03:04:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:39.137 03:04:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:39.137 03:04:13 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:39.137 03:04:13 -- setup/common.sh@28 -- # mapfile -t mem
00:03:39.137 03:04:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:39.137 03:04:13 -- setup/common.sh@31 -- # IFS=': '
00:03:39.137 03:04:13 -- setup/common.sh@31 -- # read -r var val _
00:03:39.137 03:04:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 36876180 kB' 'MemAvailable: 42047468 kB' 'Buffers: 3728 kB' 'Cached: 18761924 kB' 'SwapCached: 0 kB' 'Active: 14678332 kB' 'Inactive: 4649000 kB' 'Active(anon): 14064212 kB' 'Inactive(anon): 0 kB' 'Active(file): 614120 kB' 'Inactive(file): 4649000 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564868 kB' 'Mapped: 228044 kB' 'Shmem: 13502532 kB' 'KReclaimable: 552604 kB' 'Slab: 952588 kB' 'SReclaimable: 552604 kB' 'SUnreclaim: 399984 kB' 'KernelStack: 12880 kB' 'PageTables: 8908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 15235604 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197212 kB' 'VmallocChunk: 0 kB' 'Percpu: 44928 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2207324 kB' 'DirectMap2M: 28121088 kB' 'DirectMap1G: 38797312 kB'
00:03:39.137 03:04:13 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:39.137 03:04:13 -- setup/common.sh@32 -- # continue
[... identical '@31 IFS'/'@31 read'/'@32 [[ ... ]]'/'@32 continue' xtrace repeated for each field from MemFree through HardwareCorrupted, condensed ...]
00:03:39.138 03:04:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:39.138 03:04:13 -- setup/common.sh@33 -- # echo 0
00:03:39.138 03:04:13 -- setup/common.sh@33 -- # return 0
00:03:39.138 03:04:13 -- setup/hugepages.sh@97 -- # anon=0
00:03:39.138 03:04:13 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:39.138 03:04:13 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:39.138 03:04:13 -- setup/common.sh@18 -- # local node=
00:03:39.138 03:04:13 -- setup/common.sh@19 -- # local var val
00:03:39.138 03:04:13 -- setup/common.sh@20 -- # local mem_f mem
00:03:39.138 03:04:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:39.138 03:04:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:39.138 03:04:13 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:39.138 03:04:13 -- setup/common.sh@28 -- # mapfile -t mem
00:03:39.138 03:04:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:39.138 03:04:13 -- setup/common.sh@31 -- # IFS=': '
00:03:39.138 03:04:13 -- setup/common.sh@31 -- # read -r var val _
00:03:39.138 03:04:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 36875928 kB' 'MemAvailable: 42047216 kB' 'Buffers: 3728 kB' 'Cached: 18761924 kB' 'SwapCached: 0 kB' 'Active: 14678768 kB' 'Inactive: 4649000 kB' 'Active(anon): 14064648 kB' 'Inactive(anon): 0 kB' 'Active(file): 614120 kB' 'Inactive(file): 4649000 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 565344 kB' 'Mapped: 228088 kB' 'Shmem: 13502532 kB' 'KReclaimable: 552604 kB' 'Slab: 952580 kB' 'SReclaimable: 552604 kB' 'SUnreclaim: 399976 kB' 'KernelStack: 12896 kB' 'PageTables: 8984 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 15235248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed:
197180 kB' 'VmallocChunk: 0 kB' 'Percpu: 44928 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2207324 kB' 'DirectMap2M: 28121088 kB' 'DirectMap1G: 38797312 kB'
00:03:39.138 03:04:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:39.138 03:04:13 -- setup/common.sh@32 -- # continue
[... identical '@31 IFS'/'@31 read'/'@32 [[ ... ]]'/'@32 continue' xtrace repeated for each remaining meminfo field, condensed; this log chunk ends mid-scan ...]
00:03:39.139 03:04:13 -- setup/common.sh@31
-- # read -r var val _ 00:03:39.139 03:04:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.139 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.139 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.139 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.139 03:04:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.139 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.139 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.139 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.139 03:04:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.139 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.139 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.139 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.139 03:04:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.139 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.139 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.139 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.139 03:04:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.139 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.139 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.139 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.139 03:04:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.139 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.139 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.139 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.139 03:04:13 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.139 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.139 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.139 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 
00:03:39.139 03:04:13 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.139 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.139 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.139 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.139 03:04:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.139 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.139 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.139 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.139 03:04:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.139 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.139 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.139 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.139 03:04:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.139 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.139 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.139 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.139 03:04:13 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.139 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.139 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.139 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.139 03:04:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.139 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.139 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.139 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.139 03:04:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.139 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.139 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.139 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.139 
03:04:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.139 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.139 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.139 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.139 03:04:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.139 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.139 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.139 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.139 03:04:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.139 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.139 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.139 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.139 03:04:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.139 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.139 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.139 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.139 03:04:13 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.139 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.139 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.139 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.139 03:04:13 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.139 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.139 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.139 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.139 03:04:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.139 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.139 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.139 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.139 03:04:13 -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.139 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.139 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.139 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.139 03:04:13 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.139 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.139 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.139 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.139 03:04:13 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.139 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.139 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.139 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.139 03:04:13 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.139 03:04:13 -- setup/common.sh@33 -- # echo 0 00:03:39.139 03:04:13 -- setup/common.sh@33 -- # return 0 00:03:39.139 03:04:13 -- setup/hugepages.sh@99 -- # surp=0 00:03:39.139 03:04:13 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:39.139 03:04:13 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:39.139 03:04:13 -- setup/common.sh@18 -- # local node= 00:03:39.139 03:04:13 -- setup/common.sh@19 -- # local var val 00:03:39.139 03:04:13 -- setup/common.sh@20 -- # local mem_f mem 00:03:39.139 03:04:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:39.139 03:04:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:39.139 03:04:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:39.139 03:04:13 -- setup/common.sh@28 -- # mapfile -t mem 00:03:39.139 03:04:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:39.139 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.139 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.140 03:04:13 -- setup/common.sh@16 -- # 
printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 36876072 kB' 'MemAvailable: 42047360 kB' 'Buffers: 3728 kB' 'Cached: 18761936 kB' 'SwapCached: 0 kB' 'Active: 14678116 kB' 'Inactive: 4649000 kB' 'Active(anon): 14063996 kB' 'Inactive(anon): 0 kB' 'Active(file): 614120 kB' 'Inactive(file): 4649000 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564664 kB' 'Mapped: 228016 kB' 'Shmem: 13502544 kB' 'KReclaimable: 552604 kB' 'Slab: 952580 kB' 'SReclaimable: 552604 kB' 'SUnreclaim: 399976 kB' 'KernelStack: 12832 kB' 'PageTables: 8752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 15235260 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197132 kB' 'VmallocChunk: 0 kB' 'Percpu: 44928 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2207324 kB' 'DirectMap2M: 28121088 kB' 'DirectMap1G: 38797312 kB' 00:03:39.140 03:04:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.140 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.140 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.140 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.140 03:04:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.140 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.140 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.140 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.140 03:04:13 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.140 03:04:13 -- setup/common.sh@32 -- # 
continue 00:03:39.140 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.140 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.140 03:04:13 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.140 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.140
[... identical read/compare/continue trace repeated for every remaining /proc/meminfo key until HugePages_Rsvd matches ...]
03:04:13 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.403 03:04:13 -- setup/common.sh@33 -- # echo 0 00:03:39.403 03:04:13 -- setup/common.sh@33 -- # return 0 00:03:39.403 03:04:13 -- setup/hugepages.sh@100 -- # resv=0 00:03:39.403 03:04:13 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:39.403 nr_hugepages=1025 03:04:13 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:39.403 resv_hugepages=0 03:04:13 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:39.403 surplus_hugepages=0 00:03:39.403
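The `IFS=': ' / read -r var val _ / continue` trace above is `setup/common.sh`'s `get_meminfo` walking `/proc/meminfo` one key at a time until the requested counter matches. A minimal, self-contained sketch of that parsing pattern (the function name and sample input here are illustrative, not taken from the SPDK scripts):

```shell
# Sketch of the parsing pattern the xtrace shows: split each
# "Key: value [kB]" line on ': ' and print the value for the requested key.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # same comparison the trace renders as [[ $var == \H\u\g\e... ]]
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

# Feed it two lines from the meminfo snapshot logged above:
printf '%s\n' 'HugePages_Total: 1025' 'HugePages_Rsvd: 0' |
    get_meminfo_sketch HugePages_Rsvd   # prints 0
```

Because `IFS` contains both `:` and a space, `read` drops the key's trailing colon and leaves any `kB` unit in the discarded `_` field, which is why the trace compares bare key names.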
03:04:13 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:39.403 anon_hugepages=0 00:03:39.403 03:04:13 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:39.403 03:04:13 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:39.403 03:04:13 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:39.403 03:04:13 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:39.403 03:04:13 -- setup/common.sh@18 -- # local node= 00:03:39.403 03:04:13 -- setup/common.sh@19 -- # local var val 00:03:39.403 03:04:13 -- setup/common.sh@20 -- # local mem_f mem 00:03:39.403 03:04:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:39.403 03:04:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:39.403 03:04:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:39.403 03:04:13 -- setup/common.sh@28 -- # mapfile -t mem 00:03:39.403 03:04:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:39.403 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.403 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.403 03:04:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 36876432 kB' 'MemAvailable: 42047720 kB' 'Buffers: 3728 kB' 'Cached: 18761952 kB' 'SwapCached: 0 kB' 'Active: 14678060 kB' 'Inactive: 4649000 kB' 'Active(anon): 14063940 kB' 'Inactive(anon): 0 kB' 'Active(file): 614120 kB' 'Inactive(file): 4649000 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564668 kB' 'Mapped: 228016 kB' 'Shmem: 13502560 kB' 'KReclaimable: 552604 kB' 'Slab: 952572 kB' 'SReclaimable: 552604 kB' 'SUnreclaim: 399968 kB' 'KernelStack: 12832 kB' 'PageTables: 8752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 15235408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197132 kB' 
'VmallocChunk: 0 kB' 'Percpu: 44928 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2207324 kB' 'DirectMap2M: 28121088 kB' 'DirectMap1G: 38797312 kB' 00:03:39.403 03:04:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.403 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.403
[... identical read/compare/continue trace repeated for each /proc/meminfo key while get_meminfo scans for HugePages_Total ...]
' 00:03:39.404 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.404 03:04:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.404 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.404 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.404 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.404 03:04:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.404 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.404 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.404 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.404 03:04:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.404 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.404 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.404 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.404 03:04:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.404 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.404 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.404 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.404 03:04:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.404 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.404 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.404 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.404 03:04:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.404 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.404 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.404 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.404 03:04:13 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.404 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.404 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 
00:03:39.404 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.404 03:04:13 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.404 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.404 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.404 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.404 03:04:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.404 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.404 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.404 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.404 03:04:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.404 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.404 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.404 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.404 03:04:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.404 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.404 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.404 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.404 03:04:13 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.404 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.404 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.404 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.404 03:04:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.404 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.404 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.404 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.404 03:04:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.404 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.404 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 
00:03:39.404 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.404 03:04:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.404 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.404 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.404 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.404 03:04:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.404 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.404 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.404 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.405 03:04:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.405 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.405 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.405 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.405 03:04:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.405 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.405 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.405 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.405 03:04:13 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.405 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.405 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.405 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.405 03:04:13 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.405 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.405 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.405 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.405 03:04:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.405 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.405 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 
00:03:39.405 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.405 03:04:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.405 03:04:13 -- setup/common.sh@33 -- # echo 1025 00:03:39.405 03:04:13 -- setup/common.sh@33 -- # return 0 00:03:39.405 03:04:13 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:39.405 03:04:13 -- setup/hugepages.sh@112 -- # get_nodes 00:03:39.405 03:04:13 -- setup/hugepages.sh@27 -- # local node 00:03:39.405 03:04:13 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:39.405 03:04:13 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:39.405 03:04:13 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:39.405 03:04:13 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:39.405 03:04:13 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:39.405 03:04:13 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:39.405 03:04:13 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:39.405 03:04:13 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:39.405 03:04:13 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:39.405 03:04:13 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:39.405 03:04:13 -- setup/common.sh@18 -- # local node=0 00:03:39.405 03:04:13 -- setup/common.sh@19 -- # local var val 00:03:39.405 03:04:13 -- setup/common.sh@20 -- # local mem_f mem 00:03:39.405 03:04:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:39.405 03:04:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:39.405 03:04:13 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:39.405 03:04:13 -- setup/common.sh@28 -- # mapfile -t mem 00:03:39.405 03:04:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:39.405 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 
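The long run of `continue` iterations ending above is `get_meminfo` scanning meminfo line by line until the requested key matches, then echoing its value (here `HugePages_Total: 1025`). A simplified sketch of that loop, tested against a fabricated meminfo snippet rather than this host's real values:

```shell
#!/usr/bin/env bash
# Sketch of the key lookup seen in the trace: split each line on
# ": " into key/value, skip (continue) until the key matches, then
# print the value. get_meminfo_value is an illustrative name, not
# the function from setup/common.sh.
get_meminfo_value() {
    local get=$1 mem_f=$2
    local var val _
    while IFS=': ' read -r var val _; do
        # Every non-matching key produces one "continue" iteration,
        # exactly like the repeated entries in the log.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < "$mem_f"
    return 1
}

# Demo input: values are fabricated for illustration.
printf '%s\n' 'MemTotal: 32829884 kB' 'HugePages_Total: 1025' \
    'HugePages_Free: 1025' > /tmp/meminfo.demo
get_meminfo_value HugePages_Total /tmp/meminfo.demo
```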
00:03:39.405 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.405 03:04:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 22523164 kB' 'MemUsed: 10306720 kB' 'SwapCached: 0 kB' 'Active: 7621920 kB' 'Inactive: 271804 kB' 'Active(anon): 7221516 kB' 'Inactive(anon): 0 kB' 'Active(file): 400404 kB' 'Inactive(file): 271804 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7698052 kB' 'Mapped: 53060 kB' 'AnonPages: 198816 kB' 'Shmem: 7025844 kB' 'KernelStack: 7128 kB' 'PageTables: 4360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 288976 kB' 'Slab: 491656 kB' 'SReclaimable: 288976 kB' 'SUnreclaim: 202680 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:39.405 03:04:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.405 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.405 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.405 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.405 03:04:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.405 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.405 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.405 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.405 03:04:13 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.405 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.405 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.405 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.405 03:04:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.405 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.405 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.405 
03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.405 03:04:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.405 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.405 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.405 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.405 03:04:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.405 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.405 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.405 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.405 03:04:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.405 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.405 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.405 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.405 03:04:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.405 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.405 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.405 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.405 03:04:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.405 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.405 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.405 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.405 03:04:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.405 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.405 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.405 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.405 03:04:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.405 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.405 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.405 03:04:13 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:39.405 03:04:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.405 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.405 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.405 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.405 03:04:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.405 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.405 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.405 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.405 03:04:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.405 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.405 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.405 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.405 03:04:13 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.405 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.405 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.405 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.405 03:04:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.405 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.405 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.405 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.405 03:04:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.405 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.405 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.405 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.405 03:04:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.405 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.405 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.405 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 
00:03:39.405 03:04:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.405 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.405 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.405 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.405 03:04:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.405 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.405 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.405 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.405 03:04:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.405 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.405 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.405 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.405 03:04:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.405 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.405 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.405 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.405 03:04:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.405 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.405 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.405 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.405 03:04:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.405 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.405 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.405 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.405 03:04:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.405 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.405 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.405 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.405 03:04:13 -- 
setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.405 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.405 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.405 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.405 03:04:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.406 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.406 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.406 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.406 03:04:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.406 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.406 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.406 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.406 03:04:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.406 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.406 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.406 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.406 03:04:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.406 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.406 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.406 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.406 03:04:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.406 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.406 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.406 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.406 03:04:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.406 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.406 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.406 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.406 03:04:13 -- setup/common.sh@32 -- 
# [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.406 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.406 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.406 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.406 03:04:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.406 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.406 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.406 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.406 03:04:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.406 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.406 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.406 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.406 03:04:13 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.406 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.406 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.406 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.406 03:04:13 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.406 03:04:13 -- setup/common.sh@33 -- # echo 0 00:03:39.406 03:04:13 -- setup/common.sh@33 -- # return 0 00:03:39.406 03:04:13 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:39.406 03:04:13 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:39.406 03:04:13 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:39.406 03:04:13 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:39.406 03:04:13 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:39.406 03:04:13 -- setup/common.sh@18 -- # local node=1 00:03:39.406 03:04:13 -- setup/common.sh@19 -- # local var val 00:03:39.406 03:04:13 -- setup/common.sh@20 -- # local mem_f mem 00:03:39.406 03:04:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:39.406 03:04:13 -- 
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:39.406 03:04:13 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:39.406 03:04:13 -- setup/common.sh@28 -- # mapfile -t mem 00:03:39.406 03:04:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:39.406 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.406 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.406 03:04:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 14352764 kB' 'MemUsed: 13359060 kB' 'SwapCached: 0 kB' 'Active: 7056484 kB' 'Inactive: 4377196 kB' 'Active(anon): 6842768 kB' 'Inactive(anon): 0 kB' 'Active(file): 213716 kB' 'Inactive(file): 4377196 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11067644 kB' 'Mapped: 174956 kB' 'AnonPages: 366208 kB' 'Shmem: 6476732 kB' 'KernelStack: 5720 kB' 'PageTables: 4440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 263628 kB' 'Slab: 460916 kB' 'SReclaimable: 263628 kB' 'SUnreclaim: 197288 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:39.406 03:04:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.406 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.406 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.406 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.406 03:04:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.406 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.406 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.406 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.406 03:04:13 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.406 03:04:13 -- 
setup/common.sh@32 -- # continue 00:03:39.406 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.406 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.406 03:04:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.406 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.406 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.406 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.406 03:04:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.406 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.406 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.406 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.406 03:04:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.406 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.406 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.406 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.406 03:04:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.406 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.406 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.406 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.406 03:04:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.406 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.406 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.406 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.406 03:04:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.406 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.406 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.406 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.406 03:04:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.406 03:04:13 -- setup/common.sh@32 -- # 
continue 00:03:39.406 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.406 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.406 03:04:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.406 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.406 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.406 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.406 03:04:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.406 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.406 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.406 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.406 03:04:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.406 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.406 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.406 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.406 03:04:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.406 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.406 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.406 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.406 03:04:13 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.406 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.406 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.406 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.406 03:04:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.406 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.406 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.406 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.406 03:04:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.406 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.406 03:04:13 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:39.406 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.406 03:04:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.406 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.406 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.406 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.406 03:04:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.406 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.406 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.406 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.406 03:04:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.406 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.406 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.406 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.406 03:04:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.406 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.406 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.406 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.406 03:04:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.406 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.406 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.406 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.406 03:04:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.406 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.406 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.406 03:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.406 03:04:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.406 03:04:13 -- setup/common.sh@32 -- # continue 00:03:39.406 03:04:13 -- setup/common.sh@31 -- # IFS=': ' 
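The per-node lookups in this section (`local node=0`, then `node=1`) switch the input file from `/proc/meminfo` to `/sys/devices/system/node/nodeN/meminfo`, whose lines carry a `Node N ` prefix that the `mapfile`/`${mem[@]#...}` step strips before the same key/value scan runs. A sketch of that file selection, assuming a Linux host (on a non-NUMA machine it simply falls back to the global file):

```shell
#!/usr/bin/env bash
# Pick the per-node meminfo file when it exists, else the global one,
# mirroring the mem_f assignment in the trace above.
node=0
mem_f=/proc/meminfo
[[ -e /sys/devices/system/node/node$node/meminfo ]] &&
    mem_f=/sys/devices/system/node/node$node/meminfo
mapfile -t mem < "$mem_f"
# Per-node files prefix every line with "Node <N> "; this strip is a
# no-op for plain /proc/meminfo lines. ("Node * " stands in for the
# extglob pattern used in the original script.)
mem=("${mem[@]#Node * }")
echo "${mem[0]%%:*}"   # first key of the file
```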
00:03:39.407 03:04:13 -- setup/common.sh@31 -- # read -r var val _
00:03:39.407 03:04:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:39.407 03:04:13 -- setup/common.sh@32 -- # continue
00:03:39.407 03:04:13 [xtrace condensed: the same read/continue cycle repeats for Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total and HugePages_Free]
00:03:39.407 03:04:13 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:39.407 03:04:13 -- setup/common.sh@33 -- # echo 0
00:03:39.407 03:04:13 -- setup/common.sh@33 -- # return 0
00:03:39.407 03:04:13 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:39.407 03:04:13 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:39.407 03:04:13 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:39.407 03:04:13 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:39.407 03:04:13 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
00:03:39.407 node0=512 expecting 513
00:03:39.407 03:04:13 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:39.407 03:04:13 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:39.407 03:04:13 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:39.407 03:04:13 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
00:03:39.407 node1=513 expecting 512
00:03:39.407 03:04:13 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:03:39.407 
00:03:39.407 real 0m1.397s
00:03:39.407 user 0m0.588s
00:03:39.407 sys 0m0.767s
00:03:39.407 03:04:13 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:03:39.407 03:04:13 -- common/autotest_common.sh@10 -- # set +x
00:03:39.407 ************************************
00:03:39.407 END TEST odd_alloc
00:03:39.407 ************************************
00:03:39.407 03:04:13 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:03:39.407 03:04:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:39.407 03:04:13 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:39.407 03:04:13 -- common/autotest_common.sh@10 -- # set +x
00:03:39.407 ************************************
00:03:39.407 START TEST custom_alloc
00:03:39.407 ************************************
00:03:39.407 03:04:13 -- common/autotest_common.sh@1111 -- # custom_alloc
00:03:39.407 03:04:13 -- setup/hugepages.sh@167 -- # local IFS=,
00:03:39.407 03:04:13 -- setup/hugepages.sh@169 -- # local node
00:03:39.407 03:04:13 -- setup/hugepages.sh@170 -- # nodes_hp=()
00:03:39.407 03:04:13 -- setup/hugepages.sh@170 -- # local nodes_hp
00:03:39.407 03:04:13 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:03:39.407 03:04:13 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:03:39.407 03:04:13 -- setup/hugepages.sh@49 -- # local size=1048576
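[editor's sketch] The `setup/common.sh@31`/`@32` xtrace above comes from a get_meminfo-style helper that scans meminfo output field by field, splitting each `Field: value` line on `': '` and skipping everything until the requested field matches. A minimal self-contained sketch of that pattern, assuming illustrative sample data rather than the exact SPDK source:

```shell
#!/usr/bin/env bash
# Sketch of the field-matching loop traced above: IFS=': ' splits each
# "Field: value" line, non-matching fields hit `continue`, and the value
# is echoed once the requested field is found. The sample string below
# stands in for /proc/meminfo.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip non-matching fields
        echo "$val"
        return 0
    done
}

sample='HugePages_Total: 1024
HugePages_Free: 1024
HugePages_Surp: 0'

get_meminfo_sketch HugePages_Surp <<<"$sample"
```

In the traced run this lookup returns 0 surplus pages, which is why the log shows `echo 0` followed by `return 0`.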
00:03:39.407 03:04:13 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:39.407 03:04:13 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:39.407 03:04:13 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:39.407 03:04:13 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:39.407 03:04:13 [xtrace condensed: per-node setup with _nr_hugepages=512 and _no_nodes=2; no user nodes are given, so 256 pages are assigned to each of the two nodes_test entries]
00:03:39.407 03:04:13 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:03:39.407 03:04:13 -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
00:03:39.407 03:04:13 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
00:03:39.407 03:04:13 -- setup/hugepages.sh@49 -- # local size=2097152
00:03:39.407 03:04:13 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:39.407 03:04:13 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:39.407 03:04:13 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:39.407 03:04:13 [xtrace condensed: per-node setup runs again with _nr_hugepages=1024; nodes_hp has one entry, so nodes_test[0]=512 is taken from it and the helper returns 0]
00:03:39.407 03:04:13 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
00:03:39.407 03:04:13 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:39.407 03:04:13 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:39.407 03:04:13 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:39.407 03:04:13 [xtrace condensed: the @181-@183 loop repeats for node 1, then get_test_nr_hugepages_per_node runs once more and sets nodes_test[0]=512 and nodes_test[1]=1024 from nodes_hp]
00:03:39.407 03:04:13 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
00:03:39.407 03:04:13 -- setup/hugepages.sh@187 -- # setup output
00:03:39.407 03:04:13 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:39.407 03:04:13 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:40.349 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:40.349 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:40.349 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:40.349 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:40.349 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:40.613 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:40.613 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:40.613 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:40.613 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:40.613 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:40.613 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:40.613 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:40.613 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:40.613 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:40.613 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:40.613 0000:80:04.1 (8086 0e21): Already using the
vfio-pci driver
00:03:40.613 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:40.613 03:04:15 -- setup/hugepages.sh@188 -- # nr_hugepages=1536
00:03:40.613 03:04:15 -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:03:40.613 03:04:15 -- setup/hugepages.sh@89 -- # local node
00:03:40.613 03:04:15 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:40.613 03:04:15 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:40.613 03:04:15 -- setup/hugepages.sh@92 -- # local surp
00:03:40.613 03:04:15 -- setup/hugepages.sh@93 -- # local resv
00:03:40.613 03:04:15 -- setup/hugepages.sh@94 -- # local anon
00:03:40.613 03:04:15 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:40.613 03:04:15 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:40.613 03:04:15 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:40.613 03:04:15 -- setup/common.sh@18 -- # local node=
00:03:40.613 03:04:15 -- setup/common.sh@19 -- # local var val
00:03:40.613 03:04:15 -- setup/common.sh@20 -- # local mem_f mem
00:03:40.613 03:04:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:40.613 03:04:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:40.613 03:04:15 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:40.613 03:04:15 -- setup/common.sh@28 -- # mapfile -t mem
00:03:40.613 03:04:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:40.613 03:04:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 35833936 kB' 'MemAvailable: 41005224 kB' 'Buffers: 3728 kB' 'Cached: 18762020 kB' 'SwapCached: 0 kB' 'Active: 14678508 kB' 'Inactive: 4649000 kB' 'Active(anon): 14064388 kB' 'Inactive(anon): 0 kB' 'Active(file): 614120 kB' 'Inactive(file): 4649000 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564980 kB' 'Mapped: 228056 kB' 'Shmem: 13502628 kB' 'KReclaimable: 552604 kB' 'Slab: 952360 kB' 'SReclaimable: 552604 kB' 'SUnreclaim: 399756 kB' 'KernelStack: 12800 kB' 'PageTables: 8632 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 15235344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197180 kB' 'VmallocChunk: 0 kB' 'Percpu: 44928 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2207324 kB' 'DirectMap2M: 28121088 kB' 'DirectMap1G: 38797312 kB'
00:03:40.614 03:04:15 [xtrace condensed: the get_meminfo field loop skips every field from MemTotal through HardwareCorrupted while scanning for AnonHugePages]
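[editor's sketch] The `setup/common.sh@29` step above, `mem=("${mem[@]#Node +([0-9]) }")`, exists because per-node meminfo files under /sys prefix every line with `Node <N> `; stripping that prefix lets the same field loop work for both the global and per-node cases. A small illustration of that prefix stripping against assumed sample data (values and spacing are illustrative, not from this run):

```shell
#!/usr/bin/env bash
shopt -s extglob  # enables the +([0-9]) extended pattern used below

# Per-node meminfo lines carry a "Node <N> " prefix; strip it the same
# way the traced script does, then look fields up as usual.
mapfile -t mem <<'EOF'
Node 0 HugePages_Total:   512
Node 0 HugePages_Free:    512
Node 0 HugePages_Surp:      0
EOF
mem=("${mem[@]#Node +([0-9]) }")

# After stripping, each line parses like a plain /proc/meminfo entry.
for line in "${mem[@]}"; do
    IFS=': ' read -r var val _ <<<"$line"
    [[ $var == HugePages_Surp ]] && echo "$val"
done
```

This is only a sketch of the prefix-stripping idiom; the real helper also switches `mem_f` to the per-node path when a node argument is supplied, as the `@23` test in the trace shows.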
00:03:40.614 03:04:15 -- setup/common.sh@31 -- # read -r var val _
00:03:40.614 03:04:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:40.614 03:04:15 -- setup/common.sh@33 -- # echo 0
00:03:40.614 03:04:15 -- setup/common.sh@33 -- # return 0
00:03:40.614 03:04:15 -- setup/hugepages.sh@97 -- # anon=0
00:03:40.614 03:04:15 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:40.614 03:04:15 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:40.614 03:04:15 -- setup/common.sh@18 -- # local node=
00:03:40.614 03:04:15 -- setup/common.sh@19 -- # local var val
00:03:40.614 03:04:15 -- setup/common.sh@20 -- # local mem_f mem
00:03:40.614 03:04:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:40.614 03:04:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:40.614 03:04:15 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:40.614 03:04:15 -- setup/common.sh@28 -- # mapfile -t mem
00:03:40.614 03:04:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:40.614 03:04:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 35835812 kB' 'MemAvailable: 41007100 kB' 'Buffers: 3728 kB' 'Cached: 18762020 kB' 'SwapCached: 0 kB' 'Active: 14679272 kB' 'Inactive: 4649000 kB' 'Active(anon): 14065152 kB' 'Inactive(anon): 0 kB' 'Active(file): 614120 kB' 'Inactive(file): 4649000 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 565768 kB' 'Mapped: 228036 kB' 'Shmem: 13502628 kB' 'KReclaimable: 552604 kB' 'Slab: 952372 kB' 'SReclaimable: 552604 kB' 'SUnreclaim: 399768 kB' 'KernelStack: 12864 kB' 'PageTables: 8816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 15235356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197132 kB' 'VmallocChunk: 0 kB' 'Percpu: 44928 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2207324 kB' 'DirectMap2M: 28121088 kB' 'DirectMap1G: 38797312 kB'
00:03:40.615 03:04:15 [xtrace condensed: the get_meminfo field loop skips MemTotal through AnonHugePages while scanning for HugePages_Surp; the trace is truncated below mid-entry]
00:03:40.615 03:04:15 -- setup/common.sh@31
-- # IFS=': ' 00:03:40.615 03:04:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.615 03:04:15 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.615 03:04:15 -- setup/common.sh@32 -- # continue 00:03:40.615 03:04:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.615 03:04:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.615 03:04:15 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.615 03:04:15 -- setup/common.sh@32 -- # continue 00:03:40.615 03:04:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.615 03:04:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.615 03:04:15 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.615 03:04:15 -- setup/common.sh@32 -- # continue 00:03:40.615 03:04:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.615 03:04:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.615 03:04:15 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.615 03:04:15 -- setup/common.sh@32 -- # continue 00:03:40.615 03:04:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.615 03:04:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.615 03:04:15 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.615 03:04:15 -- setup/common.sh@32 -- # continue 00:03:40.615 03:04:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.615 03:04:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.615 03:04:15 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.615 03:04:15 -- setup/common.sh@32 -- # continue 00:03:40.615 03:04:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.615 03:04:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.615 03:04:15 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.615 03:04:15 -- setup/common.sh@32 -- # continue 00:03:40.615 03:04:15 -- setup/common.sh@31 -- # IFS=': ' 
00:03:40.615 03:04:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.615 03:04:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.615 03:04:15 -- setup/common.sh@32 -- # continue 00:03:40.615 03:04:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.615 03:04:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.615 03:04:15 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.615 03:04:15 -- setup/common.sh@32 -- # continue 00:03:40.615 03:04:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.615 03:04:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.615 03:04:15 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.615 03:04:15 -- setup/common.sh@32 -- # continue 00:03:40.615 03:04:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.615 03:04:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.615 03:04:15 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.615 03:04:15 -- setup/common.sh@33 -- # echo 0 00:03:40.615 03:04:15 -- setup/common.sh@33 -- # return 0 00:03:40.615 03:04:15 -- setup/hugepages.sh@99 -- # surp=0 00:03:40.615 03:04:15 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:40.615 03:04:15 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:40.615 03:04:15 -- setup/common.sh@18 -- # local node= 00:03:40.615 03:04:15 -- setup/common.sh@19 -- # local var val 00:03:40.615 03:04:15 -- setup/common.sh@20 -- # local mem_f mem 00:03:40.615 03:04:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.615 03:04:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:40.615 03:04:15 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:40.615 03:04:15 -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.615 03:04:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.615 03:04:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.615 03:04:15 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:40.616 03:04:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 35835056 kB' 'MemAvailable: 41006344 kB' 'Buffers: 3728 kB' 'Cached: 18762036 kB' 'SwapCached: 0 kB' 'Active: 14679092 kB' 'Inactive: 4649000 kB' 'Active(anon): 14064972 kB' 'Inactive(anon): 0 kB' 'Active(file): 614120 kB' 'Inactive(file): 4649000 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 565660 kB' 'Mapped: 228036 kB' 'Shmem: 13502644 kB' 'KReclaimable: 552604 kB' 'Slab: 952396 kB' 'SReclaimable: 552604 kB' 'SUnreclaim: 399792 kB' 'KernelStack: 12864 kB' 'PageTables: 8844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 15235368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197116 kB' 'VmallocChunk: 0 kB' 'Percpu: 44928 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2207324 kB' 'DirectMap2M: 28121088 kB' 'DirectMap1G: 38797312 kB' 00:03:40.616 [... repeated per-key scan iterations for HugePages_Rsvd ("continue" for each non-matching /proc/meminfo key, MemTotal through HugePages_Free) elided ...] 03:04:15 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.879 03:04:15 -- setup/common.sh@33 -- # echo 0 00:03:40.879 03:04:15 -- setup/common.sh@33 -- # return 0 00:03:40.879 03:04:15 -- setup/hugepages.sh@100 -- # resv=0 00:03:40.879 03:04:15 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:40.879 nr_hugepages=1536 00:03:40.879 03:04:15 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:40.879 resv_hugepages=0 00:03:40.879 03:04:15 -- setup/hugepages.sh@104 -- # echo 
surplus_hugepages=0 00:03:40.879 surplus_hugepages=0 00:03:40.879 03:04:15 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:40.879 anon_hugepages=0 00:03:40.879 03:04:15 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:40.879 03:04:15 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:40.879 03:04:15 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:40.879 03:04:15 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:40.879 03:04:15 -- setup/common.sh@18 -- # local node= 00:03:40.879 03:04:15 -- setup/common.sh@19 -- # local var val 00:03:40.879 03:04:15 -- setup/common.sh@20 -- # local mem_f mem 00:03:40.879 03:04:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.879 03:04:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:40.879 03:04:15 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:40.879 03:04:15 -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.879 03:04:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.879 03:04:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.879 03:04:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.879 03:04:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 35834076 kB' 'MemAvailable: 41005364 kB' 'Buffers: 3728 kB' 'Cached: 18762052 kB' 'SwapCached: 0 kB' 'Active: 14679224 kB' 'Inactive: 4649000 kB' 'Active(anon): 14065104 kB' 'Inactive(anon): 0 kB' 'Active(file): 614120 kB' 'Inactive(file): 4649000 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 565664 kB' 'Mapped: 228036 kB' 'Shmem: 13502660 kB' 'KReclaimable: 552604 kB' 'Slab: 952396 kB' 'SReclaimable: 552604 kB' 'SUnreclaim: 399792 kB' 'KernelStack: 12864 kB' 'PageTables: 8844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 
15235384 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197132 kB' 'VmallocChunk: 0 kB' 'Percpu: 44928 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2207324 kB' 'DirectMap2M: 28121088 kB' 'DirectMap1G: 38797312 kB' 00:03:40.880 [... repeated per-key scan iterations for HugePages_Total ("continue" for each non-matching /proc/meminfo key, MemTotal through Unaccepted) elided; trace continues mid-scan ...]
setup/common.sh@32 -- # continue 00:03:40.881 03:04:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.881 03:04:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.881 03:04:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.881 03:04:15 -- setup/common.sh@33 -- # echo 1536 00:03:40.881 03:04:15 -- setup/common.sh@33 -- # return 0 00:03:40.881 03:04:15 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:40.881 03:04:15 -- setup/hugepages.sh@112 -- # get_nodes 00:03:40.881 03:04:15 -- setup/hugepages.sh@27 -- # local node 00:03:40.881 03:04:15 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:40.881 03:04:15 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:40.881 03:04:15 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:40.881 03:04:15 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:40.881 03:04:15 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:40.881 03:04:15 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:40.881 03:04:15 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:40.881 03:04:15 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:40.881 03:04:15 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:40.881 03:04:15 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:40.881 03:04:15 -- setup/common.sh@18 -- # local node=0 00:03:40.881 03:04:15 -- setup/common.sh@19 -- # local var val 00:03:40.881 03:04:15 -- setup/common.sh@20 -- # local mem_f mem 00:03:40.881 03:04:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.881 03:04:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:40.881 03:04:15 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:40.881 03:04:15 -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.881 03:04:15 -- setup/common.sh@29 -- # 
mem=("${mem[@]#Node +([0-9]) }") 00:03:40.881 03:04:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.881 03:04:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.881 03:04:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 22527280 kB' 'MemUsed: 10302604 kB' 'SwapCached: 0 kB' 'Active: 7622308 kB' 'Inactive: 271804 kB' 'Active(anon): 7221904 kB' 'Inactive(anon): 0 kB' 'Active(file): 400404 kB' 'Inactive(file): 271804 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7698132 kB' 'Mapped: 53060 kB' 'AnonPages: 199124 kB' 'Shmem: 7025924 kB' 'KernelStack: 7160 kB' 'PageTables: 4408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 288976 kB' 'Slab: 491648 kB' 'SReclaimable: 288976 kB' 'SUnreclaim: 202672 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:40.881 03:04:15 -- setup/common.sh@31-32 -- # [xtrace condensed: IFS=': '; read -r var val _; continue -- loop skipped node0 meminfo keys (MemTotal .. HugePages_Free) until HugePages_Surp matched] 00:03:40.882 03:04:15 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.882 03:04:15 -- setup/common.sh@33 -- # echo 0 00:03:40.882 03:04:15 -- setup/common.sh@33 -- # return 0 00:03:40.882 03:04:15 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:40.882 03:04:15 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:40.882 03:04:15 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:40.882 03:04:15 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:40.882 03:04:15 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:40.882 03:04:15 -- setup/common.sh@18 -- # local node=1 00:03:40.882 03:04:15 -- setup/common.sh@19 -- # local var val 00:03:40.882 03:04:15 -- setup/common.sh@20 -- # local mem_f mem
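For readability, here is a self-contained sketch of the `get_meminfo` lookup that the xtrace records above keep replaying: mapfile the meminfo file (the per-node file under `/sys/devices/system/node/` when a node is given), strip the leading `Node <n> ` prefix those per-node files carry, then scan `key: value` pairs until the requested key matches. The optional third argument (an explicit file path) is a test hook added here for illustration only; it is not part of the original `setup/common.sh` helper.

```shell
#!/usr/bin/env bash
# Sketch of the meminfo lookup pattern traced in the log above.
get_meminfo() {
    local get=$1 node=${2:-} mem_f=${3:-/proc/meminfo}   # 3rd arg: test hook, not in the original
    # Per-node lookups read that node's own meminfo file when it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    shopt -s extglob
    mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix every line with "Node N "
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"   # split "Key:   value kB" into key/value
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}
```

The real helper iterates with `continue` for every non-matching key, which is why a single lookup produces the long runs of trace lines condensed above.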
00:03:40.882 03:04:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.882 03:04:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:40.882 03:04:15 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:40.882 03:04:15 -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.882 03:04:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.882 03:04:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.882 03:04:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.882 03:04:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 13306796 kB' 'MemUsed: 14405028 kB' 'SwapCached: 0 kB' 'Active: 7056892 kB' 'Inactive: 4377196 kB' 'Active(anon): 6843176 kB' 'Inactive(anon): 0 kB' 'Active(file): 213716 kB' 'Inactive(file): 4377196 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11067648 kB' 'Mapped: 174976 kB' 'AnonPages: 366520 kB' 'Shmem: 6476736 kB' 'KernelStack: 5704 kB' 'PageTables: 4436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 263628 kB' 'Slab: 460748 kB' 'SReclaimable: 263628 kB' 'SUnreclaim: 197120 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:40.882 03:04:15 -- setup/common.sh@31-32 -- # [xtrace condensed: IFS=': '; read -r var val _; continue -- loop skipped node1 meminfo keys (MemTotal .. HugePages_Free) until HugePages_Surp matched] 00:03:40.883 03:04:15 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.883 03:04:15 -- setup/common.sh@33 -- # echo 0 00:03:40.883 03:04:15 -- setup/common.sh@33 -- # return 0 00:03:40.883 03:04:15 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:40.883 03:04:15 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:40.883 03:04:15 -- setup/hugepages.sh@127 -- #
sorted_t[nodes_test[node]]=1 00:03:40.883 03:04:15 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:40.883 03:04:15 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:40.883 node0=512 expecting 512 00:03:40.883 03:04:15 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:40.883 03:04:15 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:40.883 03:04:15 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:40.883 03:04:15 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:40.883 node1=1024 expecting 1024 00:03:40.883 03:04:15 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:40.883 00:03:40.883 real 0m1.362s 00:03:40.883 user 0m0.550s 00:03:40.883 sys 0m0.771s 00:03:40.883 03:04:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:40.883 03:04:15 -- common/autotest_common.sh@10 -- # set +x 00:03:40.883 ************************************ 00:03:40.883 END TEST custom_alloc 00:03:40.883 ************************************ 00:03:40.883 03:04:15 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:40.883 03:04:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:40.883 03:04:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:40.883 03:04:15 -- common/autotest_common.sh@10 -- # set +x 00:03:40.883 ************************************ 00:03:40.883 START TEST no_shrink_alloc 00:03:40.883 ************************************ 00:03:40.883 03:04:15 -- common/autotest_common.sh@1111 -- # no_shrink_alloc 00:03:40.883 03:04:15 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:40.883 03:04:15 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:40.883 03:04:15 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:40.883 03:04:15 -- setup/hugepages.sh@51 -- # shift 00:03:40.883 03:04:15 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:40.883 03:04:15 -- setup/hugepages.sh@52 -- # local 
node_ids 00:03:40.883 03:04:15 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:40.883 03:04:15 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:40.883 03:04:15 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:40.883 03:04:15 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:40.883 03:04:15 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:40.883 03:04:15 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:40.883 03:04:15 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:40.883 03:04:15 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:40.883 03:04:15 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:40.883 03:04:15 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:40.883 03:04:15 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:40.883 03:04:15 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:40.883 03:04:15 -- setup/hugepages.sh@73 -- # return 0 00:03:40.883 03:04:15 -- setup/hugepages.sh@198 -- # setup output 00:03:40.883 03:04:15 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:40.883 03:04:15 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:41.824 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:41.824 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:41.824 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:41.824 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:41.824 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:41.824 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:41.824 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:41.824 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:41.824 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:41.824 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:41.824 0000:80:04.6 (8086 0e26): Already using 
the vfio-pci driver 00:03:41.824 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:41.824 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:41.824 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:41.824 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:41.824 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:41.824 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:42.088 03:04:16 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:42.088 03:04:16 -- setup/hugepages.sh@89 -- # local node 00:03:42.088 03:04:16 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:42.088 03:04:16 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:42.088 03:04:16 -- setup/hugepages.sh@92 -- # local surp 00:03:42.088 03:04:16 -- setup/hugepages.sh@93 -- # local resv 00:03:42.088 03:04:16 -- setup/hugepages.sh@94 -- # local anon 00:03:42.088 03:04:16 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:42.088 03:04:16 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:42.088 03:04:16 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:42.088 03:04:16 -- setup/common.sh@18 -- # local node= 00:03:42.088 03:04:16 -- setup/common.sh@19 -- # local var val 00:03:42.088 03:04:16 -- setup/common.sh@20 -- # local mem_f mem 00:03:42.088 03:04:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.088 03:04:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:42.088 03:04:16 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:42.088 03:04:16 -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.088 03:04:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.088 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.088 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.088 03:04:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 36784452 kB' 'MemAvailable: 41955740 kB' 
'Buffers: 3728 kB' 'Cached: 18762120 kB' 'SwapCached: 0 kB' 'Active: 14680028 kB' 'Inactive: 4649000 kB' 'Active(anon): 14065908 kB' 'Inactive(anon): 0 kB' 'Active(file): 614120 kB' 'Inactive(file): 4649000 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 566500 kB' 'Mapped: 228524 kB' 'Shmem: 13502728 kB' 'KReclaimable: 552604 kB' 'Slab: 952304 kB' 'SReclaimable: 552604 kB' 'SUnreclaim: 399700 kB' 'KernelStack: 12848 kB' 'PageTables: 8748 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 15237540 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197148 kB' 'VmallocChunk: 0 kB' 'Percpu: 44928 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2207324 kB' 'DirectMap2M: 28121088 kB' 'DirectMap1G: 38797312 kB' 00:03:42.088 03:04:16 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.088 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.088 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.088 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.088 03:04:16 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.088 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.088 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.088 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.088 03:04:16 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.088 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.088 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.088 03:04:16 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:42.088 03:04:16 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.088 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.088 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.088 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.088 03:04:16 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.088 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.088 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.088 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.088 03:04:16 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.088 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.088 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.088 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.088 03:04:16 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.088 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.088 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.088 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.088 03:04:16 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.088 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.088 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.088 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.088 03:04:16 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.088 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.088 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.089 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.089 03:04:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.089 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.089 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.089 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 
00:03:42.089 03:04:16 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.089 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.089 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.089 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.089 03:04:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.089 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.089 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.089 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.089 03:04:16 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.089 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.089 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.089 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.089 03:04:16 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.089 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.089 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.089 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.089 03:04:16 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.089 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.089 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.089 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.089 03:04:16 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.089 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.089 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.089 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.089 03:04:16 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.089 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.089 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.089 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.089 03:04:16 -- setup/common.sh@32 -- # 
[[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.089 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.089 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.089 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.089 03:04:16 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.089 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.089 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.089 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.089 03:04:16 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.089 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.089 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.089 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.089 03:04:16 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.089 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.089 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.089 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.089 03:04:16 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.089 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.089 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.089 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.089 03:04:16 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.089 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.089 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.089 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.089 03:04:16 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.089 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.089 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.089 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.089 03:04:16 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.089 03:04:16 
-- setup/common.sh@32 -- # continue 00:03:42.089 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.089 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.089 03:04:16 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.089 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.089 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.089 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.089 03:04:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.089 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.089 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.089 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.089 03:04:16 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.089 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.089 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.089 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.089 03:04:16 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.089 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.089 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.089 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.089 03:04:16 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.089 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.089 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.089 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.089 03:04:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.089 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.089 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.089 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.089 03:04:16 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.089 03:04:16 -- setup/common.sh@32 -- # continue 
00:03:42.089 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.089 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.089 03:04:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.089 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.089 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.089 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.089 03:04:16 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.089 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.089 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.089 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.089 03:04:16 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.089 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.089 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.089 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.089 03:04:16 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.089 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.089 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.089 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.089 03:04:16 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.089 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.089 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.089 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.089 03:04:16 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.089 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.089 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.089 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.089 03:04:16 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.089 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.089 03:04:16 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:42.089 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.089 03:04:16 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.089 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.089 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.089 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.089 03:04:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.089 03:04:16 -- setup/common.sh@33 -- # echo 0 00:03:42.089 03:04:16 -- setup/common.sh@33 -- # return 0 00:03:42.089 03:04:16 -- setup/hugepages.sh@97 -- # anon=0 00:03:42.089 03:04:16 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:42.089 03:04:16 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:42.089 03:04:16 -- setup/common.sh@18 -- # local node= 00:03:42.089 03:04:16 -- setup/common.sh@19 -- # local var val 00:03:42.089 03:04:16 -- setup/common.sh@20 -- # local mem_f mem 00:03:42.089 03:04:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.089 03:04:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:42.089 03:04:16 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:42.089 03:04:16 -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.089 03:04:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.089 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.089 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.089 03:04:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 36780824 kB' 'MemAvailable: 41952112 kB' 'Buffers: 3728 kB' 'Cached: 18762120 kB' 'SwapCached: 0 kB' 'Active: 14683028 kB' 'Inactive: 4649000 kB' 'Active(anon): 14068908 kB' 'Inactive(anon): 0 kB' 'Active(file): 614120 kB' 'Inactive(file): 4649000 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 
'AnonPages: 569500 kB' 'Mapped: 228492 kB' 'Shmem: 13502728 kB' 'KReclaimable: 552604 kB' 'Slab: 952304 kB' 'SReclaimable: 552604 kB' 'SUnreclaim: 399700 kB' 'KernelStack: 12896 kB' 'PageTables: 8872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 15240192 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197100 kB' 'VmallocChunk: 0 kB' 'Percpu: 44928 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2207324 kB' 'DirectMap2M: 28121088 kB' 'DirectMap1G: 38797312 kB' 00:03:42.090 03:04:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.090 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.090 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.090 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.090 03:04:16 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.090 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.090 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.090 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.090 03:04:16 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.090 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.090 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.090 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.090 03:04:16 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.090 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.090 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.090 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.090 03:04:16 -- setup/common.sh@32 
-- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.090 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.090 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.090 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.090 03:04:16 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.090 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.090 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.090 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.090 03:04:16 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.090 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.090 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.090 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.090 03:04:16 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.090 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.090 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.090 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.090 03:04:16 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.090 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.090 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.090 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.090 03:04:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.090 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.090 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.090 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.090 03:04:16 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.090 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.090 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.090 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.090 03:04:16 -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.090 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.090 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.090 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.090 03:04:16 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.090 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.090 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.090 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.090 03:04:16 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.090 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.090 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.090 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.090 03:04:16 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.090 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.090 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.090 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.090 03:04:16 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.090 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.090 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.090 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.090 03:04:16 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.090 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.090 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.090 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.090 03:04:16 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.090 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.090 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.090 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.090 03:04:16 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.090 
03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.090 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.090 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.090 03:04:16 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.090 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.090 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.090 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.090 03:04:16 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.090 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.090 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.090 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.090 03:04:16 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.090 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.090 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.090 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.090 03:04:16 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.090 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.090 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.090 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.090 03:04:16 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.090 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.090 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.090 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.090 03:04:16 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.090 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.090 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.090 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.090 03:04:16 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.090 03:04:16 -- setup/common.sh@32 -- # continue 
00:03:42.090 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.090 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.090 03:04:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.090 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.090 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.090 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.090 03:04:16 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.090 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.090 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.090 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.090 03:04:16 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.090 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.090 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.090 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.090 03:04:16 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.090 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.090 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.090 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.090 03:04:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.090 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.090 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.090 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.090 03:04:16 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.090 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.090 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.090 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.090 03:04:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.090 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.090 03:04:16 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:42.090 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.090 03:04:16 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.090 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.090 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.090 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.090 03:04:16 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.090 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.090 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.090 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.090 03:04:16 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.090 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.090 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.090 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.090 03:04:16 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.090 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.090 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.090 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.090 03:04:16 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.090 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.090 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.090 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.090 03:04:16 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.090 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.090 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.090 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.090 03:04:16 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.090 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.090 03:04:16 -- setup/common.sh@31 -- 
# IFS=': ' 00:03:42.090 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.090 03:04:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.090 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.090 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.090 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.090 03:04:16 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.091 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.091 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.091 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.091 03:04:16 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.091 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.091 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.091 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.091 03:04:16 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.091 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.091 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.091 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.091 03:04:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.091 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.091 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.091 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.091 03:04:16 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.091 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.091 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.091 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.091 03:04:16 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.091 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.091 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 
00:03:42.091 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.091 03:04:16 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.091 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.091 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.091 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.091 03:04:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.091 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.091 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.091 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.091 03:04:16 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.091 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.091 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.091 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.091 03:04:16 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.091 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.091 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.091 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.091 03:04:16 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.091 03:04:16 -- setup/common.sh@33 -- # echo 0 00:03:42.091 03:04:16 -- setup/common.sh@33 -- # return 0 00:03:42.091 03:04:16 -- setup/hugepages.sh@99 -- # surp=0 00:03:42.091 03:04:16 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:42.091 03:04:16 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:42.091 03:04:16 -- setup/common.sh@18 -- # local node= 00:03:42.091 03:04:16 -- setup/common.sh@19 -- # local var val 00:03:42.091 03:04:16 -- setup/common.sh@20 -- # local mem_f mem 00:03:42.091 03:04:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.091 03:04:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:42.091 
03:04:16 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:42.091 03:04:16 -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.091 03:04:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.091 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.091 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.091 03:04:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 36776792 kB' 'MemAvailable: 41948080 kB' 'Buffers: 3728 kB' 'Cached: 18762132 kB' 'SwapCached: 0 kB' 'Active: 14685128 kB' 'Inactive: 4649000 kB' 'Active(anon): 14071008 kB' 'Inactive(anon): 0 kB' 'Active(file): 614120 kB' 'Inactive(file): 4649000 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 571632 kB' 'Mapped: 228828 kB' 'Shmem: 13502740 kB' 'KReclaimable: 552604 kB' 'Slab: 952304 kB' 'SReclaimable: 552604 kB' 'SUnreclaim: 399700 kB' 'KernelStack: 12896 kB' 'PageTables: 8900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 15242196 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197104 kB' 'VmallocChunk: 0 kB' 'Percpu: 44928 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2207324 kB' 'DirectMap2M: 28121088 kB' 'DirectMap1G: 38797312 kB' 00:03:42.091 03:04:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.091 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.091 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.091 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.091 03:04:16 -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.091 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.091 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.091 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.091 03:04:16 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.091 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.091 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.091 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.091 03:04:16 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.091 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.091 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.091 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.091 03:04:16 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.091 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.091 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.091 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.091 03:04:16 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.091 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.091 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.091 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.091 03:04:16 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.091 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.091 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.091 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.091 03:04:16 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.091 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.091 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.091 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.091 03:04:16 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:42.091 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.091 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.091 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.091 03:04:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.091 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.091 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.091 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.091 03:04:16 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.091 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.091 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.091 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.091 03:04:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.091 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.091 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.091 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.091 03:04:16 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.091 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.091 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.091 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.091 03:04:16 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.091 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.091 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.091 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.091 03:04:16 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.091 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.091 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.091 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.091 03:04:16 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.091 03:04:16 -- 
setup/common.sh@32 -- # continue 00:03:42.091 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.091 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.091 03:04:16 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.091 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.091 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.091 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.091 03:04:16 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.091 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.091 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.091 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.091 03:04:16 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.091 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.091 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.091 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.091 03:04:16 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.091 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.091 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.091 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.091 03:04:16 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.091 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.091 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.091 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.091 03:04:16 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.091 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.091 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.091 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.091 03:04:16 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.091 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.091 03:04:16 
-- setup/common.sh@31 -- # IFS=': ' 00:03:42.091 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.092 03:04:16 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.092 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.092 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.092 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.092 03:04:16 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.092 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.092 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.092 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.092 03:04:16 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.092 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.092 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.092 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.092 03:04:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.092 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.092 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.092 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.092 03:04:16 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.092 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.092 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.092 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.092 03:04:16 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.092 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.092 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.092 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.092 03:04:16 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.092 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.092 03:04:16 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:42.092 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.092 03:04:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.092 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.092 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.092 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.092 03:04:16 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.092 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.092 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.092 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.092 03:04:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.092 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.092 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.092 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.092 03:04:16 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.092 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.092 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.092 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.092 03:04:16 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.092 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.092 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.092 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.092 03:04:16 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.092 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.092 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.092 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.092 03:04:16 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.092 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.092 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.092 
03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.092 03:04:16 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.092 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.092 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.092 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.092 03:04:16 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.092 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.092 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.092 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.092 03:04:16 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.092 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.092 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.092 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.092 03:04:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.092 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.092 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.092 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.092 03:04:16 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.092 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.092 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.092 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.092 03:04:16 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.092 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.092 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.092 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.092 03:04:16 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.092 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.092 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.092 03:04:16 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:42.092 03:04:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.092 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.092 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.092 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.092 03:04:16 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.092 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.092 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.092 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.092 03:04:16 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.092 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.092 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.092 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.092 03:04:16 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.092 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.092 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.092 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.092 03:04:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.092 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.092 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.092 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.092 03:04:16 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.092 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.092 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.092 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.092 03:04:16 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.092 03:04:16 -- setup/common.sh@33 -- # echo 0 00:03:42.092 03:04:16 -- setup/common.sh@33 -- # return 0 00:03:42.092 03:04:16 -- setup/hugepages.sh@100 
-- # resv=0 00:03:42.092 03:04:16 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:42.092 nr_hugepages=1024 00:03:42.092 03:04:16 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:42.092 resv_hugepages=0 00:03:42.092 03:04:16 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:42.092 surplus_hugepages=0 00:03:42.092 03:04:16 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:42.092 anon_hugepages=0 00:03:42.092 03:04:16 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:42.092 03:04:16 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:42.092 03:04:16 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:42.092 03:04:16 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:42.092 03:04:16 -- setup/common.sh@18 -- # local node= 00:03:42.092 03:04:16 -- setup/common.sh@19 -- # local var val 00:03:42.092 03:04:16 -- setup/common.sh@20 -- # local mem_f mem 00:03:42.092 03:04:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.092 03:04:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:42.092 03:04:16 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:42.092 03:04:16 -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.092 03:04:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.092 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.092 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.092 03:04:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 36774776 kB' 'MemAvailable: 41946064 kB' 'Buffers: 3728 kB' 'Cached: 18762148 kB' 'SwapCached: 0 kB' 'Active: 14681104 kB' 'Inactive: 4649000 kB' 'Active(anon): 14066984 kB' 'Inactive(anon): 0 kB' 'Active(file): 614120 kB' 'Inactive(file): 4649000 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 567572 kB' 'Mapped: 228900 kB' 
'Shmem: 13502756 kB' 'KReclaimable: 552604 kB' 'Slab: 952304 kB' 'SReclaimable: 552604 kB' 'SUnreclaim: 399700 kB' 'KernelStack: 12880 kB' 'PageTables: 8876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 15238504 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197116 kB' 'VmallocChunk: 0 kB' 'Percpu: 44928 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2207324 kB' 'DirectMap2M: 28121088 kB' 'DirectMap1G: 38797312 kB' 00:03:42.092 03:04:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.092 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.092 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.092 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.092 03:04:16 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.092 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.092 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.092 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.092 03:04:16 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.092 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.092 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.092 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.093 03:04:16 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.093 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.093 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.093 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.093 03:04:16 -- setup/common.sh@32 -- # [[ Cached == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.093 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.093 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.093 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.093 03:04:16 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.093 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.093 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.093 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.093 03:04:16 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.093 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.093 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.093 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.093 03:04:16 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.093 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.093 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.093 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.093 03:04:16 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.093 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.093 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.093 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.093 03:04:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.093 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.093 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.093 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.093 03:04:16 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.093 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.093 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.093 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.093 03:04:16 -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.093 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.093 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.093 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.093 03:04:16 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.093 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.093 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.093 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.093 03:04:16 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.093 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.093 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.093 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.093 03:04:16 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.093 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.093 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.093 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.093 03:04:16 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.093 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.093 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.093 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.093 03:04:16 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.093 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.093 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.093 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.093 03:04:16 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.093 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.093 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.093 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.093 03:04:16 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:42.093 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.093 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.093 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.093 03:04:16 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.093 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.093 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.093 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.093 03:04:16 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.093 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.093 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.093 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.093 03:04:16 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.093 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.093 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.093 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.093 03:04:16 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.093 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.093 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.093 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.093 03:04:16 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.093 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.093 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.093 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.093 03:04:16 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.093 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.093 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.093 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.093 03:04:16 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.093 03:04:16 -- 
setup/common.sh@32 -- # continue 00:03:42.093 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.093 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.093 03:04:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.093 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.093 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.093 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.093 03:04:16 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.093 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.093 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.093 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.093 03:04:16 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.093 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.093 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.093 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.093 03:04:16 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.093 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.093 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.093 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.093 03:04:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.093 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.093 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.093 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.093 03:04:16 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.093 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.093 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.093 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.093 03:04:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.093 03:04:16 -- 
setup/common.sh@32 -- # continue 00:03:42.093 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.093 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.093 03:04:16 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.093 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.093 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.093 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.093 03:04:16 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.093 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.093 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.093 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.093 03:04:16 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.093 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.093 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.094 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.094 03:04:16 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.094 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.094 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.094 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.094 03:04:16 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.094 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.094 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.094 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.094 03:04:16 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.094 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.094 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.094 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.094 03:04:16 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.094 03:04:16 -- 
setup/common.sh@32 -- # continue 00:03:42.094 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.094 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.094 03:04:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.094 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.094 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.094 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.094 03:04:16 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.094 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.094 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.094 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.094 03:04:16 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.094 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.094 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.094 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.094 03:04:16 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.094 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.094 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.094 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.094 03:04:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.094 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.094 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.094 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.094 03:04:16 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.094 03:04:16 -- setup/common.sh@32 -- # continue 00:03:42.094 03:04:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.094 03:04:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.094 03:04:16 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.094 03:04:16 -- 
setup/common.sh@32 -- # continue
00:03:42.094 03:04:16 -- setup/common.sh@31 -- # IFS=': '
00:03:42.094 03:04:16 -- setup/common.sh@31 -- # read -r var val _
00:03:42.094 03:04:16 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:42.094 03:04:16 -- setup/common.sh@32 -- # continue
00:03:42.094 03:04:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:42.094 03:04:16 -- setup/common.sh@33 -- # echo 1024
00:03:42.094 03:04:16 -- setup/common.sh@33 -- # return 0
00:03:42.094 03:04:16 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:42.094 03:04:16 -- setup/hugepages.sh@112 -- # get_nodes
00:03:42.094 03:04:16 -- setup/hugepages.sh@27 -- # local node
00:03:42.094 03:04:16 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:42.094 03:04:16 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:42.094 03:04:16 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:42.094 03:04:16 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:42.094 03:04:16 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:42.094 03:04:16 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:42.094 03:04:16 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:42.094 03:04:16 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:42.094 03:04:16 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:42.094 03:04:16 -- setup/common.sh@18 -- # local node=0
00:03:42.094 03:04:16 -- setup/common.sh@19 -- # local var val
00:03:42.094 03:04:16 -- setup/common.sh@20 -- # local mem_f mem
00:03:42.354 03:04:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:42.354 03:04:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:42.354 03:04:16 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:42.354 03:04:16 -- setup/common.sh@28 -- # mapfile -t mem
00:03:42.354 03:04:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:42.354 03:04:16 -- setup/common.sh@31 -- # IFS=': '
00:03:42.354 03:04:16 -- setup/common.sh@31 -- # read -r var val _
00:03:42.355 03:04:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 21492336 kB' 'MemUsed: 11337548 kB' 'SwapCached: 0 kB' 'Active: 7621528 kB' 'Inactive: 271804 kB' 'Active(anon): 7221124 kB' 'Inactive(anon): 0 kB' 'Active(file): 400404 kB' 'Inactive(file): 271804 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7698232 kB' 'Mapped: 53056 kB' 'AnonPages: 198268 kB' 'Shmem: 7026024 kB' 'KernelStack: 7128 kB' 'PageTables: 4304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 288976 kB' 'Slab: 491588 kB' 'SReclaimable: 288976 kB' 'SUnreclaim: 202612 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:42.355 03:04:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:42.355 03:04:16 -- setup/common.sh@32 -- # continue
[identical setup/common.sh@31/@32 read-and-continue trace repeated for each remaining meminfo key through HugePages_Free]
00:03:42.356 03:04:16 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:42.356 03:04:16 -- setup/common.sh@33 -- # echo 0
00:03:42.356 03:04:16 -- setup/common.sh@33 -- # return 0
00:03:42.356 03:04:16 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:42.356 03:04:16 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:42.356 03:04:16 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:42.356 03:04:16 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:42.356 03:04:16 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
node0=1024 expecting 1024
00:03:42.356 03:04:16 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:42.356 03:04:16 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:03:42.356 03:04:16 -- setup/hugepages.sh@202 -- # NRHUGE=512
00:03:42.356 03:04:16 -- setup/hugepages.sh@202 -- # setup output
00:03:42.356 03:04:16 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:42.356 03:04:16 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:43.297 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:43.297 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:43.297 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:43.297 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:43.297 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:43.298 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:43.298 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:43.298 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:43.298 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:43.298 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:43.298 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:43.298 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:43.298 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:43.298 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:43.298 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:43.298 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:43.298 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:43.562 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:03:43.562 03:04:17 -- setup/hugepages.sh@204 -- #
verify_nr_hugepages
00:03:43.562 03:04:17 -- setup/hugepages.sh@89 -- # local node
00:03:43.562 03:04:17 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:43.562 03:04:17 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:43.562 03:04:17 -- setup/hugepages.sh@92 -- # local surp
00:03:43.562 03:04:17 -- setup/hugepages.sh@93 -- # local resv
00:03:43.562 03:04:17 -- setup/hugepages.sh@94 -- # local anon
00:03:43.563 03:04:17 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:43.563 03:04:17 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:43.563 03:04:17 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:43.563 03:04:17 -- setup/common.sh@18 -- # local node=
00:03:43.563 03:04:17 -- setup/common.sh@19 -- # local var val
00:03:43.563 03:04:17 -- setup/common.sh@20 -- # local mem_f mem
00:03:43.563 03:04:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:43.563 03:04:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:43.563 03:04:17 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:43.563 03:04:17 -- setup/common.sh@28 -- # mapfile -t mem
00:03:43.563 03:04:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:43.563 03:04:17 -- setup/common.sh@31 -- # IFS=': '
00:03:43.563 03:04:17 -- setup/common.sh@31 -- # read -r var val _
00:03:43.563 03:04:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 36813056 kB' 'MemAvailable: 41984344 kB' 'Buffers: 3728 kB' 'Cached: 18762192 kB' 'SwapCached: 0 kB' 'Active: 14679492 kB' 'Inactive: 4649000 kB' 'Active(anon): 14065372 kB' 'Inactive(anon): 0 kB' 'Active(file): 614120 kB' 'Inactive(file): 4649000 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 565860 kB' 'Mapped: 228224 kB' 'Shmem: 13502800 kB' 'KReclaimable: 552604 kB' 'Slab: 952228 kB' 'SReclaimable: 552604 kB' 'SUnreclaim: 399624 kB' 'KernelStack: 12864 kB' 'PageTables: 8756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 15236076 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197212 kB' 'VmallocChunk: 0 kB' 'Percpu: 44928 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2207324 kB' 'DirectMap2M: 28121088 kB' 'DirectMap1G: 38797312 kB'
00:03:43.563 03:04:17 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:43.563 03:04:17 -- setup/common.sh@32 -- # continue
[identical setup/common.sh@31/@32 read-and-continue trace repeated for each remaining meminfo key through HardwareCorrupted]
00:03:43.564 03:04:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:43.564 03:04:17 -- setup/common.sh@33 -- # echo 0
00:03:43.564 03:04:17 -- setup/common.sh@33 -- # return 0
00:03:43.564 03:04:17 -- setup/hugepages.sh@97 -- # anon=0
00:03:43.564 03:04:17 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:43.564 03:04:17 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:43.564 03:04:17 -- setup/common.sh@18 -- # local node=
00:03:43.564 03:04:17 -- setup/common.sh@19 -- # local var val
00:03:43.564 03:04:17 -- setup/common.sh@20 -- # local mem_f mem
00:03:43.564 03:04:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:43.564 03:04:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:43.564 03:04:17 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:43.564 03:04:17 -- setup/common.sh@28 -- # mapfile -t mem
00:03:43.564 03:04:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:43.564 03:04:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 36812528 kB' 'MemAvailable: 41983816 kB' 'Buffers: 3728 kB' 'Cached: 18762192 kB' 'SwapCached: 0 kB' 'Active: 14679116 kB' 'Inactive: 4649000 kB' 'Active(anon): 14064996 kB' 'Inactive(anon): 0 kB' 'Active(file): 614120 kB' 'Inactive(file): 4649000 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 565496 kB' 'Mapped: 228196 kB' 'Shmem: 13502800 kB' 'KReclaimable: 552604 kB' 'Slab: 952204 kB' 'SReclaimable: 552604 kB' 'SUnreclaim: 399600 kB' 'KernelStack: 12880 kB' 'PageTables: 8788 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 15236088 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197148 kB' 'VmallocChunk: 0 kB' 'Percpu: 44928 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2207324 kB' 'DirectMap2M: 28121088 kB' 'DirectMap1G: 38797312 kB'
00:03:43.564 03:04:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:43.564 03:04:17 -- setup/common.sh@32 -- # continue
[identical setup/common.sh@31/@32 read-and-continue trace repeated key by key]
00:03:43.565 03:04:17 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.565 03:04:17 --
setup/common.sh@32 -- # continue 00:03:43.565 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.565 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.565 03:04:17 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.565 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.565 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.565 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.565 03:04:17 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.565 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.565 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.565 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.565 03:04:17 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.565 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.565 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.565 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.565 03:04:17 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.565 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.565 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.565 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.565 03:04:17 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.565 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.565 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.565 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.565 03:04:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.565 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.565 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.565 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.565 03:04:17 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.565 03:04:17 -- 
setup/common.sh@32 -- # continue 00:03:43.565 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.565 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.565 03:04:17 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.565 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.565 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.565 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.565 03:04:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.565 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.565 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.565 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.565 03:04:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.565 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.565 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.565 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.565 03:04:17 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.565 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.565 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.565 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.565 03:04:17 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.565 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.565 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.565 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.565 03:04:17 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.565 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.565 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.565 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.565 03:04:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.565 03:04:17 -- setup/common.sh@32 -- 
# continue 00:03:43.565 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.565 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.565 03:04:17 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.565 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.565 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.565 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.565 03:04:17 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.565 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.565 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.565 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.565 03:04:17 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.565 03:04:17 -- setup/common.sh@33 -- # echo 0 00:03:43.565 03:04:17 -- setup/common.sh@33 -- # return 0 00:03:43.565 03:04:17 -- setup/hugepages.sh@99 -- # surp=0 00:03:43.565 03:04:17 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:43.565 03:04:17 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:43.565 03:04:17 -- setup/common.sh@18 -- # local node= 00:03:43.565 03:04:17 -- setup/common.sh@19 -- # local var val 00:03:43.565 03:04:17 -- setup/common.sh@20 -- # local mem_f mem 00:03:43.565 03:04:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.565 03:04:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.565 03:04:17 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.565 03:04:17 -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.565 03:04:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.565 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.565 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.565 03:04:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 36811520 kB' 'MemAvailable: 41982808 kB' 'Buffers: 3728 kB' 'Cached: 18762204 kB' 
'SwapCached: 0 kB' 'Active: 14679440 kB' 'Inactive: 4649000 kB' 'Active(anon): 14065320 kB' 'Inactive(anon): 0 kB' 'Active(file): 614120 kB' 'Inactive(file): 4649000 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 565876 kB' 'Mapped: 228080 kB' 'Shmem: 13502812 kB' 'KReclaimable: 552604 kB' 'Slab: 952244 kB' 'SReclaimable: 552604 kB' 'SUnreclaim: 399640 kB' 'KernelStack: 12896 kB' 'PageTables: 8848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 15236100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197148 kB' 'VmallocChunk: 0 kB' 'Percpu: 44928 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2207324 kB' 'DirectMap2M: 28121088 kB' 'DirectMap1G: 38797312 kB' 00:03:43.565 03:04:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.565 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.565 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.565 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.565 03:04:17 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.565 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.565 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.565 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.565 03:04:17 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.565 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.565 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.565 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 
00:03:43.565 03:04:17 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.565 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.565 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.565 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.565 03:04:17 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.565 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.565 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.565 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.565 03:04:17 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.565 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.565 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.565 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.565 03:04:17 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.566 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.566 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.566 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.566 03:04:17 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.566 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.566 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.566 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.566 03:04:17 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.566 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.566 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.566 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.566 03:04:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.566 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.566 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.566 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.566 03:04:17 -- 
setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.566 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.566 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.566 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.566 03:04:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.566 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.566 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.566 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.566 03:04:17 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.566 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.566 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.566 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.566 03:04:17 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.566 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.566 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.566 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.566 03:04:17 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.566 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.566 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.566 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.566 03:04:17 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.566 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.566 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.566 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.566 03:04:17 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.566 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.566 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.566 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.566 03:04:17 -- setup/common.sh@32 -- # [[ Zswapped 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.566 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.566 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.566 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.566 03:04:17 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.566 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.566 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.566 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.566 03:04:17 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.566 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.566 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.566 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.566 03:04:17 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.566 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.566 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.566 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.566 03:04:17 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.566 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.566 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.566 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.566 03:04:17 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.566 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.566 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.566 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.566 03:04:17 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.566 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.566 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.566 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.566 03:04:17 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.566 
03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.566 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.566 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.566 03:04:17 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.566 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.566 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.566 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.566 03:04:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.566 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.566 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.566 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.566 03:04:17 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.566 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.566 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.566 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.566 03:04:17 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.566 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.566 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.566 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.566 03:04:17 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.566 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.566 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.566 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.566 03:04:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.566 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.566 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.566 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.566 03:04:17 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.566 03:04:17 -- 
setup/common.sh@32 -- # continue 00:03:43.566 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.566 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.566 03:04:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.566 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.566 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.566 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.566 03:04:17 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.566 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.566 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.566 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.566 03:04:17 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.566 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.566 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.566 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.566 03:04:17 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.566 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.566 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.566 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.566 03:04:17 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.566 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.566 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.566 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.566 03:04:17 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.566 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.566 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.566 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.566 03:04:17 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.566 03:04:17 -- setup/common.sh@32 -- # 
continue 00:03:43.566 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.566 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.566 03:04:17 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.566 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.566 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.566 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.566 03:04:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.566 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.566 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.566 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.566 03:04:17 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.566 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.566 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.566 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.566 03:04:17 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.566 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.566 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.566 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.566 03:04:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.566 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.566 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.566 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.566 03:04:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.566 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.566 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.566 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.566 03:04:17 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.566 03:04:17 -- setup/common.sh@32 -- # continue 
00:03:43.566 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.566 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.566 03:04:17 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.566 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.566 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.566 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.566 03:04:17 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.566 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.567 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.567 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.567 03:04:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.567 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.567 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.567 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.567 03:04:17 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.567 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.567 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.567 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.567 03:04:17 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.567 03:04:17 -- setup/common.sh@33 -- # echo 0 00:03:43.567 03:04:17 -- setup/common.sh@33 -- # return 0 00:03:43.567 03:04:17 -- setup/hugepages.sh@100 -- # resv=0 00:03:43.567 03:04:17 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:43.567 nr_hugepages=1024 00:03:43.567 03:04:17 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:43.567 resv_hugepages=0 00:03:43.567 03:04:17 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:43.567 surplus_hugepages=0 00:03:43.567 03:04:17 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:43.567 anon_hugepages=0 00:03:43.567 03:04:17 -- 
setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:43.567 03:04:17 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:43.567 03:04:17 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:43.567 03:04:17 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:43.567 03:04:17 -- setup/common.sh@18 -- # local node= 00:03:43.567 03:04:17 -- setup/common.sh@19 -- # local var val 00:03:43.567 03:04:17 -- setup/common.sh@20 -- # local mem_f mem 00:03:43.567 03:04:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.567 03:04:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.567 03:04:17 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.567 03:04:17 -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.567 03:04:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.567 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.567 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.567 03:04:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 36811200 kB' 'MemAvailable: 41982488 kB' 'Buffers: 3728 kB' 'Cached: 18762220 kB' 'SwapCached: 0 kB' 'Active: 14680052 kB' 'Inactive: 4649000 kB' 'Active(anon): 14065932 kB' 'Inactive(anon): 0 kB' 'Active(file): 614120 kB' 'Inactive(file): 4649000 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 566488 kB' 'Mapped: 228080 kB' 'Shmem: 13502828 kB' 'KReclaimable: 552604 kB' 'Slab: 952244 kB' 'SReclaimable: 552604 kB' 'SUnreclaim: 399640 kB' 'KernelStack: 12912 kB' 'PageTables: 8916 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 15236776 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197260 kB' 'VmallocChunk: 0 kB' 'Percpu: 44928 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2207324 kB' 'DirectMap2M: 28121088 kB' 'DirectMap1G: 38797312 kB' 00:03:43.567 03:04:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.567 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.567 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.567 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.567 03:04:17 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.567 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.567 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.567 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.567 03:04:17 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.567 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.567 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.567 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.567 03:04:17 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.567 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.567 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.567 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.567 03:04:17 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.567 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.567 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.567 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.567 03:04:17 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.567 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.567 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.567 03:04:17 -- setup/common.sh@31 -- 
# read -r var val _ 00:03:43.567 03:04:17 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.567 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.567 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.567 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.567 03:04:17 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.567 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.567 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.567 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.567 03:04:17 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.567 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.567 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.567 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.567 03:04:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.567 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.567 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.567 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.567 03:04:17 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.567 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.567 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.567 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.567 03:04:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.567 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.567 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.567 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.567 03:04:17 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.567 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.567 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.567 03:04:17 -- setup/common.sh@31 -- # read -r 
var val _ 00:03:43.567 03:04:17 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.567 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.567 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.567 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.567 03:04:17 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.567 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.567 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.567 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.567 03:04:17 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.567 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.567 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.567 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.567 03:04:17 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.567 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.567 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.567 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.567 03:04:17 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.567 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.567 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.567 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.567 03:04:17 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.567 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.567 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.567 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.567 03:04:17 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.567 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.568 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.568 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.568 03:04:17 -- 
setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.568 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.568 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.568 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.568 03:04:17 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.568 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.568 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.568 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.568 03:04:17 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.568 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.568 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.568 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.568 03:04:17 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.568 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.568 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.568 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.568 03:04:17 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.568 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.568 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.568 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.568 03:04:17 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.568 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.568 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.568 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.568 03:04:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.568 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.568 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.568 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.568 03:04:17 -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.568 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.568 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.568 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.568 03:04:17 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.568 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.568 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.568 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.568 03:04:17 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.568 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.568 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.568 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.568 03:04:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.568 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.568 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.568 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.568 03:04:17 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.568 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.568 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.568 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.568 03:04:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.568 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.568 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.568 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.568 03:04:17 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.568 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.568 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.568 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.568 03:04:17 -- setup/common.sh@32 -- # [[ Committed_AS 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.568 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.568 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.568 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.568 03:04:17 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.568 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.568 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.568 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.568 03:04:17 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.568 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.568 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.568 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.568 03:04:17 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.568 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.568 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.568 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.568 03:04:17 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.568 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.568 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.568 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.568 03:04:17 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.568 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.568 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.568 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.568 03:04:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.568 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.568 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.568 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.568 03:04:17 -- setup/common.sh@32 -- # [[ ShmemHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.568 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.568 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.568 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.568 03:04:17 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.568 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.568 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.568 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.568 03:04:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.568 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.568 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.568 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.568 03:04:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.568 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.568 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.568 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.568 03:04:17 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.568 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.568 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.568 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.568 03:04:17 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.568 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.568 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.568 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.568 03:04:17 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.568 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.568 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.568 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.568 03:04:17 -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.568 03:04:17 -- setup/common.sh@33 -- # echo 1024 00:03:43.568 03:04:17 -- setup/common.sh@33 -- # return 0 00:03:43.568 03:04:17 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:43.568 03:04:17 -- setup/hugepages.sh@112 -- # get_nodes 00:03:43.568 03:04:17 -- setup/hugepages.sh@27 -- # local node 00:03:43.568 03:04:17 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:43.568 03:04:17 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:43.568 03:04:17 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:43.568 03:04:17 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:43.568 03:04:17 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:43.568 03:04:17 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:43.568 03:04:17 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:43.568 03:04:17 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:43.568 03:04:17 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:43.568 03:04:17 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:43.568 03:04:17 -- setup/common.sh@18 -- # local node=0 00:03:43.568 03:04:17 -- setup/common.sh@19 -- # local var val 00:03:43.568 03:04:17 -- setup/common.sh@20 -- # local mem_f mem 00:03:43.568 03:04:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.568 03:04:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:43.568 03:04:17 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:43.568 03:04:17 -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.568 03:04:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.568 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.568 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.568 03:04:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
32829884 kB' 'MemFree: 21492896 kB' 'MemUsed: 11336988 kB' 'SwapCached: 0 kB' 'Active: 7621000 kB' 'Inactive: 271804 kB' 'Active(anon): 7220596 kB' 'Inactive(anon): 0 kB' 'Active(file): 400404 kB' 'Inactive(file): 271804 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7698288 kB' 'Mapped: 52904 kB' 'AnonPages: 197660 kB' 'Shmem: 7026080 kB' 'KernelStack: 7416 kB' 'PageTables: 4980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 288976 kB' 'Slab: 491552 kB' 'SReclaimable: 288976 kB' 'SUnreclaim: 202576 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:43.568 03:04:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.568 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.568 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.568 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.568 03:04:17 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.568 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.568 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.568 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.568 03:04:17 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.568 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.569 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.569 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.569 03:04:17 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.569 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.569 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.569 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.569 03:04:17 -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.569 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.569 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.569 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.569 03:04:17 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.569 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.569 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.569 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.569 03:04:17 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.569 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.569 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.569 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.569 03:04:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.569 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.569 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.569 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.569 03:04:17 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.569 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.569 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.569 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.569 03:04:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.569 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.569 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.569 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.569 03:04:17 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.569 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.569 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.569 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.569 03:04:17 -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.569 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.569 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.569 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.569 03:04:17 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.569 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.569 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.569 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.569 03:04:17 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.569 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.569 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.569 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.569 03:04:17 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.569 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.569 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.569 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.569 03:04:17 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.569 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.569 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.569 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.569 03:04:17 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.569 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.569 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.569 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.569 03:04:17 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.569 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.569 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.569 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.569 03:04:17 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.569 
03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.569 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.569 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.569 03:04:17 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.569 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.569 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.569 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.569 03:04:17 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.569 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.569 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.569 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.569 03:04:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.569 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.569 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.569 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.569 03:04:17 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.569 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.569 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.569 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.569 03:04:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.569 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.569 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.569 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.569 03:04:17 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.569 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.569 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.569 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.569 03:04:17 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.569 03:04:17 -- setup/common.sh@32 -- 
# continue 00:03:43.569 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.569 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.569 03:04:17 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.569 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.569 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.569 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.569 03:04:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.569 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.569 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.569 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.569 03:04:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.569 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.569 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.569 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.569 03:04:17 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.569 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.569 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.569 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.569 03:04:17 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.569 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.569 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.569 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.569 03:04:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.569 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.569 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.569 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.569 03:04:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.569 03:04:17 -- setup/common.sh@32 -- # continue 
00:03:43.569 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.569 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.569 03:04:17 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.569 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.569 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.569 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.569 03:04:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.569 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.569 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.569 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.569 03:04:17 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.569 03:04:17 -- setup/common.sh@32 -- # continue 00:03:43.569 03:04:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.569 03:04:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.569 03:04:17 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.569 03:04:17 -- setup/common.sh@33 -- # echo 0 00:03:43.569 03:04:17 -- setup/common.sh@33 -- # return 0 00:03:43.569 03:04:17 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:43.569 03:04:17 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:43.569 03:04:17 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:43.569 03:04:17 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:43.569 03:04:17 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:43.569 node0=1024 expecting 1024 00:03:43.569 03:04:17 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:43.569 00:03:43.569 real 0m2.680s 00:03:43.569 user 0m1.101s 00:03:43.569 sys 0m1.492s 00:03:43.569 03:04:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:43.569 03:04:17 -- common/autotest_common.sh@10 -- # set +x 00:03:43.569 
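The trace above repeatedly exercises setup/common.sh's `get_meminfo` helper: an `IFS=': ' read -r var val _` loop over meminfo lines that echoes the value once the requested key matches. A minimal standalone sketch of that parsing pattern follows; it is simplified relative to the logged helper (the real script also maps `/sys/devices/system/node/nodeN/meminfo` and strips the `Node N ` prefix via `mapfile`, as seen in the trace), and the function name mirrors the log only for readability.

```shell
#!/usr/bin/env bash
# Sketch of the meminfo-parsing loop exercised in the trace above:
# split each "Key:   value kB" line on ':' and space, then return the
# value for the requested key. Simplified from setup/common.sh.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # First field is the key ("HugePages_Total"), second the number.
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < /proc/meminfo
    return 1   # key not present
}

get_meminfo MemTotal
```

The `_` in the `read` soaks up the trailing `kB` unit, which is why the logged comparisons can test bare numbers like `(( 1024 == nr_hugepages + surp + resv ))`.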
************************************ 00:03:43.569 END TEST no_shrink_alloc 00:03:43.569 ************************************ 00:03:43.569 03:04:17 -- setup/hugepages.sh@217 -- # clear_hp 00:03:43.569 03:04:17 -- setup/hugepages.sh@37 -- # local node hp 00:03:43.569 03:04:17 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:43.569 03:04:17 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:43.569 03:04:17 -- setup/hugepages.sh@41 -- # echo 0 00:03:43.569 03:04:17 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:43.569 03:04:17 -- setup/hugepages.sh@41 -- # echo 0 00:03:43.569 03:04:17 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:43.569 03:04:17 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:43.569 03:04:17 -- setup/hugepages.sh@41 -- # echo 0 00:03:43.569 03:04:17 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:43.569 03:04:17 -- setup/hugepages.sh@41 -- # echo 0 00:03:43.570 03:04:17 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:43.570 03:04:17 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:43.570 00:03:43.570 real 0m11.574s 00:03:43.570 user 0m4.387s 00:03:43.570 sys 0m5.965s 00:03:43.570 03:04:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:43.570 03:04:18 -- common/autotest_common.sh@10 -- # set +x 00:03:43.570 ************************************ 00:03:43.570 END TEST hugepages 00:03:43.570 ************************************ 00:03:43.570 03:04:18 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:43.570 03:04:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:43.570 03:04:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:43.570 03:04:18 -- common/autotest_common.sh@10 -- # 
set +x 00:03:43.829 ************************************ 00:03:43.829 START TEST driver 00:03:43.829 ************************************ 00:03:43.829 03:04:18 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:43.829 * Looking for test storage... 00:03:43.829 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:43.829 03:04:18 -- setup/driver.sh@68 -- # setup reset 00:03:43.829 03:04:18 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:43.829 03:04:18 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:46.369 03:04:20 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:46.369 03:04:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:46.369 03:04:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:46.369 03:04:20 -- common/autotest_common.sh@10 -- # set +x 00:03:46.369 ************************************ 00:03:46.369 START TEST guess_driver 00:03:46.369 ************************************ 00:03:46.369 03:04:20 -- common/autotest_common.sh@1111 -- # guess_driver 00:03:46.369 03:04:20 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:46.369 03:04:20 -- setup/driver.sh@47 -- # local fail=0 00:03:46.369 03:04:20 -- setup/driver.sh@49 -- # pick_driver 00:03:46.369 03:04:20 -- setup/driver.sh@36 -- # vfio 00:03:46.369 03:04:20 -- setup/driver.sh@21 -- # local iommu_grups 00:03:46.369 03:04:20 -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:46.369 03:04:20 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:46.369 03:04:20 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:46.369 03:04:20 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:46.369 03:04:20 -- setup/driver.sh@29 -- # (( 141 > 0 )) 00:03:46.369 03:04:20 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:46.369 03:04:20 -- 
setup/driver.sh@14 -- # mod vfio_pci 00:03:46.369 03:04:20 -- setup/driver.sh@12 -- # dep vfio_pci 00:03:46.369 03:04:20 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:46.369 03:04:20 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:46.369 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:46.369 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:46.369 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:46.369 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:46.369 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:46.369 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:46.369 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:46.369 03:04:20 -- setup/driver.sh@30 -- # return 0 00:03:46.369 03:04:20 -- setup/driver.sh@37 -- # echo vfio-pci 00:03:46.369 03:04:20 -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:46.369 03:04:20 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:46.369 03:04:20 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:46.369 Looking for driver=vfio-pci 00:03:46.369 03:04:20 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:46.369 03:04:20 -- setup/driver.sh@45 -- # setup output config 00:03:46.369 03:04:20 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:46.369 03:04:20 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:47.307 03:04:21 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:47.307 03:04:21 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:47.307 03:04:21 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:47.307 03:04:21 -- 
setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:47.307 03:04:21 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:47.307 03:04:21 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:47.307 03:04:21 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:47.307 03:04:21 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:47.307 03:04:21 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:47.307 03:04:21 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:47.307 03:04:21 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:47.307 03:04:21 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:47.307 03:04:21 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:47.307 03:04:21 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:47.307 03:04:21 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:47.307 03:04:21 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:47.307 03:04:21 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:47.307 03:04:21 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:47.307 03:04:21 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:47.307 03:04:21 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:47.307 03:04:21 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:47.307 03:04:21 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:47.569 03:04:21 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:47.569 03:04:21 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:47.569 03:04:21 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:47.569 03:04:21 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:47.569 03:04:21 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:47.569 03:04:21 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:47.569 03:04:21 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:47.569 03:04:21 -- setup/driver.sh@57 -- # read -r _ _ _ _ 
marker setup_driver 00:03:47.569 03:04:21 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:47.569 03:04:21 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:47.569 03:04:21 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:47.569 03:04:21 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:47.569 03:04:21 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:47.569 03:04:21 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:47.569 03:04:21 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:47.569 03:04:21 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:47.569 03:04:21 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:47.569 03:04:21 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:47.569 03:04:21 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:47.569 03:04:21 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:47.569 03:04:21 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:47.569 03:04:21 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:47.569 03:04:21 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:47.569 03:04:21 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:47.569 03:04:21 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:47.569 03:04:21 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:48.541 03:04:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:48.541 03:04:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:48.541 03:04:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:48.541 03:04:22 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:48.541 03:04:22 -- setup/driver.sh@65 -- # setup reset 00:03:48.541 03:04:22 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:48.541 03:04:22 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:51.112 00:03:51.112 real 0m4.760s 00:03:51.112 user 0m1.084s 
00:03:51.112 sys 0m1.747s 00:03:51.112 03:04:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:51.112 03:04:25 -- common/autotest_common.sh@10 -- # set +x 00:03:51.112 ************************************ 00:03:51.112 END TEST guess_driver 00:03:51.112 ************************************ 00:03:51.112 00:03:51.112 real 0m7.257s 00:03:51.112 user 0m1.647s 00:03:51.112 sys 0m2.708s 00:03:51.112 03:04:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:51.112 03:04:25 -- common/autotest_common.sh@10 -- # set +x 00:03:51.112 ************************************ 00:03:51.112 END TEST driver 00:03:51.112 ************************************ 00:03:51.112 03:04:25 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:51.112 03:04:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:51.112 03:04:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:51.112 03:04:25 -- common/autotest_common.sh@10 -- # set +x 00:03:51.112 ************************************ 00:03:51.112 START TEST devices 00:03:51.112 ************************************ 00:03:51.112 03:04:25 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:51.112 * Looking for test storage... 
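The guess_driver run above settles on `vfio-pci` by checking the unsafe no-IOMMU toggle, counting `/sys/kernel/iommu_groups/*` entries (`(( 141 > 0 ))` in this log), and confirming via `modprobe --show-depends vfio_pci` that the module's dependency chain resolves to real `.ko` files. A hedged sketch of that probe, condensed from the setup/driver.sh logic visible in the trace (the `uio_pci_generic` fallback is the usual non-IOMMU alternative, not something this particular log reaches):

```shell
#!/usr/bin/env bash
# Sketch of the vfio-pci detection from guess_driver above: prefer
# vfio-pci when IOMMU groups exist (or unsafe no-IOMMU mode is on) and
# the module's dependency chain resolves; otherwise fall back.
pick_driver() {
    local unsafe_vfio=N
    if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
        unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    fi
    shopt -s nullglob                       # empty array when no groups
    local iommu_groups=(/sys/kernel/iommu_groups/*)
    if [[ ${#iommu_groups[@]} -gt 0 || $unsafe_vfio == [yY] ]] \
        && modprobe --show-depends vfio_pci 2>/dev/null | grep -q '\.ko'; then
        echo vfio-pci
    else
        echo uio_pci_generic
    fi
}

pick_driver
```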
00:03:51.112 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:51.112 03:04:25 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:51.112 03:04:25 -- setup/devices.sh@192 -- # setup reset 00:03:51.112 03:04:25 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:51.112 03:04:25 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:53.021 03:04:27 -- setup/devices.sh@194 -- # get_zoned_devs 00:03:53.021 03:04:27 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:53.021 03:04:27 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:53.021 03:04:27 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:53.021 03:04:27 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:53.021 03:04:27 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:53.021 03:04:27 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:53.021 03:04:27 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:53.021 03:04:27 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:53.021 03:04:27 -- setup/devices.sh@196 -- # blocks=() 00:03:53.021 03:04:27 -- setup/devices.sh@196 -- # declare -a blocks 00:03:53.021 03:04:27 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:53.021 03:04:27 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:53.021 03:04:27 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:53.021 03:04:27 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:53.021 03:04:27 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:53.021 03:04:27 -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:53.021 03:04:27 -- setup/devices.sh@202 -- # pci=0000:88:00.0 00:03:53.021 03:04:27 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:03:53.021 03:04:27 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:53.021 03:04:27 -- scripts/common.sh@378 
-- # local block=nvme0n1 pt 00:03:53.021 03:04:27 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:53.021 No valid GPT data, bailing 00:03:53.021 03:04:27 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:53.021 03:04:27 -- scripts/common.sh@391 -- # pt= 00:03:53.021 03:04:27 -- scripts/common.sh@392 -- # return 1 00:03:53.021 03:04:27 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:53.021 03:04:27 -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:53.021 03:04:27 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:53.021 03:04:27 -- setup/common.sh@80 -- # echo 1000204886016 00:03:53.021 03:04:27 -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:03:53.021 03:04:27 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:53.021 03:04:27 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0 00:03:53.021 03:04:27 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:53.021 03:04:27 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:53.021 03:04:27 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:53.021 03:04:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:53.021 03:04:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:53.021 03:04:27 -- common/autotest_common.sh@10 -- # set +x 00:03:53.021 ************************************ 00:03:53.021 START TEST nvme_mount 00:03:53.021 ************************************ 00:03:53.021 03:04:27 -- common/autotest_common.sh@1111 -- # nvme_mount 00:03:53.021 03:04:27 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:53.021 03:04:27 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:53.021 03:04:27 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:53.021 03:04:27 -- setup/devices.sh@98 -- # 
nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:53.021 03:04:27 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:53.021 03:04:27 -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:53.021 03:04:27 -- setup/common.sh@40 -- # local part_no=1 00:03:53.021 03:04:27 -- setup/common.sh@41 -- # local size=1073741824 00:03:53.021 03:04:27 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:53.021 03:04:27 -- setup/common.sh@44 -- # parts=() 00:03:53.021 03:04:27 -- setup/common.sh@44 -- # local parts 00:03:53.021 03:04:27 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:53.021 03:04:27 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:53.021 03:04:27 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:53.021 03:04:27 -- setup/common.sh@46 -- # (( part++ )) 00:03:53.021 03:04:27 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:53.021 03:04:27 -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:53.021 03:04:27 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:53.021 03:04:27 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:53.959 Creating new GPT entries in memory. 00:03:53.959 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:53.959 other utilities. 00:03:53.959 03:04:28 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:53.959 03:04:28 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:53.959 03:04:28 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:53.959 03:04:28 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:53.959 03:04:28 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:54.900 Creating new GPT entries in memory. 00:03:54.900 The operation has completed successfully. 
00:03:54.900 03:04:29 -- setup/common.sh@57 -- # (( part++ )) 00:03:54.900 03:04:29 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:54.900 03:04:29 -- setup/common.sh@62 -- # wait 1362279 00:03:54.900 03:04:29 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:54.900 03:04:29 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:54.900 03:04:29 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:54.900 03:04:29 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:54.900 03:04:29 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:54.900 03:04:29 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:54.900 03:04:29 -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:54.900 03:04:29 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:54.900 03:04:29 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:54.900 03:04:29 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:54.900 03:04:29 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:54.900 03:04:29 -- setup/devices.sh@53 -- # local found=0 00:03:54.900 03:04:29 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:54.900 03:04:29 -- setup/devices.sh@56 -- # : 00:03:54.900 03:04:29 -- setup/devices.sh@59 -- # local pci status 00:03:54.900 03:04:29 -- setup/devices.sh@60 -- # read -r pci _ _ 
status 00:03:54.900 03:04:29 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:54.900 03:04:29 -- setup/devices.sh@47 -- # setup output config 00:03:54.900 03:04:29 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:54.900 03:04:29 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:55.837 03:04:30 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:55.837 03:04:30 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:55.837 03:04:30 -- setup/devices.sh@63 -- # found=1 00:03:55.837 03:04:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.837 03:04:30 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:55.837 03:04:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.837 03:04:30 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:55.837 03:04:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.837 03:04:30 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:55.837 03:04:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.837 03:04:30 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:55.837 03:04:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.837 03:04:30 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:55.837 03:04:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.837 03:04:30 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:55.837 03:04:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.837 03:04:30 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:55.837 03:04:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.837 03:04:30 -- 
setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:55.837 03:04:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.837 03:04:30 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:55.837 03:04:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.837 03:04:30 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:55.837 03:04:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.837 03:04:30 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:55.837 03:04:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.837 03:04:30 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:55.837 03:04:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.837 03:04:30 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:55.837 03:04:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.837 03:04:30 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:55.837 03:04:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.837 03:04:30 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:55.837 03:04:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.837 03:04:30 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:55.837 03:04:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.098 03:04:30 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:56.098 03:04:30 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:56.098 03:04:30 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:56.098 03:04:30 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:56.098 
03:04:30 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:56.098 03:04:30 -- setup/devices.sh@110 -- # cleanup_nvme 00:03:56.098 03:04:30 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:56.098 03:04:30 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:56.098 03:04:30 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:56.098 03:04:30 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:56.098 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:56.098 03:04:30 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:56.098 03:04:30 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:56.357 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:56.357 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:56.357 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:56.357 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:56.357 03:04:30 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:56.357 03:04:30 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:56.357 03:04:30 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:56.357 03:04:30 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:56.357 03:04:30 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:56.357 03:04:30 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:56.357 03:04:30 -- setup/devices.sh@116 -- # verify 0000:88:00.0 
nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:56.357 03:04:30 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:56.357 03:04:30 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:56.357 03:04:30 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:56.357 03:04:30 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:56.357 03:04:30 -- setup/devices.sh@53 -- # local found=0 00:03:56.357 03:04:30 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:56.357 03:04:30 -- setup/devices.sh@56 -- # : 00:03:56.357 03:04:30 -- setup/devices.sh@59 -- # local pci status 00:03:56.357 03:04:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.357 03:04:30 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:56.357 03:04:30 -- setup/devices.sh@47 -- # setup output config 00:03:56.357 03:04:30 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:56.357 03:04:30 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:57.736 03:04:31 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:57.736 03:04:31 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:57.736 03:04:31 -- setup/devices.sh@63 -- # found=1 00:03:57.736 03:04:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.736 03:04:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:57.736 03:04:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.736 03:04:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == 
\0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:57.736 03:04:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.736 03:04:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:57.736 03:04:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.736 03:04:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:57.736 03:04:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.736 03:04:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:57.736 03:04:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.736 03:04:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:57.736 03:04:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.736 03:04:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:57.736 03:04:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.736 03:04:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:57.736 03:04:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.736 03:04:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:57.736 03:04:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.736 03:04:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:57.736 03:04:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.736 03:04:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:57.736 03:04:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.736 03:04:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:57.736 03:04:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.736 03:04:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:57.736 03:04:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.736 03:04:31 -- 
setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:57.736 03:04:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.736 03:04:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:57.736 03:04:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.736 03:04:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:57.736 03:04:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.736 03:04:32 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:57.736 03:04:32 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:57.736 03:04:32 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:57.736 03:04:32 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:57.736 03:04:32 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:57.736 03:04:32 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:57.736 03:04:32 -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:03:57.737 03:04:32 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:57.737 03:04:32 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:57.737 03:04:32 -- setup/devices.sh@50 -- # local mount_point= 00:03:57.737 03:04:32 -- setup/devices.sh@51 -- # local test_file= 00:03:57.737 03:04:32 -- setup/devices.sh@53 -- # local found=0 00:03:57.737 03:04:32 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:57.737 03:04:32 -- setup/devices.sh@59 -- # local pci status 00:03:57.737 03:04:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.737 03:04:32 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:57.737 03:04:32 -- setup/devices.sh@47 -- # setup 
output config 00:03:57.737 03:04:32 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:57.737 03:04:32 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:58.678 03:04:33 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:58.678 03:04:33 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:58.678 03:04:33 -- setup/devices.sh@63 -- # found=1 00:03:58.678 03:04:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.678 03:04:33 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:58.678 03:04:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.678 03:04:33 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:58.678 03:04:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.678 03:04:33 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:58.678 03:04:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.678 03:04:33 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:58.678 03:04:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.678 03:04:33 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:58.678 03:04:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.678 03:04:33 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:58.678 03:04:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.678 03:04:33 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:58.678 03:04:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.678 03:04:33 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:58.678 03:04:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.678 03:04:33 -- 
setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:58.678 03:04:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.678 03:04:33 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:58.678 03:04:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.678 03:04:33 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:58.678 03:04:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.678 03:04:33 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:58.678 03:04:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.678 03:04:33 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:58.678 03:04:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.678 03:04:33 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:58.678 03:04:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.678 03:04:33 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:58.678 03:04:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.678 03:04:33 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:58.678 03:04:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.937 03:04:33 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:58.937 03:04:33 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:58.937 03:04:33 -- setup/devices.sh@68 -- # return 0 00:03:58.937 03:04:33 -- setup/devices.sh@128 -- # cleanup_nvme 00:03:58.937 03:04:33 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:58.937 03:04:33 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:58.937 03:04:33 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:58.937 03:04:33 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:58.937 /dev/nvme0n1: 2 bytes were erased at 
offset 0x00000438 (ext4): 53 ef 00:03:58.937 00:03:58.937 real 0m6.128s 00:03:58.937 user 0m1.352s 00:03:58.937 sys 0m2.324s 00:03:58.937 03:04:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:58.937 03:04:33 -- common/autotest_common.sh@10 -- # set +x 00:03:58.937 ************************************ 00:03:58.937 END TEST nvme_mount 00:03:58.937 ************************************ 00:03:58.937 03:04:33 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:58.937 03:04:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:58.937 03:04:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:58.937 03:04:33 -- common/autotest_common.sh@10 -- # set +x 00:03:58.937 ************************************ 00:03:58.937 START TEST dm_mount 00:03:58.937 ************************************ 00:03:58.937 03:04:33 -- common/autotest_common.sh@1111 -- # dm_mount 00:03:58.937 03:04:33 -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:58.937 03:04:33 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:58.937 03:04:33 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:58.937 03:04:33 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:58.937 03:04:33 -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:58.937 03:04:33 -- setup/common.sh@40 -- # local part_no=2 00:03:58.937 03:04:33 -- setup/common.sh@41 -- # local size=1073741824 00:03:58.937 03:04:33 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:58.937 03:04:33 -- setup/common.sh@44 -- # parts=() 00:03:58.937 03:04:33 -- setup/common.sh@44 -- # local parts 00:03:58.937 03:04:33 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:58.937 03:04:33 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:58.937 03:04:33 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:58.937 03:04:33 -- setup/common.sh@46 -- # (( part++ )) 00:03:58.937 03:04:33 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:58.937 03:04:33 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 
00:03:58.937 03:04:33 -- setup/common.sh@46 -- # (( part++ )) 00:03:58.937 03:04:33 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:58.937 03:04:33 -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:58.937 03:04:33 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:58.937 03:04:33 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:00.317 Creating new GPT entries in memory. 00:04:00.317 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:00.317 other utilities. 00:04:00.317 03:04:34 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:00.317 03:04:34 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:00.317 03:04:34 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:00.317 03:04:34 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:00.317 03:04:34 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:01.254 Creating new GPT entries in memory. 00:04:01.254 The operation has completed successfully. 00:04:01.254 03:04:35 -- setup/common.sh@57 -- # (( part++ )) 00:04:01.254 03:04:35 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:01.254 03:04:35 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:01.254 03:04:35 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:01.254 03:04:35 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:02.192 The operation has completed successfully. 
00:04:02.192 03:04:36 -- setup/common.sh@57 -- # (( part++ )) 00:04:02.192 03:04:36 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:02.192 03:04:36 -- setup/common.sh@62 -- # wait 1364672 00:04:02.192 03:04:36 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:02.192 03:04:36 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:02.192 03:04:36 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:02.192 03:04:36 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:02.192 03:04:36 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:02.192 03:04:36 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:02.192 03:04:36 -- setup/devices.sh@161 -- # break 00:04:02.192 03:04:36 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:02.192 03:04:36 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:02.192 03:04:36 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:02.192 03:04:36 -- setup/devices.sh@166 -- # dm=dm-0 00:04:02.192 03:04:36 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:02.192 03:04:36 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:02.192 03:04:36 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:02.192 03:04:36 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:02.192 03:04:36 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:02.192 03:04:36 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:02.192 03:04:36 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:02.192 03:04:36 -- setup/common.sh@72 -- # mount 
/dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:02.192 03:04:36 -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:02.192 03:04:36 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:02.192 03:04:36 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:02.192 03:04:36 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:02.192 03:04:36 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:02.192 03:04:36 -- setup/devices.sh@53 -- # local found=0 00:04:02.192 03:04:36 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:02.192 03:04:36 -- setup/devices.sh@56 -- # : 00:04:02.192 03:04:36 -- setup/devices.sh@59 -- # local pci status 00:04:02.192 03:04:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.192 03:04:36 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:02.192 03:04:36 -- setup/devices.sh@47 -- # setup output config 00:04:02.192 03:04:36 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:02.192 03:04:36 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:03.131 03:04:37 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:03.131 03:04:37 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:03.131 03:04:37 -- setup/devices.sh@63 -- # found=1 00:04:03.131 03:04:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.131 
03:04:37 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:04:03.131 03:04:37 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:03.131 03:04:37 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:04:03.131 03:04:37 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:03.131 03:04:37 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:04:03.131 03:04:37 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:03.131 03:04:37 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:04:03.131 03:04:37 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:03.131 03:04:37 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:04:03.131 03:04:37 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:03.131 03:04:37 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:04:03.131 03:04:37 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:03.131 03:04:37 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:04:03.131 03:04:37 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:03.131 03:04:37 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:04:03.131 03:04:37 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:03.131 03:04:37 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:04:03.131 03:04:37 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:03.131 03:04:37 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:04:03.131 03:04:37 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:03.131 03:04:37 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:04:03.131 03:04:37 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:03.131 03:04:37 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:04:03.131 03:04:37 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:03.131 03:04:37 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:04:03.131 03:04:37 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:03.131 03:04:37 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:04:03.131 03:04:37 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:03.131 03:04:37 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:04:03.131 03:04:37 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:03.131 03:04:37 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:04:03.131 03:04:37 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:03.391 03:04:37 -- setup/devices.sh@66 -- # (( found == 1 ))
00:04:03.391 03:04:37 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]]
00:04:03.391 03:04:37 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:04:03.391 03:04:37 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]]
00:04:03.391 03:04:37 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:04:03.391 03:04:37 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:04:03.391 03:04:37 -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' ''
00:04:03.391 03:04:37 -- setup/devices.sh@48 -- # local dev=0000:88:00.0
00:04:03.391 03:04:37 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0
00:04:03.391 03:04:37 -- setup/devices.sh@50 -- # local mount_point=
00:04:03.391 03:04:37 -- setup/devices.sh@51 -- # local test_file=
00:04:03.391 03:04:37 -- setup/devices.sh@53 -- # local found=0
00:04:03.391 03:04:37 -- setup/devices.sh@55 -- # [[ -n '' ]]
00:04:03.391 03:04:37 -- setup/devices.sh@59 -- # local pci status
00:04:03.391 03:04:37 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:03.391 03:04:37 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0
00:04:03.391 03:04:37 -- setup/devices.sh@47 -- # setup output config
00:04:03.391 03:04:37 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:03.391 03:04:37 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:04:04.346 03:04:38 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:04:04.346 03:04:38 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]]
00:04:04.346 03:04:38 -- setup/devices.sh@63 -- # found=1
00:04:04.346 03:04:38 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:04.346 03:04:38 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:04:04.346 03:04:38 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:04.346 03:04:38 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:04:04.346 03:04:38 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:04.346 03:04:38 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:04:04.346 03:04:38 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:04.346 03:04:38 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:04:04.346 03:04:38 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:04.346 03:04:38 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:04:04.347 03:04:38 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:04.347 03:04:38 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:04:04.347 03:04:38 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:04.347 03:04:38 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:04:04.347 03:04:38 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:04.347 03:04:38 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:04:04.347 03:04:38 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:04.347 03:04:38 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:04:04.347 03:04:38 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:04.347 03:04:38 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:04:04.347 03:04:38 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:04.347 03:04:38 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:04:04.347 03:04:38 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:04.347 03:04:38 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:04:04.347 03:04:38 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:04.347 03:04:38 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:04:04.347 03:04:38 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:04.347 03:04:38 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:04:04.347 03:04:38 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:04.347 03:04:38 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:04:04.347 03:04:38 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:04.347 03:04:38 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:04:04.347 03:04:38 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:04.642 03:04:38 -- setup/devices.sh@66 -- # (( found == 1 ))
00:04:04.642 03:04:38 -- setup/devices.sh@68 -- # [[ -n '' ]]
00:04:04.642 03:04:38 -- setup/devices.sh@68 -- # return 0
00:04:04.642 03:04:38 -- setup/devices.sh@187 -- # cleanup_dm
00:04:04.642 03:04:38 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:04:04.642 03:04:38 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
00:04:04.642 03:04:38 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test
00:04:04.642 03:04:38 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]]
00:04:04.642 03:04:38 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1
00:04:04.642 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:04:04.642 03:04:38 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:04:04.642 03:04:38 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2
00:04:04.642
00:04:04.642 real	0m5.502s
00:04:04.642 user	0m0.894s
00:04:04.642 sys	0m1.440s
00:04:04.642 03:04:38 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:04:04.642 03:04:38 -- common/autotest_common.sh@10 -- # set +x
00:04:04.642 ************************************
00:04:04.642 END TEST dm_mount
00:04:04.642 ************************************
00:04:04.642 03:04:38 -- setup/devices.sh@1 -- # cleanup
00:04:04.642 03:04:38 -- setup/devices.sh@11 -- # cleanup_nvme
00:04:04.642 03:04:38 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:04.642 03:04:38 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:04:04.642 03:04:38 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:04:04.642 03:04:38 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:04:04.642 03:04:38 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:04:04.905 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:04:04.905 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54
00:04:04.905 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:04:04.905 /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:04:04.905 03:04:39 -- setup/devices.sh@12 -- # cleanup_dm
00:04:04.905 03:04:39 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:04:04.905 03:04:39 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
00:04:04.905 03:04:39 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]]
00:04:04.905 03:04:39 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:04:04.905 03:04:39 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]]
00:04:04.905 03:04:39 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1
00:04:04.905
00:04:04.905 real	0m13.713s
00:04:04.905 user	0m2.959s
00:04:04.905 sys	0m4.881s
00:04:04.905 03:04:39 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:04:04.905 03:04:39 -- common/autotest_common.sh@10 -- # set +x
00:04:04.905 ************************************
00:04:04.905 END TEST devices
00:04:04.905 ************************************
00:04:04.905
00:04:04.905 real	0m44.031s
00:04:04.905 user	0m12.612s
00:04:04.905 sys	0m19.381s
00:04:04.905 03:04:39 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:04:04.905 03:04:39 -- common/autotest_common.sh@10 -- # set +x
00:04:04.905 ************************************
00:04:04.905 END TEST setup.sh
00:04:04.905 ************************************
00:04:04.905 03:04:39 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:04:05.844 Hugepages
00:04:05.844 node     hugesize     free /  total
00:04:05.844 node0   1048576kB        0 /      0
00:04:05.844 node0      2048kB     2048 /   2048
00:04:05.844 node1   1048576kB        0 /      0
00:04:05.844 node1      2048kB        0 /      0
00:04:05.844
00:04:05.844 Type     BDF             Vendor Device NUMA    Driver           Device     Block devices
00:04:05.844 I/OAT    0000:00:04.0    8086   0e20   0       ioatdma          -          -
00:04:05.844 I/OAT    0000:00:04.1    8086   0e21   0       ioatdma          -          -
00:04:05.844 I/OAT    0000:00:04.2    8086   0e22   0       ioatdma          -          -
00:04:06.103 I/OAT    0000:00:04.3    8086   0e23   0       ioatdma          -          -
00:04:06.103 I/OAT    0000:00:04.4    8086   0e24   0       ioatdma          -          -
00:04:06.103 I/OAT    0000:00:04.5    8086   0e25   0       ioatdma          -          -
00:04:06.103 I/OAT    0000:00:04.6    8086   0e26   0       ioatdma          -          -
00:04:06.103 I/OAT    0000:00:04.7    8086   0e27   0       ioatdma          -          -
00:04:06.103 I/OAT    0000:80:04.0    8086   0e20   1       ioatdma          -          -
00:04:06.103 I/OAT    0000:80:04.1    8086   0e21   1       ioatdma          -          -
00:04:06.103 I/OAT    0000:80:04.2    8086   0e22   1       ioatdma          -          -
00:04:06.103 I/OAT    0000:80:04.3    8086   0e23   1       ioatdma          -          -
00:04:06.103 I/OAT    0000:80:04.4    8086   0e24   1       ioatdma          -          -
00:04:06.103 I/OAT    0000:80:04.5    8086   0e25   1       ioatdma          -          -
00:04:06.103 I/OAT    0000:80:04.6    8086   0e26   1       ioatdma          -          -
00:04:06.103 I/OAT    0000:80:04.7    8086   0e27   1       ioatdma          -          -
00:04:06.103 NVMe     0000:88:00.0    8086   0a54   1       nvme             nvme0      nvme0n1
00:04:06.103 03:04:40 -- spdk/autotest.sh@130 -- # uname -s
00:04:06.103 03:04:40 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]]
00:04:06.103 03:04:40 -- spdk/autotest.sh@132 -- # nvme_namespace_revert
00:04:06.103 03:04:40 -- common/autotest_common.sh@1517 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:07.483 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:04:07.483 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:04:07.484 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:04:07.484 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:04:07.484 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:04:07.484 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:04:07.484 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:04:07.484 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:04:07.484 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:04:07.484 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:04:07.484 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:04:07.484 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:04:07.484 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:04:07.484 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:04:07.484 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:04:07.484 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:04:08.421 0000:88:00.0 (8086 0a54): nvme -> vfio-pci
00:04:08.421 03:04:42 -- common/autotest_common.sh@1518 -- # sleep 1
00:04:09.357 03:04:43 -- common/autotest_common.sh@1519 -- # bdfs=()
00:04:09.357 03:04:43 -- common/autotest_common.sh@1519 -- # local bdfs
00:04:09.357 03:04:43 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs))
00:04:09.357 03:04:43 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs
00:04:09.357 03:04:43 -- common/autotest_common.sh@1499 -- # bdfs=()
00:04:09.357 03:04:43 -- common/autotest_common.sh@1499 -- # local bdfs
00:04:09.357 03:04:43 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:04:09.357 03:04:43 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:04:09.357 03:04:43 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr'
00:04:09.615 03:04:43 -- common/autotest_common.sh@1501 -- # (( 1 == 0 ))
00:04:09.615 03:04:43 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:88:00.0
00:04:09.615 03:04:43 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:04:10.553 Waiting for block devices as requested
00:04:10.553 0000:88:00.0 (8086 0a54): vfio-pci -> nvme
00:04:10.553 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma
00:04:10.811 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma
00:04:10.811 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma
00:04:10.811 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma
00:04:11.071 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma
00:04:11.071 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma
00:04:11.071 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma
00:04:11.071 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma
00:04:11.331 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma
00:04:11.331 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma
00:04:11.331 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma
00:04:11.331 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma
00:04:11.591 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma
00:04:11.591 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma
00:04:11.591 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma
00:04:11.591 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma
00:04:11.850 03:04:46 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}"
00:04:11.850 03:04:46 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0
00:04:11.850 03:04:46 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0
00:04:11.850 03:04:46 -- common/autotest_common.sh@1488 -- # grep 0000:88:00.0/nvme/nvme
00:04:11.850 03:04:46 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0
00:04:11.850 03:04:46 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]]
00:04:11.850 03:04:46 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0
00:04:11.850 03:04:46 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme0
00:04:11.850 03:04:46 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0
00:04:11.850 03:04:46 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]]
00:04:11.850 03:04:46 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0
00:04:11.850 03:04:46 -- common/autotest_common.sh@1531 -- # grep oacs
00:04:11.850 03:04:46 -- common/autotest_common.sh@1531 -- # cut -d: -f2
00:04:11.850 03:04:46 -- common/autotest_common.sh@1531 -- # oacs=' 0xf'
00:04:11.850 03:04:46 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8
00:04:11.850 03:04:46 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]]
00:04:11.850 03:04:46 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0
00:04:11.850 03:04:46 -- common/autotest_common.sh@1540 -- # grep unvmcap
00:04:11.850 03:04:46 -- common/autotest_common.sh@1540 -- # cut -d: -f2
00:04:11.850 03:04:46 -- common/autotest_common.sh@1540 -- # unvmcap=' 0'
00:04:11.850 03:04:46 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]]
00:04:11.850 03:04:46 -- common/autotest_common.sh@1543 -- # continue
00:04:11.850 03:04:46 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup
00:04:11.850 03:04:46 -- common/autotest_common.sh@716 -- # xtrace_disable
00:04:11.850 03:04:46 -- common/autotest_common.sh@10 -- # set +x
00:04:11.850 03:04:46 -- spdk/autotest.sh@138 -- # timing_enter afterboot
00:04:11.850 03:04:46 -- common/autotest_common.sh@710 -- # xtrace_disable
00:04:11.850 03:04:46 -- common/autotest_common.sh@10 -- # set +x
00:04:11.850 03:04:46 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:13.229 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:04:13.229 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:04:13.229 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:04:13.229 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:04:13.229 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:04:13.229 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:04:13.229 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:04:13.229 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:04:13.229 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:04:13.229 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:04:13.229 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:04:13.229 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:04:13.229 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:04:13.229 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:04:13.229 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:04:13.229 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:04:14.170 0000:88:00.0 (8086 0a54): nvme -> vfio-pci
00:04:14.170 03:04:48 -- spdk/autotest.sh@140 -- # timing_exit afterboot
00:04:14.170 03:04:48 -- common/autotest_common.sh@716 -- # xtrace_disable
00:04:14.170 03:04:48 -- common/autotest_common.sh@10 -- # set +x
00:04:14.170 03:04:48 -- spdk/autotest.sh@144 -- # opal_revert_cleanup
00:04:14.170 03:04:48 -- common/autotest_common.sh@1577 -- # mapfile -t bdfs
00:04:14.170 03:04:48 -- common/autotest_common.sh@1577 -- # get_nvme_bdfs_by_id 0x0a54
00:04:14.170 03:04:48 -- common/autotest_common.sh@1563 -- # bdfs=()
00:04:14.170 03:04:48 -- common/autotest_common.sh@1563 -- # local bdfs
00:04:14.170 03:04:48 -- common/autotest_common.sh@1565 -- # get_nvme_bdfs
00:04:14.170 03:04:48 -- common/autotest_common.sh@1499 -- # bdfs=()
00:04:14.170 03:04:48 -- common/autotest_common.sh@1499 -- # local bdfs
00:04:14.170 03:04:48 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:04:14.170 03:04:48 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:04:14.170 03:04:48 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr'
00:04:14.170 03:04:48 -- common/autotest_common.sh@1501 -- # (( 1 == 0 ))
00:04:14.170 03:04:48 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:88:00.0
00:04:14.170 03:04:48 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs)
00:04:14.170 03:04:48 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:88:00.0/device
00:04:14.170 03:04:48 -- common/autotest_common.sh@1566 -- # device=0x0a54
00:04:14.170 03:04:48 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]]
00:04:14.170 03:04:48 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf)
00:04:14.170 03:04:48 -- common/autotest_common.sh@1572 -- # printf '%s\n' 0000:88:00.0
00:04:14.170 03:04:48 -- common/autotest_common.sh@1578 -- # [[ -z 0000:88:00.0 ]]
00:04:14.170 03:04:48 -- common/autotest_common.sh@1583 -- # spdk_tgt_pid=1369841
00:04:14.170 03:04:48 -- common/autotest_common.sh@1582 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:04:14.170 03:04:48 -- common/autotest_common.sh@1584 -- # waitforlisten 1369841
00:04:14.170 03:04:48 -- common/autotest_common.sh@817 -- # '[' -z 1369841 ']'
00:04:14.170 03:04:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:14.170 03:04:48 -- common/autotest_common.sh@822 -- # local max_retries=100
00:04:14.170 03:04:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:14.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:14.170 03:04:48 -- common/autotest_common.sh@826 -- # xtrace_disable
00:04:14.170 03:04:48 -- common/autotest_common.sh@10 -- # set +x
00:04:14.170 [2024-04-25 03:04:48.640128] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... [2024-04-25 03:04:48.640218] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1369841 ]
00:04:14.170 EAL: No free 2048 kB hugepages reported on node 1
00:04:14.430 [2024-04-25 03:04:48.698907] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:14.430 [2024-04-25 03:04:48.806118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:04:14.691 03:04:49 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:04:14.691 03:04:49 -- common/autotest_common.sh@850 -- # return 0
00:04:14.691 03:04:49 -- common/autotest_common.sh@1586 -- # bdf_id=0
00:04:14.691 03:04:49 -- common/autotest_common.sh@1587 -- # for bdf in "${bdfs[@]}"
00:04:14.691 03:04:49 -- common/autotest_common.sh@1588 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0
00:04:17.989 nvme0n1
00:04:17.989 03:04:52 -- common/autotest_common.sh@1590 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test
00:04:17.989 [2024-04-25 03:04:52.423950] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18
00:04:17.989 [2024-04-25 03:04:52.423993] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18
00:04:17.989 request:
00:04:17.989 {
00:04:17.989 "nvme_ctrlr_name": "nvme0",
00:04:17.989 "password": "test",
00:04:17.989 "method": "bdev_nvme_opal_revert",
00:04:17.989 "req_id": 1
00:04:17.989 }
00:04:17.989 Got JSON-RPC error response
00:04:17.989 response:
00:04:17.989 {
00:04:17.989 "code": -32603,
00:04:17.989 "message": "Internal error"
00:04:17.989 }
00:04:17.989 03:04:52 -- common/autotest_common.sh@1590 -- # true
00:04:17.989 03:04:52 -- common/autotest_common.sh@1591 -- # (( ++bdf_id ))
00:04:17.989 03:04:52 -- common/autotest_common.sh@1594 -- # killprocess 1369841
00:04:17.989 03:04:52 -- common/autotest_common.sh@936 -- # '[' -z 1369841 ']'
00:04:17.989 03:04:52 -- common/autotest_common.sh@940 -- # kill -0 1369841
00:04:17.989 03:04:52 -- common/autotest_common.sh@941 -- # uname
00:04:17.989 03:04:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:04:17.989 03:04:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1369841
00:04:17.989 03:04:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:04:17.989 03:04:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:04:17.989 03:04:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1369841'
00:04:17.989 killing process with pid 1369841
00:04:17.989 03:04:52 -- common/autotest_common.sh@955 -- # kill 1369841
00:04:17.989 03:04:52 -- common/autotest_common.sh@960 -- # wait 1369841
00:04:19.895 03:04:54 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']'
00:04:19.895 03:04:54 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']'
00:04:19.895 03:04:54 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]]
00:04:19.895 03:04:54 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]]
00:04:19.895 03:04:54 -- spdk/autotest.sh@162 -- # timing_enter lib
00:04:19.895 03:04:54 -- common/autotest_common.sh@710 -- # xtrace_disable
00:04:19.895 03:04:54 -- common/autotest_common.sh@10 -- # set +x
00:04:19.895 03:04:54 -- spdk/autotest.sh@164 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh
00:04:19.895 03:04:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:19.895 03:04:54 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:19.895 03:04:54 -- common/autotest_common.sh@10 -- # set +x
00:04:19.895 ************************************
00:04:19.895 START TEST env
00:04:19.895 ************************************
00:04:19.895 03:04:54 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh
00:04:20.154 * Looking for test storage...
00:04:20.155 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env
00:04:20.155 03:04:54 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut
00:04:20.155 03:04:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:20.155 03:04:54 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:20.155 03:04:54 -- common/autotest_common.sh@10 -- # set +x
00:04:20.155 ************************************
00:04:20.155 START TEST env_memory
00:04:20.155 ************************************
00:04:20.155 03:04:54 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut
00:04:20.155
00:04:20.155
00:04:20.155 CUnit - A unit testing framework for C - Version 2.1-3
00:04:20.155 http://cunit.sourceforge.net/
00:04:20.155
00:04:20.155
00:04:20.155 Suite: memory
00:04:20.155 Test: alloc and free memory map ...[2024-04-25 03:04:54.560985] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed
00:04:20.155 passed
00:04:20.155 Test: mem map translation ...[2024-04-25 03:04:54.585837] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234
00:04:20.155 [2024-04-25 03:04:54.585863] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152
00:04:20.155 [2024-04-25 03:04:54.585915] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656
00:04:20.155 [2024-04-25 03:04:54.585943] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map
00:04:20.155 passed
00:04:20.155 Test: mem map registration ...[2024-04-25 03:04:54.638170] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234
00:04:20.155 [2024-04-25 03:04:54.638193] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152
00:04:20.155 passed
00:04:20.415 Test: mem map adjacent registrations ...passed
00:04:20.415
00:04:20.415 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:04:20.415 suites      1      1    n/a      0        0
00:04:20.415 tests       4      4      4      0        0
00:04:20.415 asserts   152    152    152      0      n/a
00:04:20.415
00:04:20.415 Elapsed time = 0.174 seconds
00:04:20.415
00:04:20.415 real	0m0.181s
00:04:20.415 user	0m0.168s
00:04:20.415 sys	0m0.012s
00:04:20.415 03:04:54 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:04:20.415 03:04:54 -- common/autotest_common.sh@10 -- # set +x
00:04:20.415 ************************************
00:04:20.415 END TEST env_memory
00:04:20.415 ************************************
00:04:20.415 03:04:54 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys
00:04:20.415 03:04:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:20.415 03:04:54 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:20.415 03:04:54 -- common/autotest_common.sh@10 -- # set +x
00:04:20.415 ************************************
00:04:20.415 START TEST env_vtophys
00:04:20.415 ************************************
00:04:20.415 03:04:54 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys
00:04:20.415 EAL: lib.eal log level changed from notice to debug
00:04:20.415 EAL: Detected lcore 0 as core 0 on socket 0
00:04:20.415 EAL: Detected lcore 1 as core 1 on socket 0
00:04:20.415 EAL: Detected lcore 2 as core 2 on socket 0
00:04:20.415 EAL: Detected lcore 3 as core 3 on socket 0
00:04:20.415 EAL: Detected lcore 4 as core 4 on socket 0
00:04:20.415 EAL: Detected lcore 5 as core 5 on socket 0
00:04:20.415 EAL: Detected lcore 6 as core 8 on socket 0
00:04:20.415 EAL: Detected lcore 7 as core 9 on socket 0
00:04:20.415 EAL: Detected lcore 8 as core 10 on socket 0
00:04:20.415 EAL: Detected lcore 9 as core 11 on socket 0
00:04:20.415 EAL: Detected lcore 10 as core 12 on socket 0
00:04:20.415 EAL: Detected lcore 11 as core 13 on socket 0
00:04:20.415 EAL: Detected lcore 12 as core 0 on socket 1
00:04:20.415 EAL: Detected lcore 13 as core 1 on socket 1
00:04:20.415 EAL: Detected lcore 14 as core 2 on socket 1
00:04:20.415 EAL: Detected lcore 15 as core 3 on socket 1
00:04:20.415 EAL: Detected lcore 16 as core 4 on socket 1
00:04:20.415 EAL: Detected lcore 17 as core 5 on socket 1
00:04:20.415 EAL: Detected lcore 18 as core 8 on socket 1
00:04:20.415 EAL: Detected lcore 19 as core 9 on socket 1
00:04:20.415 EAL: Detected lcore 20 as core 10 on socket 1
00:04:20.415 EAL: Detected lcore 21 as core 11 on socket 1
00:04:20.415 EAL: Detected lcore 22 as core 12 on socket 1
00:04:20.415 EAL: Detected lcore 23 as core 13 on socket 1
00:04:20.415 EAL: Detected lcore 24 as core 0 on socket 0
00:04:20.415 EAL: Detected lcore 25 as core 1 on socket 0
00:04:20.415 EAL: Detected lcore 26 as core 2 on socket 0
00:04:20.415 EAL: Detected lcore 27 as core 3 on socket 0
00:04:20.415 EAL: Detected lcore 28 as core 4 on socket 0
00:04:20.415 EAL: Detected lcore 29 as core 5 on socket 0
00:04:20.415 EAL: Detected lcore 30 as core 8 on socket 0
00:04:20.415 EAL: Detected lcore 31 as core 9 on socket 0
00:04:20.415 EAL: Detected lcore 32 as core 10 on socket 0
00:04:20.415 EAL: Detected lcore 33 as core 11 on socket 0
00:04:20.415 EAL: Detected lcore 34 as core 12 on socket 0
00:04:20.415 EAL: Detected lcore 35 as core 13 on socket 0
00:04:20.415 EAL: Detected lcore 36 as core 0 on socket 1
00:04:20.415 EAL: Detected lcore 37 as core 1 on socket 1
00:04:20.415 EAL: Detected lcore 38 as core 2 on socket 1
00:04:20.415 EAL: Detected lcore 39 as core 3 on socket 1
00:04:20.415 EAL: Detected lcore 40 as core 4 on socket 1
00:04:20.415 EAL: Detected lcore 41 as core 5 on socket 1
00:04:20.415 EAL: Detected lcore 42 as core 8 on socket 1
00:04:20.415 EAL: Detected lcore 43 as core 9 on socket 1
00:04:20.415 EAL: Detected lcore 44 as core 10 on socket 1
00:04:20.415 EAL: Detected lcore 45 as core 11 on socket 1
00:04:20.415 EAL: Detected lcore 46 as core 12 on socket 1
00:04:20.415 EAL: Detected lcore 47 as core 13 on socket 1
00:04:20.415 EAL: Maximum logical cores by configuration: 128
00:04:20.415 EAL: Detected CPU lcores: 48
00:04:20.415 EAL: Detected NUMA nodes: 2
00:04:20.415 EAL: Checking presence of .so 'librte_eal.so.24.0'
00:04:20.415 EAL: Detected shared linkage of DPDK
00:04:20.415 EAL: No shared files mode enabled, IPC will be disabled
00:04:20.415 EAL: Bus pci wants IOVA as 'DC'
00:04:20.415 EAL: Buses did not request a specific IOVA mode.
00:04:20.415 EAL: IOMMU is available, selecting IOVA as VA mode.
00:04:20.415 EAL: Selected IOVA mode 'VA'
00:04:20.415 EAL: No free 2048 kB hugepages reported on node 1
00:04:20.415 EAL: Probing VFIO support...
00:04:20.415 EAL: IOMMU type 1 (Type 1) is supported
00:04:20.415 EAL: IOMMU type 7 (sPAPR) is not supported
00:04:20.415 EAL: IOMMU type 8 (No-IOMMU) is not supported
00:04:20.415 EAL: VFIO support initialized
00:04:20.415 EAL: Ask a virtual area of 0x2e000 bytes
00:04:20.415 EAL: Virtual area found at 0x200000000000 (size = 0x2e000)
00:04:20.415 EAL: Setting up physically contiguous memory...
00:04:20.415 EAL: Setting maximum number of open files to 524288
00:04:20.415 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
00:04:20.415 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152
00:04:20.415 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
00:04:20.415 EAL: Ask a virtual area of 0x61000 bytes
00:04:20.415 EAL: Virtual area found at 0x20000002e000 (size = 0x61000)
00:04:20.415 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:20.415 EAL: Ask a virtual area of 0x400000000 bytes
00:04:20.415 EAL: Virtual area found at 0x200000200000 (size = 0x400000000)
00:04:20.415 EAL: VA reserved for memseg list at 0x200000200000, size 400000000
00:04:20.415 EAL: Ask a virtual area of 0x61000 bytes
00:04:20.415 EAL: Virtual area found at 0x200400200000 (size = 0x61000)
00:04:20.415 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:20.415 EAL: Ask a virtual area of 0x400000000 bytes
00:04:20.415 EAL: Virtual area found at 0x200400400000 (size = 0x400000000)
00:04:20.415 EAL: VA reserved for memseg list at 0x200400400000, size 400000000
00:04:20.415 EAL: Ask a virtual area of 0x61000 bytes
00:04:20.415 EAL: Virtual area found at 0x200800400000 (size = 0x61000)
00:04:20.415 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:20.415 EAL: Ask a virtual area of 0x400000000 bytes
00:04:20.415 EAL: Virtual area found at 0x200800600000 (size = 0x400000000)
00:04:20.415 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:20.415 EAL: Ask a virtual area of 0x61000 bytes 00:04:20.415 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:20.415 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:20.415 EAL: Ask a virtual area of 0x400000000 bytes 00:04:20.415 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:20.415 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:20.415 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:20.415 EAL: Ask a virtual area of 0x61000 bytes 00:04:20.415 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:20.415 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:20.415 EAL: Ask a virtual area of 0x400000000 bytes 00:04:20.415 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:20.415 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:20.415 EAL: Ask a virtual area of 0x61000 bytes 00:04:20.415 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:20.415 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:20.415 EAL: Ask a virtual area of 0x400000000 bytes 00:04:20.415 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:20.415 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:20.415 EAL: Ask a virtual area of 0x61000 bytes 00:04:20.415 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:20.415 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:20.415 EAL: Ask a virtual area of 0x400000000 bytes 00:04:20.415 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:20.415 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:20.415 EAL: Ask a virtual area of 0x61000 bytes 00:04:20.415 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:20.415 EAL: Memseg list allocated at socket 1, page 
size 0x800kB 00:04:20.415 EAL: Ask a virtual area of 0x400000000 bytes 00:04:20.415 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:20.415 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:20.415 EAL: Hugepages will be freed exactly as allocated. 00:04:20.415 EAL: No shared files mode enabled, IPC is disabled 00:04:20.415 EAL: No shared files mode enabled, IPC is disabled 00:04:20.415 EAL: TSC frequency is ~2700000 KHz 00:04:20.415 EAL: Main lcore 0 is ready (tid=7f2f2e694a00;cpuset=[0]) 00:04:20.415 EAL: Trying to obtain current memory policy. 00:04:20.415 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.415 EAL: Restoring previous memory policy: 0 00:04:20.415 EAL: request: mp_malloc_sync 00:04:20.415 EAL: No shared files mode enabled, IPC is disabled 00:04:20.415 EAL: Heap on socket 0 was expanded by 2MB 00:04:20.415 EAL: No shared files mode enabled, IPC is disabled 00:04:20.415 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:20.416 EAL: Mem event callback 'spdk:(nil)' registered 00:04:20.416 00:04:20.416 00:04:20.416 CUnit - A unit testing framework for C - Version 2.1-3 00:04:20.416 http://cunit.sourceforge.net/ 00:04:20.416 00:04:20.416 00:04:20.416 Suite: components_suite 00:04:20.416 Test: vtophys_malloc_test ...passed 00:04:20.416 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:04:20.416 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.416 EAL: Restoring previous memory policy: 4 00:04:20.416 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.416 EAL: request: mp_malloc_sync 00:04:20.416 EAL: No shared files mode enabled, IPC is disabled 00:04:20.416 EAL: Heap on socket 0 was expanded by 4MB 00:04:20.416 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.416 EAL: request: mp_malloc_sync 00:04:20.416 EAL: No shared files mode enabled, IPC is disabled 00:04:20.416 EAL: Heap on socket 0 was shrunk by 4MB 00:04:20.416 EAL: Trying to obtain current memory policy. 00:04:20.416 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.416 EAL: Restoring previous memory policy: 4 00:04:20.416 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.416 EAL: request: mp_malloc_sync 00:04:20.416 EAL: No shared files mode enabled, IPC is disabled 00:04:20.416 EAL: Heap on socket 0 was expanded by 6MB 00:04:20.416 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.416 EAL: request: mp_malloc_sync 00:04:20.416 EAL: No shared files mode enabled, IPC is disabled 00:04:20.416 EAL: Heap on socket 0 was shrunk by 6MB 00:04:20.416 EAL: Trying to obtain current memory policy. 00:04:20.416 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.416 EAL: Restoring previous memory policy: 4 00:04:20.416 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.416 EAL: request: mp_malloc_sync 00:04:20.416 EAL: No shared files mode enabled, IPC is disabled 00:04:20.416 EAL: Heap on socket 0 was expanded by 10MB 00:04:20.416 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.416 EAL: request: mp_malloc_sync 00:04:20.416 EAL: No shared files mode enabled, IPC is disabled 00:04:20.416 EAL: Heap on socket 0 was shrunk by 10MB 00:04:20.416 EAL: Trying to obtain current memory policy. 
00:04:20.416 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.416 EAL: Restoring previous memory policy: 4 00:04:20.416 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.416 EAL: request: mp_malloc_sync 00:04:20.416 EAL: No shared files mode enabled, IPC is disabled 00:04:20.416 EAL: Heap on socket 0 was expanded by 18MB 00:04:20.416 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.675 EAL: request: mp_malloc_sync 00:04:20.675 EAL: No shared files mode enabled, IPC is disabled 00:04:20.675 EAL: Heap on socket 0 was shrunk by 18MB 00:04:20.675 EAL: Trying to obtain current memory policy. 00:04:20.675 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.675 EAL: Restoring previous memory policy: 4 00:04:20.675 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.675 EAL: request: mp_malloc_sync 00:04:20.675 EAL: No shared files mode enabled, IPC is disabled 00:04:20.675 EAL: Heap on socket 0 was expanded by 34MB 00:04:20.675 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.675 EAL: request: mp_malloc_sync 00:04:20.675 EAL: No shared files mode enabled, IPC is disabled 00:04:20.675 EAL: Heap on socket 0 was shrunk by 34MB 00:04:20.675 EAL: Trying to obtain current memory policy. 00:04:20.675 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.675 EAL: Restoring previous memory policy: 4 00:04:20.675 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.675 EAL: request: mp_malloc_sync 00:04:20.675 EAL: No shared files mode enabled, IPC is disabled 00:04:20.675 EAL: Heap on socket 0 was expanded by 66MB 00:04:20.675 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.675 EAL: request: mp_malloc_sync 00:04:20.675 EAL: No shared files mode enabled, IPC is disabled 00:04:20.675 EAL: Heap on socket 0 was shrunk by 66MB 00:04:20.675 EAL: Trying to obtain current memory policy. 
00:04:20.675 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.675 EAL: Restoring previous memory policy: 4 00:04:20.675 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.675 EAL: request: mp_malloc_sync 00:04:20.675 EAL: No shared files mode enabled, IPC is disabled 00:04:20.675 EAL: Heap on socket 0 was expanded by 130MB 00:04:20.675 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.675 EAL: request: mp_malloc_sync 00:04:20.675 EAL: No shared files mode enabled, IPC is disabled 00:04:20.675 EAL: Heap on socket 0 was shrunk by 130MB 00:04:20.675 EAL: Trying to obtain current memory policy. 00:04:20.675 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.675 EAL: Restoring previous memory policy: 4 00:04:20.675 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.675 EAL: request: mp_malloc_sync 00:04:20.675 EAL: No shared files mode enabled, IPC is disabled 00:04:20.675 EAL: Heap on socket 0 was expanded by 258MB 00:04:20.675 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.935 EAL: request: mp_malloc_sync 00:04:20.935 EAL: No shared files mode enabled, IPC is disabled 00:04:20.935 EAL: Heap on socket 0 was shrunk by 258MB 00:04:20.935 EAL: Trying to obtain current memory policy. 00:04:20.935 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.935 EAL: Restoring previous memory policy: 4 00:04:20.935 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.935 EAL: request: mp_malloc_sync 00:04:20.935 EAL: No shared files mode enabled, IPC is disabled 00:04:20.935 EAL: Heap on socket 0 was expanded by 514MB 00:04:21.195 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.195 EAL: request: mp_malloc_sync 00:04:21.195 EAL: No shared files mode enabled, IPC is disabled 00:04:21.195 EAL: Heap on socket 0 was shrunk by 514MB 00:04:21.195 EAL: Trying to obtain current memory policy. 
00:04:21.195 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:21.456 EAL: Restoring previous memory policy: 4 00:04:21.456 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.456 EAL: request: mp_malloc_sync 00:04:21.456 EAL: No shared files mode enabled, IPC is disabled 00:04:21.456 EAL: Heap on socket 0 was expanded by 1026MB 00:04:21.715 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.974 EAL: request: mp_malloc_sync 00:04:21.974 EAL: No shared files mode enabled, IPC is disabled 00:04:21.974 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:21.974 passed 00:04:21.974 00:04:21.974 Run Summary: Type Total Ran Passed Failed Inactive 00:04:21.974 suites 1 1 n/a 0 0 00:04:21.974 tests 2 2 2 0 0 00:04:21.974 asserts 497 497 497 0 n/a 00:04:21.974 00:04:21.974 Elapsed time = 1.327 seconds 00:04:21.974 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.974 EAL: request: mp_malloc_sync 00:04:21.974 EAL: No shared files mode enabled, IPC is disabled 00:04:21.974 EAL: Heap on socket 0 was shrunk by 2MB 00:04:21.974 EAL: No shared files mode enabled, IPC is disabled 00:04:21.974 EAL: No shared files mode enabled, IPC is disabled 00:04:21.974 EAL: No shared files mode enabled, IPC is disabled 00:04:21.974 00:04:21.974 real 0m1.452s 00:04:21.974 user 0m0.820s 00:04:21.974 sys 0m0.591s 00:04:21.974 03:04:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:21.974 03:04:56 -- common/autotest_common.sh@10 -- # set +x 00:04:21.974 ************************************ 00:04:21.974 END TEST env_vtophys 00:04:21.974 ************************************ 00:04:21.974 03:04:56 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:21.974 03:04:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:21.974 03:04:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:21.974 03:04:56 -- common/autotest_common.sh@10 -- # set +x 00:04:21.974 ************************************ 00:04:21.974 
START TEST env_pci 00:04:21.974 ************************************ 00:04:21.974 03:04:56 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:21.974 00:04:21.974 00:04:21.974 CUnit - A unit testing framework for C - Version 2.1-3 00:04:21.974 http://cunit.sourceforge.net/ 00:04:21.974 00:04:21.974 00:04:21.974 Suite: pci 00:04:21.974 Test: pci_hook ...[2024-04-25 03:04:56.407786] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1370882 has claimed it 00:04:21.974 EAL: Cannot find device (10000:00:01.0) 00:04:21.974 EAL: Failed to attach device on primary process 00:04:21.974 passed 00:04:21.974 00:04:21.974 Run Summary: Type Total Ran Passed Failed Inactive 00:04:21.974 suites 1 1 n/a 0 0 00:04:21.974 tests 1 1 1 0 0 00:04:21.975 asserts 25 25 25 0 n/a 00:04:21.975 00:04:21.975 Elapsed time = 0.022 seconds 00:04:21.975 00:04:21.975 real 0m0.034s 00:04:21.975 user 0m0.011s 00:04:21.975 sys 0m0.023s 00:04:21.975 03:04:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:21.975 03:04:56 -- common/autotest_common.sh@10 -- # set +x 00:04:21.975 ************************************ 00:04:21.975 END TEST env_pci 00:04:21.975 ************************************ 00:04:21.975 03:04:56 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:21.975 03:04:56 -- env/env.sh@15 -- # uname 00:04:21.975 03:04:56 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:21.975 03:04:56 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:21.975 03:04:56 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:21.975 03:04:56 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:04:21.975 03:04:56 -- common/autotest_common.sh@1093 -- # 
xtrace_disable 00:04:21.975 03:04:56 -- common/autotest_common.sh@10 -- # set +x 00:04:22.233 ************************************ 00:04:22.233 START TEST env_dpdk_post_init 00:04:22.233 ************************************ 00:04:22.233 03:04:56 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:22.233 EAL: Detected CPU lcores: 48 00:04:22.233 EAL: Detected NUMA nodes: 2 00:04:22.233 EAL: Detected shared linkage of DPDK 00:04:22.233 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:22.233 EAL: Selected IOVA mode 'VA' 00:04:22.233 EAL: No free 2048 kB hugepages reported on node 1 00:04:22.233 EAL: VFIO support initialized 00:04:22.233 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:22.233 EAL: Using IOMMU type 1 (Type 1) 00:04:22.233 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:04:22.233 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:04:22.233 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:04:22.233 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:04:22.233 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:04:22.233 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:04:22.493 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:04:22.493 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:04:22.493 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:04:22.493 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:04:22.493 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:04:22.493 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:04:22.493 EAL: Probe PCI 
driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:04:22.493 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:04:22.493 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:04:22.493 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:04:23.434 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:04:26.723 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:04:26.723 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:04:26.723 Starting DPDK initialization... 00:04:26.723 Starting SPDK post initialization... 00:04:26.723 SPDK NVMe probe 00:04:26.723 Attaching to 0000:88:00.0 00:04:26.723 Attached to 0000:88:00.0 00:04:26.723 Cleaning up... 00:04:26.723 00:04:26.723 real 0m4.391s 00:04:26.723 user 0m3.249s 00:04:26.723 sys 0m0.199s 00:04:26.723 03:05:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:26.723 03:05:00 -- common/autotest_common.sh@10 -- # set +x 00:04:26.723 ************************************ 00:04:26.723 END TEST env_dpdk_post_init 00:04:26.723 ************************************ 00:04:26.723 03:05:00 -- env/env.sh@26 -- # uname 00:04:26.723 03:05:00 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:26.723 03:05:00 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:26.723 03:05:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:26.723 03:05:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:26.723 03:05:00 -- common/autotest_common.sh@10 -- # set +x 00:04:26.723 ************************************ 00:04:26.723 START TEST env_mem_callbacks 00:04:26.723 ************************************ 00:04:26.723 03:05:01 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:26.723 EAL: Detected CPU lcores: 48 
00:04:26.723 EAL: Detected NUMA nodes: 2 00:04:26.723 EAL: Detected shared linkage of DPDK 00:04:26.723 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:26.723 EAL: Selected IOVA mode 'VA' 00:04:26.723 EAL: No free 2048 kB hugepages reported on node 1 00:04:26.723 EAL: VFIO support initialized 00:04:26.723 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:26.723 00:04:26.723 00:04:26.723 CUnit - A unit testing framework for C - Version 2.1-3 00:04:26.723 http://cunit.sourceforge.net/ 00:04:26.723 00:04:26.723 00:04:26.723 Suite: memory 00:04:26.723 Test: test ... 00:04:26.723 register 0x200000200000 2097152 00:04:26.723 malloc 3145728 00:04:26.723 register 0x200000400000 4194304 00:04:26.723 buf 0x200000500000 len 3145728 PASSED 00:04:26.723 malloc 64 00:04:26.723 buf 0x2000004fff40 len 64 PASSED 00:04:26.723 malloc 4194304 00:04:26.723 register 0x200000800000 6291456 00:04:26.723 buf 0x200000a00000 len 4194304 PASSED 00:04:26.723 free 0x200000500000 3145728 00:04:26.723 free 0x2000004fff40 64 00:04:26.723 unregister 0x200000400000 4194304 PASSED 00:04:26.723 free 0x200000a00000 4194304 00:04:26.723 unregister 0x200000800000 6291456 PASSED 00:04:26.723 malloc 8388608 00:04:26.723 register 0x200000400000 10485760 00:04:26.723 buf 0x200000600000 len 8388608 PASSED 00:04:26.723 free 0x200000600000 8388608 00:04:26.723 unregister 0x200000400000 10485760 PASSED 00:04:26.723 passed 00:04:26.723 00:04:26.723 Run Summary: Type Total Ran Passed Failed Inactive 00:04:26.723 suites 1 1 n/a 0 0 00:04:26.723 tests 1 1 1 0 0 00:04:26.723 asserts 15 15 15 0 n/a 00:04:26.723 00:04:26.723 Elapsed time = 0.005 seconds 00:04:26.723 00:04:26.723 real 0m0.049s 00:04:26.723 user 0m0.015s 00:04:26.723 sys 0m0.033s 00:04:26.723 03:05:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:26.723 03:05:01 -- common/autotest_common.sh@10 -- # set +x 00:04:26.723 ************************************ 00:04:26.723 END TEST env_mem_callbacks 00:04:26.723 
************************************ 00:04:26.723 00:04:26.723 real 0m6.747s 00:04:26.723 user 0m4.509s 00:04:26.723 sys 0m1.217s 00:04:26.723 03:05:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:26.723 03:05:01 -- common/autotest_common.sh@10 -- # set +x 00:04:26.723 ************************************ 00:04:26.723 END TEST env 00:04:26.723 ************************************ 00:04:26.723 03:05:01 -- spdk/autotest.sh@165 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:26.723 03:05:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:26.723 03:05:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:26.723 03:05:01 -- common/autotest_common.sh@10 -- # set +x 00:04:26.981 ************************************ 00:04:26.981 START TEST rpc 00:04:26.981 ************************************ 00:04:26.981 03:05:01 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:26.981 * Looking for test storage... 00:04:26.981 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:26.981 03:05:01 -- rpc/rpc.sh@65 -- # spdk_pid=1371563 00:04:26.981 03:05:01 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:26.981 03:05:01 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:26.981 03:05:01 -- rpc/rpc.sh@67 -- # waitforlisten 1371563 00:04:26.981 03:05:01 -- common/autotest_common.sh@817 -- # '[' -z 1371563 ']' 00:04:26.981 03:05:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:26.981 03:05:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:26.981 03:05:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:26.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:26.981 03:05:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:26.981 03:05:01 -- common/autotest_common.sh@10 -- # set +x 00:04:26.981 [2024-04-25 03:05:01.346179] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:04:26.981 [2024-04-25 03:05:01.346259] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1371563 ] 00:04:26.981 EAL: No free 2048 kB hugepages reported on node 1 00:04:26.981 [2024-04-25 03:05:01.403170] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.240 [2024-04-25 03:05:01.510369] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:27.240 [2024-04-25 03:05:01.510423] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1371563' to capture a snapshot of events at runtime. 00:04:27.240 [2024-04-25 03:05:01.510436] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:27.240 [2024-04-25 03:05:01.510447] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:27.240 [2024-04-25 03:05:01.510457] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1371563 for offline analysis/debug. 
00:04:27.240 [2024-04-25 03:05:01.510487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.500 03:05:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:27.500 03:05:01 -- common/autotest_common.sh@850 -- # return 0 00:04:27.500 03:05:01 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:27.500 03:05:01 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:27.500 03:05:01 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:27.500 03:05:01 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:27.500 03:05:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:27.500 03:05:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:27.500 03:05:01 -- common/autotest_common.sh@10 -- # set +x 00:04:27.500 ************************************ 00:04:27.500 START TEST rpc_integrity 00:04:27.500 ************************************ 00:04:27.500 03:05:01 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:04:27.500 03:05:01 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:27.500 03:05:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:27.500 03:05:01 -- common/autotest_common.sh@10 -- # set +x 00:04:27.500 03:05:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:27.500 03:05:01 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:27.500 03:05:01 -- rpc/rpc.sh@13 -- # jq length 00:04:27.500 03:05:01 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 
00:04:27.500 03:05:01 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:27.500 03:05:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:27.500 03:05:01 -- common/autotest_common.sh@10 -- # set +x 00:04:27.500 03:05:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:27.500 03:05:01 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:27.500 03:05:01 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:27.500 03:05:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:27.500 03:05:01 -- common/autotest_common.sh@10 -- # set +x 00:04:27.500 03:05:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:27.500 03:05:01 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:27.500 { 00:04:27.500 "name": "Malloc0", 00:04:27.500 "aliases": [ 00:04:27.500 "bd0a8109-f1c9-41a0-9795-6e4731190255" 00:04:27.500 ], 00:04:27.500 "product_name": "Malloc disk", 00:04:27.500 "block_size": 512, 00:04:27.500 "num_blocks": 16384, 00:04:27.500 "uuid": "bd0a8109-f1c9-41a0-9795-6e4731190255", 00:04:27.500 "assigned_rate_limits": { 00:04:27.500 "rw_ios_per_sec": 0, 00:04:27.500 "rw_mbytes_per_sec": 0, 00:04:27.500 "r_mbytes_per_sec": 0, 00:04:27.500 "w_mbytes_per_sec": 0 00:04:27.500 }, 00:04:27.500 "claimed": false, 00:04:27.500 "zoned": false, 00:04:27.500 "supported_io_types": { 00:04:27.500 "read": true, 00:04:27.500 "write": true, 00:04:27.500 "unmap": true, 00:04:27.500 "write_zeroes": true, 00:04:27.500 "flush": true, 00:04:27.500 "reset": true, 00:04:27.500 "compare": false, 00:04:27.500 "compare_and_write": false, 00:04:27.500 "abort": true, 00:04:27.500 "nvme_admin": false, 00:04:27.500 "nvme_io": false 00:04:27.500 }, 00:04:27.500 "memory_domains": [ 00:04:27.500 { 00:04:27.500 "dma_device_id": "system", 00:04:27.500 "dma_device_type": 1 00:04:27.500 }, 00:04:27.500 { 00:04:27.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:27.500 "dma_device_type": 2 00:04:27.500 } 00:04:27.500 ], 00:04:27.500 "driver_specific": {} 00:04:27.500 } 00:04:27.500 ]' 00:04:27.500 
03:05:01 -- rpc/rpc.sh@17 -- # jq length 00:04:27.500 03:05:01 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:27.500 03:05:01 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:27.500 03:05:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:27.500 03:05:01 -- common/autotest_common.sh@10 -- # set +x 00:04:27.500 [2024-04-25 03:05:01.980898] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:27.500 [2024-04-25 03:05:01.980963] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:27.500 [2024-04-25 03:05:01.980982] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x17a18e0 00:04:27.500 [2024-04-25 03:05:01.981009] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:27.500 [2024-04-25 03:05:01.982521] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:27.500 [2024-04-25 03:05:01.982549] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:27.500 Passthru0 00:04:27.500 03:05:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:27.500 03:05:01 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:27.500 03:05:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:27.500 03:05:01 -- common/autotest_common.sh@10 -- # set +x 00:04:27.760 03:05:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:27.760 03:05:02 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:27.760 { 00:04:27.760 "name": "Malloc0", 00:04:27.760 "aliases": [ 00:04:27.760 "bd0a8109-f1c9-41a0-9795-6e4731190255" 00:04:27.760 ], 00:04:27.760 "product_name": "Malloc disk", 00:04:27.760 "block_size": 512, 00:04:27.760 "num_blocks": 16384, 00:04:27.760 "uuid": "bd0a8109-f1c9-41a0-9795-6e4731190255", 00:04:27.760 "assigned_rate_limits": { 00:04:27.760 "rw_ios_per_sec": 0, 00:04:27.760 "rw_mbytes_per_sec": 0, 00:04:27.760 "r_mbytes_per_sec": 0, 00:04:27.760 "w_mbytes_per_sec": 0 00:04:27.760 
}, 00:04:27.760 "claimed": true, 00:04:27.760 "claim_type": "exclusive_write", 00:04:27.760 "zoned": false, 00:04:27.760 "supported_io_types": { 00:04:27.760 "read": true, 00:04:27.760 "write": true, 00:04:27.760 "unmap": true, 00:04:27.760 "write_zeroes": true, 00:04:27.760 "flush": true, 00:04:27.760 "reset": true, 00:04:27.760 "compare": false, 00:04:27.760 "compare_and_write": false, 00:04:27.760 "abort": true, 00:04:27.760 "nvme_admin": false, 00:04:27.760 "nvme_io": false 00:04:27.760 }, 00:04:27.760 "memory_domains": [ 00:04:27.760 { 00:04:27.760 "dma_device_id": "system", 00:04:27.760 "dma_device_type": 1 00:04:27.760 }, 00:04:27.760 { 00:04:27.760 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:27.760 "dma_device_type": 2 00:04:27.760 } 00:04:27.760 ], 00:04:27.760 "driver_specific": {} 00:04:27.760 }, 00:04:27.760 { 00:04:27.760 "name": "Passthru0", 00:04:27.760 "aliases": [ 00:04:27.760 "ebb0d80b-98bc-5b14-afa5-7f396afa2ac0" 00:04:27.760 ], 00:04:27.760 "product_name": "passthru", 00:04:27.760 "block_size": 512, 00:04:27.760 "num_blocks": 16384, 00:04:27.760 "uuid": "ebb0d80b-98bc-5b14-afa5-7f396afa2ac0", 00:04:27.760 "assigned_rate_limits": { 00:04:27.760 "rw_ios_per_sec": 0, 00:04:27.760 "rw_mbytes_per_sec": 0, 00:04:27.760 "r_mbytes_per_sec": 0, 00:04:27.760 "w_mbytes_per_sec": 0 00:04:27.760 }, 00:04:27.760 "claimed": false, 00:04:27.760 "zoned": false, 00:04:27.760 "supported_io_types": { 00:04:27.760 "read": true, 00:04:27.760 "write": true, 00:04:27.760 "unmap": true, 00:04:27.760 "write_zeroes": true, 00:04:27.760 "flush": true, 00:04:27.760 "reset": true, 00:04:27.760 "compare": false, 00:04:27.760 "compare_and_write": false, 00:04:27.760 "abort": true, 00:04:27.760 "nvme_admin": false, 00:04:27.760 "nvme_io": false 00:04:27.760 }, 00:04:27.760 "memory_domains": [ 00:04:27.760 { 00:04:27.760 "dma_device_id": "system", 00:04:27.760 "dma_device_type": 1 00:04:27.760 }, 00:04:27.760 { 00:04:27.760 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:04:27.760 "dma_device_type": 2 00:04:27.760 } 00:04:27.760 ], 00:04:27.760 "driver_specific": { 00:04:27.760 "passthru": { 00:04:27.760 "name": "Passthru0", 00:04:27.760 "base_bdev_name": "Malloc0" 00:04:27.760 } 00:04:27.760 } 00:04:27.760 } 00:04:27.760 ]' 00:04:27.760 03:05:02 -- rpc/rpc.sh@21 -- # jq length 00:04:27.760 03:05:02 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:27.760 03:05:02 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:27.760 03:05:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:27.760 03:05:02 -- common/autotest_common.sh@10 -- # set +x 00:04:27.760 03:05:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:27.760 03:05:02 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:27.760 03:05:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:27.760 03:05:02 -- common/autotest_common.sh@10 -- # set +x 00:04:27.760 03:05:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:27.760 03:05:02 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:27.760 03:05:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:27.760 03:05:02 -- common/autotest_common.sh@10 -- # set +x 00:04:27.760 03:05:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:27.760 03:05:02 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:27.760 03:05:02 -- rpc/rpc.sh@26 -- # jq length 00:04:27.760 03:05:02 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:27.760 00:04:27.760 real 0m0.229s 00:04:27.760 user 0m0.158s 00:04:27.760 sys 0m0.017s 00:04:27.760 03:05:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:27.760 03:05:02 -- common/autotest_common.sh@10 -- # set +x 00:04:27.760 ************************************ 00:04:27.760 END TEST rpc_integrity 00:04:27.760 ************************************ 00:04:27.760 03:05:02 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:27.760 03:05:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:27.760 03:05:02 -- common/autotest_common.sh@1093 -- # 
xtrace_disable 00:04:27.760 03:05:02 -- common/autotest_common.sh@10 -- # set +x 00:04:27.761 ************************************ 00:04:27.761 START TEST rpc_plugins 00:04:27.761 ************************************ 00:04:27.761 03:05:02 -- common/autotest_common.sh@1111 -- # rpc_plugins 00:04:27.761 03:05:02 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:27.761 03:05:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:27.761 03:05:02 -- common/autotest_common.sh@10 -- # set +x 00:04:27.761 03:05:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:27.761 03:05:02 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:27.761 03:05:02 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:27.761 03:05:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:27.761 03:05:02 -- common/autotest_common.sh@10 -- # set +x 00:04:27.761 03:05:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:27.761 03:05:02 -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:27.761 { 00:04:27.761 "name": "Malloc1", 00:04:27.761 "aliases": [ 00:04:27.761 "20dcd0c3-6628-466e-8f25-5c906160fd8c" 00:04:27.761 ], 00:04:27.761 "product_name": "Malloc disk", 00:04:27.761 "block_size": 4096, 00:04:27.761 "num_blocks": 256, 00:04:27.761 "uuid": "20dcd0c3-6628-466e-8f25-5c906160fd8c", 00:04:27.761 "assigned_rate_limits": { 00:04:27.761 "rw_ios_per_sec": 0, 00:04:27.761 "rw_mbytes_per_sec": 0, 00:04:27.761 "r_mbytes_per_sec": 0, 00:04:27.761 "w_mbytes_per_sec": 0 00:04:27.761 }, 00:04:27.761 "claimed": false, 00:04:27.761 "zoned": false, 00:04:27.761 "supported_io_types": { 00:04:27.761 "read": true, 00:04:27.761 "write": true, 00:04:27.761 "unmap": true, 00:04:27.761 "write_zeroes": true, 00:04:27.761 "flush": true, 00:04:27.761 "reset": true, 00:04:27.761 "compare": false, 00:04:27.761 "compare_and_write": false, 00:04:27.761 "abort": true, 00:04:27.761 "nvme_admin": false, 00:04:27.761 "nvme_io": false 00:04:27.761 }, 00:04:27.761 "memory_domains": [ 00:04:27.761 { 
00:04:27.761 "dma_device_id": "system", 00:04:27.761 "dma_device_type": 1 00:04:27.761 }, 00:04:27.761 { 00:04:27.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:27.761 "dma_device_type": 2 00:04:27.761 } 00:04:27.761 ], 00:04:27.761 "driver_specific": {} 00:04:27.761 } 00:04:27.761 ]' 00:04:27.761 03:05:02 -- rpc/rpc.sh@32 -- # jq length 00:04:28.020 03:05:02 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:28.020 03:05:02 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:28.020 03:05:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:28.020 03:05:02 -- common/autotest_common.sh@10 -- # set +x 00:04:28.020 03:05:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:28.020 03:05:02 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:28.020 03:05:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:28.020 03:05:02 -- common/autotest_common.sh@10 -- # set +x 00:04:28.020 03:05:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:28.020 03:05:02 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:28.020 03:05:02 -- rpc/rpc.sh@36 -- # jq length 00:04:28.020 03:05:02 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:28.020 00:04:28.020 real 0m0.119s 00:04:28.020 user 0m0.080s 00:04:28.020 sys 0m0.009s 00:04:28.020 03:05:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:28.020 03:05:02 -- common/autotest_common.sh@10 -- # set +x 00:04:28.020 ************************************ 00:04:28.020 END TEST rpc_plugins 00:04:28.020 ************************************ 00:04:28.020 03:05:02 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:28.020 03:05:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:28.020 03:05:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:28.020 03:05:02 -- common/autotest_common.sh@10 -- # set +x 00:04:28.020 ************************************ 00:04:28.020 START TEST rpc_trace_cmd_test 00:04:28.020 ************************************ 00:04:28.020 
03:05:02 -- common/autotest_common.sh@1111 -- # rpc_trace_cmd_test 00:04:28.020 03:05:02 -- rpc/rpc.sh@40 -- # local info 00:04:28.020 03:05:02 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:28.020 03:05:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:28.020 03:05:02 -- common/autotest_common.sh@10 -- # set +x 00:04:28.020 03:05:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:28.020 03:05:02 -- rpc/rpc.sh@42 -- # info='{ 00:04:28.020 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1371563", 00:04:28.020 "tpoint_group_mask": "0x8", 00:04:28.020 "iscsi_conn": { 00:04:28.020 "mask": "0x2", 00:04:28.020 "tpoint_mask": "0x0" 00:04:28.020 }, 00:04:28.020 "scsi": { 00:04:28.020 "mask": "0x4", 00:04:28.020 "tpoint_mask": "0x0" 00:04:28.020 }, 00:04:28.020 "bdev": { 00:04:28.020 "mask": "0x8", 00:04:28.020 "tpoint_mask": "0xffffffffffffffff" 00:04:28.020 }, 00:04:28.020 "nvmf_rdma": { 00:04:28.020 "mask": "0x10", 00:04:28.020 "tpoint_mask": "0x0" 00:04:28.020 }, 00:04:28.020 "nvmf_tcp": { 00:04:28.020 "mask": "0x20", 00:04:28.020 "tpoint_mask": "0x0" 00:04:28.020 }, 00:04:28.020 "ftl": { 00:04:28.020 "mask": "0x40", 00:04:28.020 "tpoint_mask": "0x0" 00:04:28.020 }, 00:04:28.020 "blobfs": { 00:04:28.020 "mask": "0x80", 00:04:28.020 "tpoint_mask": "0x0" 00:04:28.020 }, 00:04:28.020 "dsa": { 00:04:28.020 "mask": "0x200", 00:04:28.020 "tpoint_mask": "0x0" 00:04:28.020 }, 00:04:28.020 "thread": { 00:04:28.020 "mask": "0x400", 00:04:28.020 "tpoint_mask": "0x0" 00:04:28.020 }, 00:04:28.020 "nvme_pcie": { 00:04:28.020 "mask": "0x800", 00:04:28.020 "tpoint_mask": "0x0" 00:04:28.020 }, 00:04:28.020 "iaa": { 00:04:28.020 "mask": "0x1000", 00:04:28.020 "tpoint_mask": "0x0" 00:04:28.020 }, 00:04:28.020 "nvme_tcp": { 00:04:28.020 "mask": "0x2000", 00:04:28.020 "tpoint_mask": "0x0" 00:04:28.020 }, 00:04:28.020 "bdev_nvme": { 00:04:28.020 "mask": "0x4000", 00:04:28.020 "tpoint_mask": "0x0" 00:04:28.020 }, 00:04:28.020 "sock": { 00:04:28.020 "mask": "0x8000", 
00:04:28.020 "tpoint_mask": "0x0" 00:04:28.020 } 00:04:28.020 }' 00:04:28.020 03:05:02 -- rpc/rpc.sh@43 -- # jq length 00:04:28.020 03:05:02 -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:28.020 03:05:02 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:28.279 03:05:02 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:28.279 03:05:02 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:28.279 03:05:02 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:28.279 03:05:02 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:28.279 03:05:02 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:28.279 03:05:02 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:28.279 03:05:02 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:28.279 00:04:28.279 real 0m0.194s 00:04:28.279 user 0m0.170s 00:04:28.279 sys 0m0.017s 00:04:28.279 03:05:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:28.279 03:05:02 -- common/autotest_common.sh@10 -- # set +x 00:04:28.279 ************************************ 00:04:28.279 END TEST rpc_trace_cmd_test 00:04:28.279 ************************************ 00:04:28.279 03:05:02 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:28.279 03:05:02 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:28.279 03:05:02 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:28.279 03:05:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:28.279 03:05:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:28.279 03:05:02 -- common/autotest_common.sh@10 -- # set +x 00:04:28.279 ************************************ 00:04:28.279 START TEST rpc_daemon_integrity 00:04:28.279 ************************************ 00:04:28.279 03:05:02 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:04:28.279 03:05:02 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:28.279 03:05:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:28.279 03:05:02 -- common/autotest_common.sh@10 -- # set +x 00:04:28.537 03:05:02 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:28.537 03:05:02 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:28.537 03:05:02 -- rpc/rpc.sh@13 -- # jq length 00:04:28.537 03:05:02 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:28.537 03:05:02 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:28.537 03:05:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:28.537 03:05:02 -- common/autotest_common.sh@10 -- # set +x 00:04:28.537 03:05:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:28.537 03:05:02 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:28.537 03:05:02 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:28.537 03:05:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:28.537 03:05:02 -- common/autotest_common.sh@10 -- # set +x 00:04:28.537 03:05:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:28.537 03:05:02 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:28.537 { 00:04:28.537 "name": "Malloc2", 00:04:28.537 "aliases": [ 00:04:28.537 "98e94764-d9c6-4674-bcf5-d3560b8b282d" 00:04:28.537 ], 00:04:28.537 "product_name": "Malloc disk", 00:04:28.537 "block_size": 512, 00:04:28.537 "num_blocks": 16384, 00:04:28.537 "uuid": "98e94764-d9c6-4674-bcf5-d3560b8b282d", 00:04:28.537 "assigned_rate_limits": { 00:04:28.537 "rw_ios_per_sec": 0, 00:04:28.537 "rw_mbytes_per_sec": 0, 00:04:28.537 "r_mbytes_per_sec": 0, 00:04:28.537 "w_mbytes_per_sec": 0 00:04:28.537 }, 00:04:28.537 "claimed": false, 00:04:28.537 "zoned": false, 00:04:28.537 "supported_io_types": { 00:04:28.537 "read": true, 00:04:28.537 "write": true, 00:04:28.537 "unmap": true, 00:04:28.537 "write_zeroes": true, 00:04:28.537 "flush": true, 00:04:28.537 "reset": true, 00:04:28.537 "compare": false, 00:04:28.537 "compare_and_write": false, 00:04:28.538 "abort": true, 00:04:28.538 "nvme_admin": false, 00:04:28.538 "nvme_io": false 00:04:28.538 }, 00:04:28.538 "memory_domains": [ 00:04:28.538 { 00:04:28.538 "dma_device_id": "system", 00:04:28.538 "dma_device_type": 1 00:04:28.538 }, 
00:04:28.538 { 00:04:28.538 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:28.538 "dma_device_type": 2 00:04:28.538 } 00:04:28.538 ], 00:04:28.538 "driver_specific": {} 00:04:28.538 } 00:04:28.538 ]' 00:04:28.538 03:05:02 -- rpc/rpc.sh@17 -- # jq length 00:04:28.538 03:05:02 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:28.538 03:05:02 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:28.538 03:05:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:28.538 03:05:02 -- common/autotest_common.sh@10 -- # set +x 00:04:28.538 [2024-04-25 03:05:02.880138] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:28.538 [2024-04-25 03:05:02.880184] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:28.538 [2024-04-25 03:05:02.880212] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x17991c0 00:04:28.538 [2024-04-25 03:05:02.880228] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:28.538 [2024-04-25 03:05:02.881595] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:28.538 [2024-04-25 03:05:02.881625] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:28.538 Passthru0 00:04:28.538 03:05:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:28.538 03:05:02 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:28.538 03:05:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:28.538 03:05:02 -- common/autotest_common.sh@10 -- # set +x 00:04:28.538 03:05:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:28.538 03:05:02 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:28.538 { 00:04:28.538 "name": "Malloc2", 00:04:28.538 "aliases": [ 00:04:28.538 "98e94764-d9c6-4674-bcf5-d3560b8b282d" 00:04:28.538 ], 00:04:28.538 "product_name": "Malloc disk", 00:04:28.538 "block_size": 512, 00:04:28.538 "num_blocks": 16384, 00:04:28.538 "uuid": 
"98e94764-d9c6-4674-bcf5-d3560b8b282d", 00:04:28.538 "assigned_rate_limits": { 00:04:28.538 "rw_ios_per_sec": 0, 00:04:28.538 "rw_mbytes_per_sec": 0, 00:04:28.538 "r_mbytes_per_sec": 0, 00:04:28.538 "w_mbytes_per_sec": 0 00:04:28.538 }, 00:04:28.538 "claimed": true, 00:04:28.538 "claim_type": "exclusive_write", 00:04:28.538 "zoned": false, 00:04:28.538 "supported_io_types": { 00:04:28.538 "read": true, 00:04:28.538 "write": true, 00:04:28.538 "unmap": true, 00:04:28.538 "write_zeroes": true, 00:04:28.538 "flush": true, 00:04:28.538 "reset": true, 00:04:28.538 "compare": false, 00:04:28.538 "compare_and_write": false, 00:04:28.538 "abort": true, 00:04:28.538 "nvme_admin": false, 00:04:28.538 "nvme_io": false 00:04:28.538 }, 00:04:28.538 "memory_domains": [ 00:04:28.538 { 00:04:28.538 "dma_device_id": "system", 00:04:28.538 "dma_device_type": 1 00:04:28.538 }, 00:04:28.538 { 00:04:28.538 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:28.538 "dma_device_type": 2 00:04:28.538 } 00:04:28.538 ], 00:04:28.538 "driver_specific": {} 00:04:28.538 }, 00:04:28.538 { 00:04:28.538 "name": "Passthru0", 00:04:28.538 "aliases": [ 00:04:28.538 "9668ab90-2633-54d4-8a53-0be6c86046f4" 00:04:28.538 ], 00:04:28.538 "product_name": "passthru", 00:04:28.538 "block_size": 512, 00:04:28.538 "num_blocks": 16384, 00:04:28.538 "uuid": "9668ab90-2633-54d4-8a53-0be6c86046f4", 00:04:28.538 "assigned_rate_limits": { 00:04:28.538 "rw_ios_per_sec": 0, 00:04:28.538 "rw_mbytes_per_sec": 0, 00:04:28.538 "r_mbytes_per_sec": 0, 00:04:28.538 "w_mbytes_per_sec": 0 00:04:28.538 }, 00:04:28.538 "claimed": false, 00:04:28.538 "zoned": false, 00:04:28.538 "supported_io_types": { 00:04:28.538 "read": true, 00:04:28.538 "write": true, 00:04:28.538 "unmap": true, 00:04:28.538 "write_zeroes": true, 00:04:28.538 "flush": true, 00:04:28.538 "reset": true, 00:04:28.538 "compare": false, 00:04:28.538 "compare_and_write": false, 00:04:28.538 "abort": true, 00:04:28.538 "nvme_admin": false, 00:04:28.538 "nvme_io": false 
00:04:28.538 }, 00:04:28.538 "memory_domains": [ 00:04:28.538 { 00:04:28.538 "dma_device_id": "system", 00:04:28.538 "dma_device_type": 1 00:04:28.538 }, 00:04:28.538 { 00:04:28.538 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:28.538 "dma_device_type": 2 00:04:28.538 } 00:04:28.538 ], 00:04:28.538 "driver_specific": { 00:04:28.538 "passthru": { 00:04:28.538 "name": "Passthru0", 00:04:28.538 "base_bdev_name": "Malloc2" 00:04:28.538 } 00:04:28.538 } 00:04:28.538 } 00:04:28.538 ]' 00:04:28.538 03:05:02 -- rpc/rpc.sh@21 -- # jq length 00:04:28.538 03:05:02 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:28.538 03:05:02 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:28.538 03:05:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:28.538 03:05:02 -- common/autotest_common.sh@10 -- # set +x 00:04:28.538 03:05:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:28.538 03:05:02 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:28.538 03:05:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:28.538 03:05:02 -- common/autotest_common.sh@10 -- # set +x 00:04:28.538 03:05:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:28.538 03:05:02 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:28.538 03:05:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:28.538 03:05:02 -- common/autotest_common.sh@10 -- # set +x 00:04:28.538 03:05:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:28.538 03:05:02 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:28.538 03:05:02 -- rpc/rpc.sh@26 -- # jq length 00:04:28.538 03:05:02 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:28.538 00:04:28.538 real 0m0.230s 00:04:28.538 user 0m0.150s 00:04:28.538 sys 0m0.020s 00:04:28.538 03:05:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:28.538 03:05:02 -- common/autotest_common.sh@10 -- # set +x 00:04:28.538 ************************************ 00:04:28.538 END TEST rpc_daemon_integrity 00:04:28.538 
************************************ 00:04:28.538 03:05:03 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:28.538 03:05:03 -- rpc/rpc.sh@84 -- # killprocess 1371563 00:04:28.538 03:05:03 -- common/autotest_common.sh@936 -- # '[' -z 1371563 ']' 00:04:28.538 03:05:03 -- common/autotest_common.sh@940 -- # kill -0 1371563 00:04:28.538 03:05:03 -- common/autotest_common.sh@941 -- # uname 00:04:28.538 03:05:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:28.538 03:05:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1371563 00:04:28.798 03:05:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:28.798 03:05:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:28.798 03:05:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1371563' 00:04:28.798 killing process with pid 1371563 00:04:28.798 03:05:03 -- common/autotest_common.sh@955 -- # kill 1371563 00:04:28.798 03:05:03 -- common/autotest_common.sh@960 -- # wait 1371563 00:04:29.056 00:04:29.056 real 0m2.262s 00:04:29.056 user 0m2.848s 00:04:29.056 sys 0m0.725s 00:04:29.056 03:05:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:29.056 03:05:03 -- common/autotest_common.sh@10 -- # set +x 00:04:29.056 ************************************ 00:04:29.056 END TEST rpc 00:04:29.056 ************************************ 00:04:29.056 03:05:03 -- spdk/autotest.sh@166 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:29.056 03:05:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:29.056 03:05:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:29.056 03:05:03 -- common/autotest_common.sh@10 -- # set +x 00:04:29.314 ************************************ 00:04:29.314 START TEST skip_rpc 00:04:29.314 ************************************ 00:04:29.314 03:05:03 -- common/autotest_common.sh@1111 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:29.314 * Looking for test storage... 00:04:29.314 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:29.314 03:05:03 -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:29.314 03:05:03 -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:29.314 03:05:03 -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:29.314 03:05:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:29.314 03:05:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:29.314 03:05:03 -- common/autotest_common.sh@10 -- # set +x 00:04:29.314 ************************************ 00:04:29.314 START TEST skip_rpc 00:04:29.314 ************************************ 00:04:29.314 03:05:03 -- common/autotest_common.sh@1111 -- # test_skip_rpc 00:04:29.314 03:05:03 -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1372052 00:04:29.314 03:05:03 -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:29.314 03:05:03 -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:29.314 03:05:03 -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:29.576 [2024-04-25 03:05:03.816210] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 
00:04:29.576 [2024-04-25 03:05:03.816278] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1372052 ] 00:04:29.576 EAL: No free 2048 kB hugepages reported on node 1 00:04:29.576 [2024-04-25 03:05:03.877295] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:29.576 [2024-04-25 03:05:03.991865] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.860 03:05:08 -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:34.860 03:05:08 -- common/autotest_common.sh@638 -- # local es=0 00:04:34.860 03:05:08 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:34.860 03:05:08 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:04:34.860 03:05:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:34.860 03:05:08 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:04:34.860 03:05:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:34.860 03:05:08 -- common/autotest_common.sh@641 -- # rpc_cmd spdk_get_version 00:04:34.860 03:05:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:34.860 03:05:08 -- common/autotest_common.sh@10 -- # set +x 00:04:34.860 03:05:08 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:04:34.860 03:05:08 -- common/autotest_common.sh@641 -- # es=1 00:04:34.860 03:05:08 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:34.860 03:05:08 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:04:34.860 03:05:08 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:34.860 03:05:08 -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:34.860 03:05:08 -- rpc/skip_rpc.sh@23 -- # killprocess 1372052 00:04:34.860 03:05:08 -- common/autotest_common.sh@936 -- # '[' -z 1372052 ']' 00:04:34.860 03:05:08 -- common/autotest_common.sh@940 -- # 
kill -0 1372052 00:04:34.860 03:05:08 -- common/autotest_common.sh@941 -- # uname 00:04:34.860 03:05:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:34.860 03:05:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1372052 00:04:34.860 03:05:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:34.860 03:05:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:34.860 03:05:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1372052' 00:04:34.860 killing process with pid 1372052 00:04:34.860 03:05:08 -- common/autotest_common.sh@955 -- # kill 1372052 00:04:34.860 03:05:08 -- common/autotest_common.sh@960 -- # wait 1372052 00:04:34.860 00:04:34.860 real 0m5.498s 00:04:34.860 user 0m5.174s 00:04:34.860 sys 0m0.322s 00:04:34.860 03:05:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:34.860 03:05:09 -- common/autotest_common.sh@10 -- # set +x 00:04:34.860 ************************************ 00:04:34.860 END TEST skip_rpc 00:04:34.860 ************************************ 00:04:34.860 03:05:09 -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:34.860 03:05:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:34.860 03:05:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:34.860 03:05:09 -- common/autotest_common.sh@10 -- # set +x 00:04:35.120 ************************************ 00:04:35.120 START TEST skip_rpc_with_json 00:04:35.120 ************************************ 00:04:35.120 03:05:09 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_json 00:04:35.120 03:05:09 -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:35.120 03:05:09 -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1372753 00:04:35.120 03:05:09 -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:35.120 03:05:09 -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 
00:04:35.120 03:05:09 -- rpc/skip_rpc.sh@31 -- # waitforlisten 1372753 00:04:35.120 03:05:09 -- common/autotest_common.sh@817 -- # '[' -z 1372753 ']' 00:04:35.120 03:05:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:35.120 03:05:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:35.120 03:05:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:35.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:35.120 03:05:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:35.120 03:05:09 -- common/autotest_common.sh@10 -- # set +x 00:04:35.120 [2024-04-25 03:05:09.436152] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:04:35.120 [2024-04-25 03:05:09.436233] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1372753 ] 00:04:35.120 EAL: No free 2048 kB hugepages reported on node 1 00:04:35.120 [2024-04-25 03:05:09.493129] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.120 [2024-04-25 03:05:09.601019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.378 03:05:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:35.378 03:05:09 -- common/autotest_common.sh@850 -- # return 0 00:04:35.378 03:05:09 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:35.378 03:05:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:35.378 03:05:09 -- common/autotest_common.sh@10 -- # set +x 00:04:35.378 [2024-04-25 03:05:09.861418] nvmf_rpc.c:2513:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:35.378 request: 00:04:35.378 { 00:04:35.378 "trtype": "tcp", 00:04:35.378 "method": "nvmf_get_transports", 
00:04:35.378 "req_id": 1 00:04:35.378 } 00:04:35.378 Got JSON-RPC error response 00:04:35.378 response: 00:04:35.378 { 00:04:35.378 "code": -19, 00:04:35.378 "message": "No such device" 00:04:35.378 } 00:04:35.378 03:05:09 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:04:35.378 03:05:09 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:35.378 03:05:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:35.378 03:05:09 -- common/autotest_common.sh@10 -- # set +x 00:04:35.378 [2024-04-25 03:05:09.869526] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:35.378 03:05:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:35.378 03:05:09 -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:35.378 03:05:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:35.378 03:05:09 -- common/autotest_common.sh@10 -- # set +x 00:04:35.639 03:05:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:35.639 03:05:10 -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:35.639 { 00:04:35.639 "subsystems": [ 00:04:35.639 { 00:04:35.639 "subsystem": "keyring", 00:04:35.639 "config": [] 00:04:35.639 }, 00:04:35.639 { 00:04:35.639 "subsystem": "iobuf", 00:04:35.639 "config": [ 00:04:35.639 { 00:04:35.639 "method": "iobuf_set_options", 00:04:35.639 "params": { 00:04:35.639 "small_pool_count": 8192, 00:04:35.639 "large_pool_count": 1024, 00:04:35.639 "small_bufsize": 8192, 00:04:35.639 "large_bufsize": 135168 00:04:35.639 } 00:04:35.639 } 00:04:35.639 ] 00:04:35.639 }, 00:04:35.639 { 00:04:35.639 "subsystem": "sock", 00:04:35.639 "config": [ 00:04:35.639 { 00:04:35.639 "method": "sock_impl_set_options", 00:04:35.639 "params": { 00:04:35.639 "impl_name": "posix", 00:04:35.639 "recv_buf_size": 2097152, 00:04:35.639 "send_buf_size": 2097152, 00:04:35.639 "enable_recv_pipe": true, 00:04:35.639 "enable_quickack": false, 00:04:35.639 "enable_placement_id": 0, 
00:04:35.639 "enable_zerocopy_send_server": true, 00:04:35.639 "enable_zerocopy_send_client": false, 00:04:35.639 "zerocopy_threshold": 0, 00:04:35.639 "tls_version": 0, 00:04:35.639 "enable_ktls": false 00:04:35.639 } 00:04:35.639 }, 00:04:35.639 { 00:04:35.639 "method": "sock_impl_set_options", 00:04:35.639 "params": { 00:04:35.639 "impl_name": "ssl", 00:04:35.639 "recv_buf_size": 4096, 00:04:35.639 "send_buf_size": 4096, 00:04:35.639 "enable_recv_pipe": true, 00:04:35.639 "enable_quickack": false, 00:04:35.639 "enable_placement_id": 0, 00:04:35.639 "enable_zerocopy_send_server": true, 00:04:35.639 "enable_zerocopy_send_client": false, 00:04:35.639 "zerocopy_threshold": 0, 00:04:35.639 "tls_version": 0, 00:04:35.639 "enable_ktls": false 00:04:35.639 } 00:04:35.639 } 00:04:35.639 ] 00:04:35.639 }, 00:04:35.639 { 00:04:35.639 "subsystem": "vmd", 00:04:35.639 "config": [] 00:04:35.639 }, 00:04:35.639 { 00:04:35.639 "subsystem": "accel", 00:04:35.639 "config": [ 00:04:35.639 { 00:04:35.639 "method": "accel_set_options", 00:04:35.639 "params": { 00:04:35.639 "small_cache_size": 128, 00:04:35.639 "large_cache_size": 16, 00:04:35.639 "task_count": 2048, 00:04:35.639 "sequence_count": 2048, 00:04:35.639 "buf_count": 2048 00:04:35.639 } 00:04:35.639 } 00:04:35.639 ] 00:04:35.639 }, 00:04:35.639 { 00:04:35.639 "subsystem": "bdev", 00:04:35.639 "config": [ 00:04:35.639 { 00:04:35.639 "method": "bdev_set_options", 00:04:35.639 "params": { 00:04:35.639 "bdev_io_pool_size": 65535, 00:04:35.639 "bdev_io_cache_size": 256, 00:04:35.639 "bdev_auto_examine": true, 00:04:35.639 "iobuf_small_cache_size": 128, 00:04:35.639 "iobuf_large_cache_size": 16 00:04:35.639 } 00:04:35.639 }, 00:04:35.639 { 00:04:35.639 "method": "bdev_raid_set_options", 00:04:35.639 "params": { 00:04:35.639 "process_window_size_kb": 1024 00:04:35.639 } 00:04:35.639 }, 00:04:35.639 { 00:04:35.639 "method": "bdev_iscsi_set_options", 00:04:35.639 "params": { 00:04:35.639 "timeout_sec": 30 00:04:35.639 } 
00:04:35.639 }, 00:04:35.639 { 00:04:35.639 "method": "bdev_nvme_set_options", 00:04:35.639 "params": { 00:04:35.639 "action_on_timeout": "none", 00:04:35.639 "timeout_us": 0, 00:04:35.639 "timeout_admin_us": 0, 00:04:35.639 "keep_alive_timeout_ms": 10000, 00:04:35.639 "arbitration_burst": 0, 00:04:35.639 "low_priority_weight": 0, 00:04:35.639 "medium_priority_weight": 0, 00:04:35.639 "high_priority_weight": 0, 00:04:35.639 "nvme_adminq_poll_period_us": 10000, 00:04:35.639 "nvme_ioq_poll_period_us": 0, 00:04:35.639 "io_queue_requests": 0, 00:04:35.639 "delay_cmd_submit": true, 00:04:35.639 "transport_retry_count": 4, 00:04:35.639 "bdev_retry_count": 3, 00:04:35.639 "transport_ack_timeout": 0, 00:04:35.639 "ctrlr_loss_timeout_sec": 0, 00:04:35.639 "reconnect_delay_sec": 0, 00:04:35.639 "fast_io_fail_timeout_sec": 0, 00:04:35.639 "disable_auto_failback": false, 00:04:35.639 "generate_uuids": false, 00:04:35.639 "transport_tos": 0, 00:04:35.639 "nvme_error_stat": false, 00:04:35.639 "rdma_srq_size": 0, 00:04:35.639 "io_path_stat": false, 00:04:35.639 "allow_accel_sequence": false, 00:04:35.639 "rdma_max_cq_size": 0, 00:04:35.639 "rdma_cm_event_timeout_ms": 0, 00:04:35.639 "dhchap_digests": [ 00:04:35.639 "sha256", 00:04:35.639 "sha384", 00:04:35.639 "sha512" 00:04:35.639 ], 00:04:35.639 "dhchap_dhgroups": [ 00:04:35.639 "null", 00:04:35.639 "ffdhe2048", 00:04:35.639 "ffdhe3072", 00:04:35.639 "ffdhe4096", 00:04:35.639 "ffdhe6144", 00:04:35.639 "ffdhe8192" 00:04:35.639 ] 00:04:35.639 } 00:04:35.639 }, 00:04:35.639 { 00:04:35.639 "method": "bdev_nvme_set_hotplug", 00:04:35.639 "params": { 00:04:35.639 "period_us": 100000, 00:04:35.639 "enable": false 00:04:35.639 } 00:04:35.639 }, 00:04:35.639 { 00:04:35.639 "method": "bdev_wait_for_examine" 00:04:35.639 } 00:04:35.639 ] 00:04:35.639 }, 00:04:35.639 { 00:04:35.639 "subsystem": "scsi", 00:04:35.639 "config": null 00:04:35.639 }, 00:04:35.639 { 00:04:35.639 "subsystem": "scheduler", 00:04:35.639 "config": [ 00:04:35.639 { 
00:04:35.639 "method": "framework_set_scheduler", 00:04:35.639 "params": { 00:04:35.639 "name": "static" 00:04:35.639 } 00:04:35.639 } 00:04:35.639 ] 00:04:35.639 }, 00:04:35.639 { 00:04:35.639 "subsystem": "vhost_scsi", 00:04:35.639 "config": [] 00:04:35.639 }, 00:04:35.639 { 00:04:35.639 "subsystem": "vhost_blk", 00:04:35.639 "config": [] 00:04:35.639 }, 00:04:35.639 { 00:04:35.639 "subsystem": "ublk", 00:04:35.639 "config": [] 00:04:35.639 }, 00:04:35.639 { 00:04:35.639 "subsystem": "nbd", 00:04:35.639 "config": [] 00:04:35.639 }, 00:04:35.639 { 00:04:35.639 "subsystem": "nvmf", 00:04:35.639 "config": [ 00:04:35.639 { 00:04:35.639 "method": "nvmf_set_config", 00:04:35.639 "params": { 00:04:35.639 "discovery_filter": "match_any", 00:04:35.639 "admin_cmd_passthru": { 00:04:35.639 "identify_ctrlr": false 00:04:35.639 } 00:04:35.639 } 00:04:35.639 }, 00:04:35.639 { 00:04:35.639 "method": "nvmf_set_max_subsystems", 00:04:35.639 "params": { 00:04:35.639 "max_subsystems": 1024 00:04:35.639 } 00:04:35.639 }, 00:04:35.639 { 00:04:35.639 "method": "nvmf_set_crdt", 00:04:35.639 "params": { 00:04:35.639 "crdt1": 0, 00:04:35.639 "crdt2": 0, 00:04:35.639 "crdt3": 0 00:04:35.639 } 00:04:35.639 }, 00:04:35.640 { 00:04:35.640 "method": "nvmf_create_transport", 00:04:35.640 "params": { 00:04:35.640 "trtype": "TCP", 00:04:35.640 "max_queue_depth": 128, 00:04:35.640 "max_io_qpairs_per_ctrlr": 127, 00:04:35.640 "in_capsule_data_size": 4096, 00:04:35.640 "max_io_size": 131072, 00:04:35.640 "io_unit_size": 131072, 00:04:35.640 "max_aq_depth": 128, 00:04:35.640 "num_shared_buffers": 511, 00:04:35.640 "buf_cache_size": 4294967295, 00:04:35.640 "dif_insert_or_strip": false, 00:04:35.640 "zcopy": false, 00:04:35.640 "c2h_success": true, 00:04:35.640 "sock_priority": 0, 00:04:35.640 "abort_timeout_sec": 1, 00:04:35.640 "ack_timeout": 0, 00:04:35.640 "data_wr_pool_size": 0 00:04:35.640 } 00:04:35.640 } 00:04:35.640 ] 00:04:35.640 }, 00:04:35.640 { 00:04:35.640 "subsystem": "iscsi", 
00:04:35.640 "config": [ 00:04:35.640 { 00:04:35.640 "method": "iscsi_set_options", 00:04:35.640 "params": { 00:04:35.640 "node_base": "iqn.2016-06.io.spdk", 00:04:35.640 "max_sessions": 128, 00:04:35.640 "max_connections_per_session": 2, 00:04:35.640 "max_queue_depth": 64, 00:04:35.640 "default_time2wait": 2, 00:04:35.640 "default_time2retain": 20, 00:04:35.640 "first_burst_length": 8192, 00:04:35.640 "immediate_data": true, 00:04:35.640 "allow_duplicated_isid": false, 00:04:35.640 "error_recovery_level": 0, 00:04:35.640 "nop_timeout": 60, 00:04:35.640 "nop_in_interval": 30, 00:04:35.640 "disable_chap": false, 00:04:35.640 "require_chap": false, 00:04:35.640 "mutual_chap": false, 00:04:35.640 "chap_group": 0, 00:04:35.640 "max_large_datain_per_connection": 64, 00:04:35.640 "max_r2t_per_connection": 4, 00:04:35.640 "pdu_pool_size": 36864, 00:04:35.640 "immediate_data_pool_size": 16384, 00:04:35.640 "data_out_pool_size": 2048 00:04:35.640 } 00:04:35.640 } 00:04:35.640 ] 00:04:35.640 } 00:04:35.640 ] 00:04:35.640 } 00:04:35.640 03:05:10 -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:35.640 03:05:10 -- rpc/skip_rpc.sh@40 -- # killprocess 1372753 00:04:35.640 03:05:10 -- common/autotest_common.sh@936 -- # '[' -z 1372753 ']' 00:04:35.640 03:05:10 -- common/autotest_common.sh@940 -- # kill -0 1372753 00:04:35.640 03:05:10 -- common/autotest_common.sh@941 -- # uname 00:04:35.640 03:05:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:35.640 03:05:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1372753 00:04:35.640 03:05:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:35.640 03:05:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:35.640 03:05:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1372753' 00:04:35.640 killing process with pid 1372753 00:04:35.640 03:05:10 -- common/autotest_common.sh@955 -- # kill 1372753 00:04:35.640 03:05:10 -- 
common/autotest_common.sh@960 -- # wait 1372753 00:04:36.209 03:05:10 -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1372892 00:04:36.209 03:05:10 -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:36.209 03:05:10 -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:41.487 03:05:15 -- rpc/skip_rpc.sh@50 -- # killprocess 1372892 00:04:41.487 03:05:15 -- common/autotest_common.sh@936 -- # '[' -z 1372892 ']' 00:04:41.487 03:05:15 -- common/autotest_common.sh@940 -- # kill -0 1372892 00:04:41.487 03:05:15 -- common/autotest_common.sh@941 -- # uname 00:04:41.487 03:05:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:41.487 03:05:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1372892 00:04:41.487 03:05:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:41.487 03:05:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:41.487 03:05:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1372892' 00:04:41.487 killing process with pid 1372892 00:04:41.487 03:05:15 -- common/autotest_common.sh@955 -- # kill 1372892 00:04:41.487 03:05:15 -- common/autotest_common.sh@960 -- # wait 1372892 00:04:41.487 03:05:15 -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:41.487 03:05:15 -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:41.487 00:04:41.487 real 0m6.591s 00:04:41.487 user 0m6.182s 00:04:41.487 sys 0m0.695s 00:04:41.487 03:05:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:41.487 03:05:15 -- common/autotest_common.sh@10 -- # set +x 00:04:41.487 ************************************ 00:04:41.487 END TEST skip_rpc_with_json 00:04:41.487 ************************************ 00:04:41.746 03:05:15 -- 
rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:41.746 03:05:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:41.746 03:05:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:41.746 03:05:15 -- common/autotest_common.sh@10 -- # set +x 00:04:41.746 ************************************ 00:04:41.746 START TEST skip_rpc_with_delay 00:04:41.746 ************************************ 00:04:41.746 03:05:16 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_delay 00:04:41.747 03:05:16 -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:41.747 03:05:16 -- common/autotest_common.sh@638 -- # local es=0 00:04:41.747 03:05:16 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:41.747 03:05:16 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:41.747 03:05:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:41.747 03:05:16 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:41.747 03:05:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:41.747 03:05:16 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:41.747 03:05:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:41.747 03:05:16 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:41.747 03:05:16 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:41.747 03:05:16 -- common/autotest_common.sh@641 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:41.747 [2024-04-25 03:05:16.150467] app.c: 751:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:41.747 [2024-04-25 03:05:16.150567] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:41.747 03:05:16 -- common/autotest_common.sh@641 -- # es=1 00:04:41.747 03:05:16 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:41.747 03:05:16 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:04:41.747 03:05:16 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:41.747 00:04:41.747 real 0m0.065s 00:04:41.747 user 0m0.045s 00:04:41.747 sys 0m0.019s 00:04:41.747 03:05:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:41.747 03:05:16 -- common/autotest_common.sh@10 -- # set +x 00:04:41.747 ************************************ 00:04:41.747 END TEST skip_rpc_with_delay 00:04:41.747 ************************************ 00:04:41.747 03:05:16 -- rpc/skip_rpc.sh@77 -- # uname 00:04:41.747 03:05:16 -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:41.747 03:05:16 -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:41.747 03:05:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:41.747 03:05:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:41.747 03:05:16 -- common/autotest_common.sh@10 -- # set +x 00:04:42.007 ************************************ 00:04:42.007 START TEST exit_on_failed_rpc_init 00:04:42.007 ************************************ 00:04:42.007 03:05:16 -- common/autotest_common.sh@1111 -- # test_exit_on_failed_rpc_init 00:04:42.007 03:05:16 -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1373623 00:04:42.007 03:05:16 -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:42.007 03:05:16 -- rpc/skip_rpc.sh@63 -- # 
waitforlisten 1373623 00:04:42.007 03:05:16 -- common/autotest_common.sh@817 -- # '[' -z 1373623 ']' 00:04:42.007 03:05:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:42.007 03:05:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:42.007 03:05:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:42.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:42.007 03:05:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:42.007 03:05:16 -- common/autotest_common.sh@10 -- # set +x 00:04:42.007 [2024-04-25 03:05:16.328432] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:04:42.007 [2024-04-25 03:05:16.328511] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1373623 ] 00:04:42.007 EAL: No free 2048 kB hugepages reported on node 1 00:04:42.007 [2024-04-25 03:05:16.385044] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.007 [2024-04-25 03:05:16.492165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.267 03:05:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:42.267 03:05:16 -- common/autotest_common.sh@850 -- # return 0 00:04:42.267 03:05:16 -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:42.267 03:05:16 -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:42.267 03:05:16 -- common/autotest_common.sh@638 -- # local es=0 00:04:42.267 03:05:16 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:42.267 03:05:16 -- common/autotest_common.sh@626 -- # 
local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:42.267 03:05:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:42.267 03:05:16 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:42.267 03:05:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:42.267 03:05:16 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:42.267 03:05:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:42.267 03:05:16 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:42.267 03:05:16 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:42.267 03:05:16 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:42.527 [2024-04-25 03:05:16.811960] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:04:42.527 [2024-04-25 03:05:16.812034] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1373743 ] 00:04:42.527 EAL: No free 2048 kB hugepages reported on node 1 00:04:42.527 [2024-04-25 03:05:16.872124] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.527 [2024-04-25 03:05:16.987799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:42.527 [2024-04-25 03:05:16.987917] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
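The trace above drives a second spdk_tgt instance through the NOT wrapper from autotest_common.sh, expecting it to fail because the RPC socket is already in use (es=234 is then mapped down before the final check). As a rough sketch of that expected-failure pattern — the helper body here is a simplified assumption, not the exact autotest_common.sh implementation:

```shell
# Simplified sketch of an expected-failure wrapper in the spirit of the NOT
# helper traced above: succeed only when the wrapped command exits non-zero.
NOT() {
    local es=0
    "$@" || es=$?          # capture the wrapped command's exit status
    if ((es == 0)); then
        return 1           # command unexpectedly succeeded
    fi
    return 0               # it failed, which is what we wanted
}

NOT false && echo "expected failure observed"
```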
00:04:42.527 [2024-04-25 03:05:16.987935] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:42.527 [2024-04-25 03:05:16.987945] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:42.786 03:05:17 -- common/autotest_common.sh@641 -- # es=234 00:04:42.786 03:05:17 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:42.786 03:05:17 -- common/autotest_common.sh@650 -- # es=106 00:04:42.786 03:05:17 -- common/autotest_common.sh@651 -- # case "$es" in 00:04:42.786 03:05:17 -- common/autotest_common.sh@658 -- # es=1 00:04:42.786 03:05:17 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:42.786 03:05:17 -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:42.786 03:05:17 -- rpc/skip_rpc.sh@70 -- # killprocess 1373623 00:04:42.786 03:05:17 -- common/autotest_common.sh@936 -- # '[' -z 1373623 ']' 00:04:42.786 03:05:17 -- common/autotest_common.sh@940 -- # kill -0 1373623 00:04:42.786 03:05:17 -- common/autotest_common.sh@941 -- # uname 00:04:42.786 03:05:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:42.786 03:05:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1373623 00:04:42.786 03:05:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:42.786 03:05:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:42.786 03:05:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1373623' 00:04:42.786 killing process with pid 1373623 00:04:42.786 03:05:17 -- common/autotest_common.sh@955 -- # kill 1373623 00:04:42.786 03:05:17 -- common/autotest_common.sh@960 -- # wait 1373623 00:04:43.355 00:04:43.355 real 0m1.322s 00:04:43.355 user 0m1.476s 00:04:43.355 sys 0m0.441s 00:04:43.355 03:05:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:43.355 03:05:17 -- common/autotest_common.sh@10 -- # set +x 00:04:43.355 ************************************ 00:04:43.355 END TEST exit_on_failed_rpc_init 
00:04:43.355 ************************************ 00:04:43.355 03:05:17 -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:43.355 00:04:43.355 real 0m13.999s 00:04:43.355 user 0m13.075s 00:04:43.355 sys 0m1.777s 00:04:43.355 03:05:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:43.355 03:05:17 -- common/autotest_common.sh@10 -- # set +x 00:04:43.355 ************************************ 00:04:43.355 END TEST skip_rpc 00:04:43.355 ************************************ 00:04:43.355 03:05:17 -- spdk/autotest.sh@167 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:43.355 03:05:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:43.355 03:05:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:43.355 03:05:17 -- common/autotest_common.sh@10 -- # set +x 00:04:43.355 ************************************ 00:04:43.355 START TEST rpc_client 00:04:43.355 ************************************ 00:04:43.355 03:05:17 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:43.355 * Looking for test storage... 
00:04:43.355 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:43.355 03:05:17 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:43.355 OK 00:04:43.355 03:05:17 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:43.355 00:04:43.355 real 0m0.066s 00:04:43.355 user 0m0.024s 00:04:43.355 sys 0m0.048s 00:04:43.355 03:05:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:43.355 03:05:17 -- common/autotest_common.sh@10 -- # set +x 00:04:43.355 ************************************ 00:04:43.355 END TEST rpc_client 00:04:43.355 ************************************ 00:04:43.355 03:05:17 -- spdk/autotest.sh@168 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:43.355 03:05:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:43.355 03:05:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:43.355 03:05:17 -- common/autotest_common.sh@10 -- # set +x 00:04:43.615 ************************************ 00:04:43.615 START TEST json_config 00:04:43.615 ************************************ 00:04:43.615 03:05:17 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:43.615 03:05:17 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:43.615 03:05:17 -- nvmf/common.sh@7 -- # uname -s 00:04:43.615 03:05:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:43.615 03:05:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:43.615 03:05:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:43.615 03:05:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:43.615 03:05:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:43.615 03:05:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:43.615 03:05:17 -- 
nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:43.615 03:05:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:43.615 03:05:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:43.615 03:05:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:43.615 03:05:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:43.615 03:05:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:43.615 03:05:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:43.615 03:05:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:43.615 03:05:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:43.615 03:05:17 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:43.615 03:05:17 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:43.615 03:05:17 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:43.615 03:05:17 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:43.615 03:05:17 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:43.615 03:05:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.615 03:05:17 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.615 03:05:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.615 03:05:17 -- paths/export.sh@5 -- # export PATH 00:04:43.615 03:05:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.615 03:05:17 -- nvmf/common.sh@47 -- # : 0 00:04:43.615 03:05:17 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:43.615 03:05:17 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:43.615 03:05:17 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:43.615 03:05:17 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:43.615 03:05:17 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:43.615 03:05:17 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:43.615 03:05:17 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:43.615 03:05:17 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:43.615 
03:05:17 -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:43.615 03:05:17 -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:43.615 03:05:17 -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:43.615 03:05:17 -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:43.615 03:05:17 -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:43.615 03:05:17 -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:43.615 03:05:17 -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:43.615 03:05:17 -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:43.615 03:05:17 -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:43.615 03:05:17 -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:43.615 03:05:17 -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:43.615 03:05:17 -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:43.615 03:05:17 -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:43.615 03:05:17 -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:43.615 03:05:17 -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:43.615 03:05:17 -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:04:43.615 INFO: JSON configuration test init 00:04:43.615 03:05:17 -- json_config/json_config.sh@357 -- # json_config_test_init 00:04:43.615 03:05:17 -- json_config/json_config.sh@262 -- # 
timing_enter json_config_test_init 00:04:43.615 03:05:17 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:43.615 03:05:17 -- common/autotest_common.sh@10 -- # set +x 00:04:43.615 03:05:17 -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:04:43.615 03:05:17 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:43.615 03:05:17 -- common/autotest_common.sh@10 -- # set +x 00:04:43.615 03:05:17 -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:04:43.615 03:05:17 -- json_config/common.sh@9 -- # local app=target 00:04:43.615 03:05:17 -- json_config/common.sh@10 -- # shift 00:04:43.615 03:05:17 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:43.615 03:05:17 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:43.615 03:05:17 -- json_config/common.sh@15 -- # local app_extra_params= 00:04:43.615 03:05:17 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:43.615 03:05:17 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:43.615 03:05:17 -- json_config/common.sh@22 -- # app_pid["$app"]=1374003 00:04:43.615 03:05:17 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:43.615 03:05:17 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:43.615 Waiting for target to run... 00:04:43.615 03:05:17 -- json_config/common.sh@25 -- # waitforlisten 1374003 /var/tmp/spdk_tgt.sock 00:04:43.615 03:05:17 -- common/autotest_common.sh@817 -- # '[' -z 1374003 ']' 00:04:43.615 03:05:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:43.615 03:05:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:43.615 03:05:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:04:43.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:43.615 03:05:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:43.615 03:05:17 -- common/autotest_common.sh@10 -- # set +x 00:04:43.615 [2024-04-25 03:05:18.030418] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:04:43.615 [2024-04-25 03:05:18.030504] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1374003 ] 00:04:43.615 EAL: No free 2048 kB hugepages reported on node 1 00:04:44.185 [2024-04-25 03:05:18.399353] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.185 [2024-04-25 03:05:18.485651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.754 03:05:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:44.754 03:05:18 -- common/autotest_common.sh@850 -- # return 0 00:04:44.754 03:05:18 -- json_config/common.sh@26 -- # echo '' 00:04:44.754 00:04:44.754 03:05:18 -- json_config/json_config.sh@269 -- # create_accel_config 00:04:44.754 03:05:18 -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:04:44.754 03:05:18 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:44.754 03:05:18 -- common/autotest_common.sh@10 -- # set +x 00:04:44.754 03:05:18 -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:04:44.754 03:05:18 -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:04:44.754 03:05:18 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:44.754 03:05:18 -- common/autotest_common.sh@10 -- # set +x 00:04:44.754 03:05:18 -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:44.754 03:05:18 -- json_config/json_config.sh@274 -- # tgt_rpc load_config 
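The waitforlisten helper behind the "Waiting for process to start up..." message above polls until the target's RPC socket comes up. A minimal sketch of just the retry loop, using a plain file as a stand-in for /var/tmp/spdk_tgt.sock (the real helper also probes the socket with an actual RPC call, which is omitted here):

```shell
# Minimal sketch of the polling loop inside waitforlisten: retry up to
# max_retries times, sleeping briefly between checks. Only path existence is
# tested here; the real helper also verifies the RPC server answers.
waitforpath() {
    local path=$1 max_retries=${2:-100} i
    for ((i = 0; i < max_retries; i++)); do
        [ -e "$path" ] && return 0
        sleep 0.1
    done
    return 1
}

sock_stub=$(mktemp -u)                 # hypothetical stand-in for spdk_tgt.sock
( sleep 0.3; touch "$sock_stub" ) &    # pretend the target comes up shortly
waitforpath "$sock_stub" && echo "target is listening (stub)"
rm -f "$sock_stub"
```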
00:04:44.754 03:05:18 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:48.056 03:05:22 -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:04:48.056 03:05:22 -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:48.056 03:05:22 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:48.056 03:05:22 -- common/autotest_common.sh@10 -- # set +x 00:04:48.056 03:05:22 -- json_config/json_config.sh@45 -- # local ret=0 00:04:48.056 03:05:22 -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:48.056 03:05:22 -- json_config/json_config.sh@46 -- # local enabled_types 00:04:48.056 03:05:22 -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:48.056 03:05:22 -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:48.056 03:05:22 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:48.056 03:05:22 -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:48.056 03:05:22 -- json_config/json_config.sh@48 -- # local get_types 00:04:48.056 03:05:22 -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:48.056 03:05:22 -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:04:48.056 03:05:22 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:48.056 03:05:22 -- common/autotest_common.sh@10 -- # set +x 00:04:48.056 03:05:22 -- json_config/json_config.sh@55 -- # return 0 00:04:48.056 03:05:22 -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:04:48.056 03:05:22 -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:48.056 03:05:22 -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:48.056 03:05:22 -- 
json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:04:48.056 03:05:22 -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:04:48.056 03:05:22 -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:04:48.056 03:05:22 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:48.056 03:05:22 -- common/autotest_common.sh@10 -- # set +x 00:04:48.056 03:05:22 -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:48.056 03:05:22 -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:04:48.056 03:05:22 -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:04:48.056 03:05:22 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:48.056 03:05:22 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:48.314 MallocForNvmf0 00:04:48.314 03:05:22 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:48.314 03:05:22 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:48.573 MallocForNvmf1 00:04:48.573 03:05:22 -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:48.573 03:05:22 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:48.830 [2024-04-25 03:05:23.134248] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:48.830 03:05:23 -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:48.830 03:05:23 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:49.088 03:05:23 -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:49.088 03:05:23 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:49.347 03:05:23 -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:49.347 03:05:23 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:49.605 03:05:23 -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:49.605 03:05:23 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:49.605 [2024-04-25 03:05:24.101447] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:49.864 03:05:24 -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:04:49.864 03:05:24 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:49.864 03:05:24 -- common/autotest_common.sh@10 -- # set +x 00:04:49.864 03:05:24 -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:04:49.864 03:05:24 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:49.864 03:05:24 -- common/autotest_common.sh@10 -- # set +x 00:04:49.864 03:05:24 -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:04:49.864 03:05:24 -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 
00:04:49.864 03:05:24 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:50.122 MallocBdevForConfigChangeCheck 00:04:50.122 03:05:24 -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:04:50.122 03:05:24 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:50.122 03:05:24 -- common/autotest_common.sh@10 -- # set +x 00:04:50.122 03:05:24 -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:04:50.122 03:05:24 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:50.379 03:05:24 -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:04:50.380 INFO: shutting down applications... 00:04:50.380 03:05:24 -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:04:50.380 03:05:24 -- json_config/json_config.sh@368 -- # json_config_clear target 00:04:50.380 03:05:24 -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:04:50.380 03:05:24 -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:52.289 Calling clear_iscsi_subsystem 00:04:52.289 Calling clear_nvmf_subsystem 00:04:52.289 Calling clear_nbd_subsystem 00:04:52.289 Calling clear_ublk_subsystem 00:04:52.289 Calling clear_vhost_blk_subsystem 00:04:52.289 Calling clear_vhost_scsi_subsystem 00:04:52.289 Calling clear_bdev_subsystem 00:04:52.289 03:05:26 -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:52.289 03:05:26 -- json_config/json_config.sh@343 -- # count=100 00:04:52.289 03:05:26 -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:04:52.289 03:05:26 -- json_config/json_config.sh@345 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:52.289 03:05:26 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:52.289 03:05:26 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:52.549 03:05:26 -- json_config/json_config.sh@345 -- # break 00:04:52.549 03:05:26 -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:04:52.549 03:05:26 -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:04:52.549 03:05:26 -- json_config/common.sh@31 -- # local app=target 00:04:52.549 03:05:26 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:52.549 03:05:26 -- json_config/common.sh@35 -- # [[ -n 1374003 ]] 00:04:52.549 03:05:26 -- json_config/common.sh@38 -- # kill -SIGINT 1374003 00:04:52.549 03:05:26 -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:52.549 03:05:26 -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:52.549 03:05:26 -- json_config/common.sh@41 -- # kill -0 1374003 00:04:52.549 03:05:26 -- json_config/common.sh@45 -- # sleep 0.5 00:04:52.810 03:05:27 -- json_config/common.sh@40 -- # (( i++ )) 00:04:52.810 03:05:27 -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:52.810 03:05:27 -- json_config/common.sh@41 -- # kill -0 1374003 00:04:52.810 03:05:27 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:52.810 03:05:27 -- json_config/common.sh@43 -- # break 00:04:52.810 03:05:27 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:52.810 03:05:27 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:52.810 SPDK target shutdown done 00:04:52.810 03:05:27 -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:04:52.810 INFO: relaunching applications... 
00:04:52.810 03:05:27 -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:52.810 03:05:27 -- json_config/common.sh@9 -- # local app=target 00:04:52.810 03:05:27 -- json_config/common.sh@10 -- # shift 00:04:52.810 03:05:27 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:52.810 03:05:27 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:52.810 03:05:27 -- json_config/common.sh@15 -- # local app_extra_params= 00:04:52.810 03:05:27 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:52.810 03:05:27 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:52.810 03:05:27 -- json_config/common.sh@22 -- # app_pid["$app"]=1375200 00:04:52.810 03:05:27 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:52.810 03:05:27 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:52.810 Waiting for target to run... 00:04:52.810 03:05:27 -- json_config/common.sh@25 -- # waitforlisten 1375200 /var/tmp/spdk_tgt.sock 00:04:52.810 03:05:27 -- common/autotest_common.sh@817 -- # '[' -z 1375200 ']' 00:04:52.810 03:05:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:52.810 03:05:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:52.810 03:05:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:52.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:52.810 03:05:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:52.810 03:05:27 -- common/autotest_common.sh@10 -- # set +x 00:04:53.068 [2024-04-25 03:05:27.353010] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 
00:04:53.068 [2024-04-25 03:05:27.353091] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1375200 ] 00:04:53.068 EAL: No free 2048 kB hugepages reported on node 1 00:04:53.327 [2024-04-25 03:05:27.699184] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.327 [2024-04-25 03:05:27.786155] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.623 [2024-04-25 03:05:30.818582] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:56.623 [2024-04-25 03:05:30.851044] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:56.623 03:05:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:56.623 03:05:30 -- common/autotest_common.sh@850 -- # return 0 00:04:56.623 03:05:30 -- json_config/common.sh@26 -- # echo '' 00:04:56.623 00:04:56.623 03:05:30 -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:04:56.623 03:05:30 -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:56.623 INFO: Checking if target configuration is the same... 00:04:56.623 03:05:30 -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:56.623 03:05:30 -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:04:56.623 03:05:30 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:56.623 + '[' 2 -ne 2 ']' 00:04:56.623 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:56.623 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:04:56.623 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:56.623 +++ basename /dev/fd/62 00:04:56.623 ++ mktemp /tmp/62.XXX 00:04:56.623 + tmp_file_1=/tmp/62.ISf 00:04:56.623 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:56.623 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:56.624 + tmp_file_2=/tmp/spdk_tgt_config.json.ddk 00:04:56.624 + ret=0 00:04:56.624 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:56.884 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:56.884 + diff -u /tmp/62.ISf /tmp/spdk_tgt_config.json.ddk 00:04:56.884 + echo 'INFO: JSON config files are the same' 00:04:56.884 INFO: JSON config files are the same 00:04:56.884 + rm /tmp/62.ISf /tmp/spdk_tgt_config.json.ddk 00:04:56.884 + exit 0 00:04:56.884 03:05:31 -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:04:56.884 03:05:31 -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:56.884 INFO: changing configuration and checking if this can be detected... 
00:04:56.884 03:05:31 -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:56.884 03:05:31 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:57.143 03:05:31 -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:57.143 03:05:31 -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:04:57.143 03:05:31 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:57.143 + '[' 2 -ne 2 ']' 00:04:57.143 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:57.143 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:04:57.143 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:57.143 +++ basename /dev/fd/62 00:04:57.143 ++ mktemp /tmp/62.XXX 00:04:57.143 + tmp_file_1=/tmp/62.5Na 00:04:57.143 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:57.143 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:57.143 + tmp_file_2=/tmp/spdk_tgt_config.json.gsS 00:04:57.143 + ret=0 00:04:57.143 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:57.711 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:57.711 + diff -u /tmp/62.5Na /tmp/spdk_tgt_config.json.gsS 00:04:57.711 + ret=1 00:04:57.711 + echo '=== Start of file: /tmp/62.5Na ===' 00:04:57.711 + cat /tmp/62.5Na 00:04:57.711 + echo '=== End of file: /tmp/62.5Na ===' 00:04:57.711 + echo '' 00:04:57.711 + echo '=== Start of file: /tmp/spdk_tgt_config.json.gsS ===' 00:04:57.711 + cat /tmp/spdk_tgt_config.json.gsS 00:04:57.711 + echo '=== End of file: /tmp/spdk_tgt_config.json.gsS ===' 00:04:57.711 + echo '' 00:04:57.711 + rm /tmp/62.5Na /tmp/spdk_tgt_config.json.gsS 00:04:57.711 + exit 1 00:04:57.711 03:05:32 -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:04:57.711 INFO: configuration change detected. 
00:04:57.711 03:05:32 -- json_config/json_config.sh@394 -- # json_config_test_fini 00:04:57.711 03:05:32 -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:04:57.711 03:05:32 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:57.711 03:05:32 -- common/autotest_common.sh@10 -- # set +x 00:04:57.711 03:05:32 -- json_config/json_config.sh@307 -- # local ret=0 00:04:57.711 03:05:32 -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:04:57.711 03:05:32 -- json_config/json_config.sh@317 -- # [[ -n 1375200 ]] 00:04:57.711 03:05:32 -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:04:57.711 03:05:32 -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:04:57.711 03:05:32 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:57.711 03:05:32 -- common/autotest_common.sh@10 -- # set +x 00:04:57.711 03:05:32 -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:04:57.711 03:05:32 -- json_config/json_config.sh@193 -- # uname -s 00:04:57.711 03:05:32 -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:04:57.711 03:05:32 -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:04:57.711 03:05:32 -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:04:57.711 03:05:32 -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:04:57.711 03:05:32 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:57.711 03:05:32 -- common/autotest_common.sh@10 -- # set +x 00:04:57.711 03:05:32 -- json_config/json_config.sh@323 -- # killprocess 1375200 00:04:57.712 03:05:32 -- common/autotest_common.sh@936 -- # '[' -z 1375200 ']' 00:04:57.712 03:05:32 -- common/autotest_common.sh@940 -- # kill -0 1375200 00:04:57.712 03:05:32 -- common/autotest_common.sh@941 -- # uname 00:04:57.712 03:05:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:57.712 03:05:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1375200 00:04:57.712 
03:05:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:57.712 03:05:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:57.712 03:05:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1375200' 00:04:57.712 killing process with pid 1375200 00:04:57.712 03:05:32 -- common/autotest_common.sh@955 -- # kill 1375200 00:04:57.712 03:05:32 -- common/autotest_common.sh@960 -- # wait 1375200 00:04:59.615 03:05:33 -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:59.615 03:05:33 -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:04:59.615 03:05:33 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:59.615 03:05:33 -- common/autotest_common.sh@10 -- # set +x 00:04:59.615 03:05:33 -- json_config/json_config.sh@328 -- # return 0 00:04:59.615 03:05:33 -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:04:59.615 INFO: Success 00:04:59.615 00:04:59.615 real 0m15.791s 00:04:59.615 user 0m17.778s 00:04:59.615 sys 0m1.791s 00:04:59.615 03:05:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:59.615 03:05:33 -- common/autotest_common.sh@10 -- # set +x 00:04:59.615 ************************************ 00:04:59.615 END TEST json_config 00:04:59.615 ************************************ 00:04:59.615 03:05:33 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:59.615 03:05:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:59.615 03:05:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:59.615 03:05:33 -- common/autotest_common.sh@10 -- # set +x 00:04:59.615 ************************************ 00:04:59.615 START TEST json_config_extra_key 00:04:59.615 ************************************ 00:04:59.615 
03:05:33 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:59.615 03:05:33 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:59.615 03:05:33 -- nvmf/common.sh@7 -- # uname -s 00:04:59.615 03:05:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:59.615 03:05:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:59.615 03:05:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:59.615 03:05:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:59.615 03:05:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:59.615 03:05:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:59.615 03:05:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:59.615 03:05:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:59.615 03:05:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:59.615 03:05:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:59.615 03:05:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:59.615 03:05:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:59.615 03:05:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:59.615 03:05:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:59.615 03:05:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:59.615 03:05:33 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:59.615 03:05:33 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:59.615 03:05:33 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:59.615 03:05:33 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:59.615 03:05:33 -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:04:59.615 03:05:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.615 03:05:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.615 03:05:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.615 03:05:33 -- paths/export.sh@5 -- # export PATH 00:04:59.615 03:05:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.615 03:05:33 -- nvmf/common.sh@47 -- # : 0 00:04:59.615 03:05:33 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
00:04:59.615 03:05:33 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:59.615 03:05:33 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:59.615 03:05:33 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:59.615 03:05:33 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:59.615 03:05:33 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:59.615 03:05:33 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:59.615 03:05:33 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:59.615 03:05:33 -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:59.615 03:05:33 -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:59.615 03:05:33 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:59.615 03:05:33 -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:59.615 03:05:33 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:59.615 03:05:33 -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:59.615 03:05:33 -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:59.615 03:05:33 -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:59.615 03:05:33 -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:59.615 03:05:33 -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:59.615 03:05:33 -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:59.615 INFO: launching applications... 
00:04:59.615 03:05:33 -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:59.615 03:05:33 -- json_config/common.sh@9 -- # local app=target 00:04:59.615 03:05:33 -- json_config/common.sh@10 -- # shift 00:04:59.615 03:05:33 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:59.615 03:05:33 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:59.615 03:05:33 -- json_config/common.sh@15 -- # local app_extra_params= 00:04:59.615 03:05:33 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:59.615 03:05:33 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:59.615 03:05:33 -- json_config/common.sh@22 -- # app_pid["$app"]=1376117 00:04:59.615 03:05:33 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:59.615 03:05:33 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:59.615 Waiting for target to run... 00:04:59.615 03:05:33 -- json_config/common.sh@25 -- # waitforlisten 1376117 /var/tmp/spdk_tgt.sock 00:04:59.615 03:05:33 -- common/autotest_common.sh@817 -- # '[' -z 1376117 ']' 00:04:59.615 03:05:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:59.615 03:05:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:59.615 03:05:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:59.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:04:59.615 03:05:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:59.615 03:05:33 -- common/autotest_common.sh@10 -- # set +x 00:04:59.615 [2024-04-25 03:05:33.942595] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:04:59.615 [2024-04-25 03:05:33.942701] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1376117 ] 00:04:59.615 EAL: No free 2048 kB hugepages reported on node 1 00:04:59.873 [2024-04-25 03:05:34.282182] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.873 [2024-04-25 03:05:34.369269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.441 03:05:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:00.441 03:05:34 -- common/autotest_common.sh@850 -- # return 0 00:05:00.441 03:05:34 -- json_config/common.sh@26 -- # echo '' 00:05:00.441 00:05:00.441 03:05:34 -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:00.441 INFO: shutting down applications... 
00:05:00.441 03:05:34 -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:00.441 03:05:34 -- json_config/common.sh@31 -- # local app=target 00:05:00.441 03:05:34 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:00.441 03:05:34 -- json_config/common.sh@35 -- # [[ -n 1376117 ]] 00:05:00.441 03:05:34 -- json_config/common.sh@38 -- # kill -SIGINT 1376117 00:05:00.441 03:05:34 -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:00.441 03:05:34 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:00.441 03:05:34 -- json_config/common.sh@41 -- # kill -0 1376117 00:05:00.441 03:05:34 -- json_config/common.sh@45 -- # sleep 0.5 00:05:01.017 03:05:35 -- json_config/common.sh@40 -- # (( i++ )) 00:05:01.017 03:05:35 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:01.017 03:05:35 -- json_config/common.sh@41 -- # kill -0 1376117 00:05:01.017 03:05:35 -- json_config/common.sh@45 -- # sleep 0.5 00:05:01.583 03:05:35 -- json_config/common.sh@40 -- # (( i++ )) 00:05:01.583 03:05:35 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:01.583 03:05:35 -- json_config/common.sh@41 -- # kill -0 1376117 00:05:01.583 03:05:35 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:01.583 03:05:35 -- json_config/common.sh@43 -- # break 00:05:01.583 03:05:35 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:01.583 03:05:35 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:01.583 SPDK target shutdown done 00:05:01.583 03:05:35 -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:01.583 Success 00:05:01.583 00:05:01.583 real 0m2.036s 00:05:01.583 user 0m1.536s 00:05:01.583 sys 0m0.445s 00:05:01.583 03:05:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:01.583 03:05:35 -- common/autotest_common.sh@10 -- # set +x 00:05:01.583 ************************************ 00:05:01.583 END TEST json_config_extra_key 00:05:01.583 ************************************ 00:05:01.583 03:05:35 -- spdk/autotest.sh@170 -- # 
run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:01.583 03:05:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:01.583 03:05:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:01.584 03:05:35 -- common/autotest_common.sh@10 -- # set +x 00:05:01.584 ************************************ 00:05:01.584 START TEST alias_rpc 00:05:01.584 ************************************ 00:05:01.584 03:05:36 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:01.584 * Looking for test storage... 00:05:01.584 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:01.584 03:05:36 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:01.584 03:05:36 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1376438 00:05:01.584 03:05:36 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:01.584 03:05:36 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1376438 00:05:01.584 03:05:36 -- common/autotest_common.sh@817 -- # '[' -z 1376438 ']' 00:05:01.584 03:05:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.584 03:05:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:01.584 03:05:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:01.584 03:05:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:01.584 03:05:36 -- common/autotest_common.sh@10 -- # set +x 00:05:01.842 [2024-04-25 03:05:36.105414] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 
00:05:01.842 [2024-04-25 03:05:36.105510] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1376438 ] 00:05:01.842 EAL: No free 2048 kB hugepages reported on node 1 00:05:01.842 [2024-04-25 03:05:36.163674] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.842 [2024-04-25 03:05:36.266450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.099 03:05:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:02.099 03:05:36 -- common/autotest_common.sh@850 -- # return 0 00:05:02.099 03:05:36 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:02.357 03:05:36 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1376438 00:05:02.357 03:05:36 -- common/autotest_common.sh@936 -- # '[' -z 1376438 ']' 00:05:02.357 03:05:36 -- common/autotest_common.sh@940 -- # kill -0 1376438 00:05:02.357 03:05:36 -- common/autotest_common.sh@941 -- # uname 00:05:02.357 03:05:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:02.357 03:05:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1376438 00:05:02.357 03:05:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:02.357 03:05:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:02.357 03:05:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1376438' 00:05:02.357 killing process with pid 1376438 00:05:02.357 03:05:36 -- common/autotest_common.sh@955 -- # kill 1376438 00:05:02.357 03:05:36 -- common/autotest_common.sh@960 -- # wait 1376438 00:05:02.926 00:05:02.926 real 0m1.268s 00:05:02.926 user 0m1.331s 00:05:02.926 sys 0m0.422s 00:05:02.926 03:05:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:02.926 03:05:37 -- common/autotest_common.sh@10 -- # set +x 
00:05:02.926 ************************************ 00:05:02.926 END TEST alias_rpc 00:05:02.926 ************************************ 00:05:02.926 03:05:37 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:05:02.926 03:05:37 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:02.926 03:05:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:02.926 03:05:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:02.926 03:05:37 -- common/autotest_common.sh@10 -- # set +x 00:05:02.926 ************************************ 00:05:02.926 START TEST spdkcli_tcp 00:05:02.926 ************************************ 00:05:02.926 03:05:37 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:03.187 * Looking for test storage... 00:05:03.187 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:03.187 03:05:37 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:03.187 03:05:37 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:03.187 03:05:37 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:03.187 03:05:37 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:03.187 03:05:37 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:03.187 03:05:37 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:03.187 03:05:37 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:03.187 03:05:37 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:03.187 03:05:37 -- common/autotest_common.sh@10 -- # set +x 00:05:03.187 03:05:37 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1376638 00:05:03.187 03:05:37 -- spdkcli/tcp.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:03.187 03:05:37 -- spdkcli/tcp.sh@27 -- # waitforlisten 1376638 00:05:03.187 03:05:37 -- common/autotest_common.sh@817 -- # '[' -z 1376638 ']' 00:05:03.187 03:05:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.187 03:05:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:03.187 03:05:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:03.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:03.187 03:05:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:03.187 03:05:37 -- common/autotest_common.sh@10 -- # set +x 00:05:03.187 [2024-04-25 03:05:37.506897] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:05:03.187 [2024-04-25 03:05:37.507002] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1376638 ] 00:05:03.187 EAL: No free 2048 kB hugepages reported on node 1 00:05:03.187 [2024-04-25 03:05:37.564040] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:03.187 [2024-04-25 03:05:37.668046] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:03.187 [2024-04-25 03:05:37.668049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.445 03:05:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:03.445 03:05:37 -- common/autotest_common.sh@850 -- # return 0 00:05:03.445 03:05:37 -- spdkcli/tcp.sh@31 -- # socat_pid=1376663 00:05:03.445 03:05:37 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:03.445 03:05:37 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 
100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:03.705 [ 00:05:03.705 "bdev_malloc_delete", 00:05:03.705 "bdev_malloc_create", 00:05:03.705 "bdev_null_resize", 00:05:03.705 "bdev_null_delete", 00:05:03.705 "bdev_null_create", 00:05:03.705 "bdev_nvme_cuse_unregister", 00:05:03.705 "bdev_nvme_cuse_register", 00:05:03.705 "bdev_opal_new_user", 00:05:03.705 "bdev_opal_set_lock_state", 00:05:03.705 "bdev_opal_delete", 00:05:03.705 "bdev_opal_get_info", 00:05:03.705 "bdev_opal_create", 00:05:03.705 "bdev_nvme_opal_revert", 00:05:03.705 "bdev_nvme_opal_init", 00:05:03.705 "bdev_nvme_send_cmd", 00:05:03.705 "bdev_nvme_get_path_iostat", 00:05:03.705 "bdev_nvme_get_mdns_discovery_info", 00:05:03.705 "bdev_nvme_stop_mdns_discovery", 00:05:03.705 "bdev_nvme_start_mdns_discovery", 00:05:03.705 "bdev_nvme_set_multipath_policy", 00:05:03.705 "bdev_nvme_set_preferred_path", 00:05:03.705 "bdev_nvme_get_io_paths", 00:05:03.705 "bdev_nvme_remove_error_injection", 00:05:03.705 "bdev_nvme_add_error_injection", 00:05:03.705 "bdev_nvme_get_discovery_info", 00:05:03.705 "bdev_nvme_stop_discovery", 00:05:03.705 "bdev_nvme_start_discovery", 00:05:03.705 "bdev_nvme_get_controller_health_info", 00:05:03.705 "bdev_nvme_disable_controller", 00:05:03.705 "bdev_nvme_enable_controller", 00:05:03.705 "bdev_nvme_reset_controller", 00:05:03.705 "bdev_nvme_get_transport_statistics", 00:05:03.705 "bdev_nvme_apply_firmware", 00:05:03.705 "bdev_nvme_detach_controller", 00:05:03.705 "bdev_nvme_get_controllers", 00:05:03.705 "bdev_nvme_attach_controller", 00:05:03.705 "bdev_nvme_set_hotplug", 00:05:03.705 "bdev_nvme_set_options", 00:05:03.705 "bdev_passthru_delete", 00:05:03.705 "bdev_passthru_create", 00:05:03.705 "bdev_lvol_grow_lvstore", 00:05:03.705 "bdev_lvol_get_lvols", 00:05:03.705 "bdev_lvol_get_lvstores", 00:05:03.705 "bdev_lvol_delete", 00:05:03.705 "bdev_lvol_set_read_only", 00:05:03.705 "bdev_lvol_resize", 00:05:03.705 "bdev_lvol_decouple_parent", 00:05:03.705 "bdev_lvol_inflate", 
00:05:03.705 "bdev_lvol_rename", 00:05:03.705 "bdev_lvol_clone_bdev", 00:05:03.705 "bdev_lvol_clone", 00:05:03.705 "bdev_lvol_snapshot", 00:05:03.705 "bdev_lvol_create", 00:05:03.705 "bdev_lvol_delete_lvstore", 00:05:03.705 "bdev_lvol_rename_lvstore", 00:05:03.705 "bdev_lvol_create_lvstore", 00:05:03.705 "bdev_raid_set_options", 00:05:03.705 "bdev_raid_remove_base_bdev", 00:05:03.705 "bdev_raid_add_base_bdev", 00:05:03.705 "bdev_raid_delete", 00:05:03.705 "bdev_raid_create", 00:05:03.705 "bdev_raid_get_bdevs", 00:05:03.705 "bdev_error_inject_error", 00:05:03.705 "bdev_error_delete", 00:05:03.705 "bdev_error_create", 00:05:03.705 "bdev_split_delete", 00:05:03.705 "bdev_split_create", 00:05:03.705 "bdev_delay_delete", 00:05:03.705 "bdev_delay_create", 00:05:03.705 "bdev_delay_update_latency", 00:05:03.705 "bdev_zone_block_delete", 00:05:03.705 "bdev_zone_block_create", 00:05:03.705 "blobfs_create", 00:05:03.705 "blobfs_detect", 00:05:03.705 "blobfs_set_cache_size", 00:05:03.705 "bdev_aio_delete", 00:05:03.705 "bdev_aio_rescan", 00:05:03.705 "bdev_aio_create", 00:05:03.705 "bdev_ftl_set_property", 00:05:03.705 "bdev_ftl_get_properties", 00:05:03.705 "bdev_ftl_get_stats", 00:05:03.705 "bdev_ftl_unmap", 00:05:03.705 "bdev_ftl_unload", 00:05:03.705 "bdev_ftl_delete", 00:05:03.705 "bdev_ftl_load", 00:05:03.705 "bdev_ftl_create", 00:05:03.705 "bdev_virtio_attach_controller", 00:05:03.706 "bdev_virtio_scsi_get_devices", 00:05:03.706 "bdev_virtio_detach_controller", 00:05:03.706 "bdev_virtio_blk_set_hotplug", 00:05:03.706 "bdev_iscsi_delete", 00:05:03.706 "bdev_iscsi_create", 00:05:03.706 "bdev_iscsi_set_options", 00:05:03.706 "accel_error_inject_error", 00:05:03.706 "ioat_scan_accel_module", 00:05:03.706 "dsa_scan_accel_module", 00:05:03.706 "iaa_scan_accel_module", 00:05:03.706 "keyring_file_remove_key", 00:05:03.706 "keyring_file_add_key", 00:05:03.706 "iscsi_get_histogram", 00:05:03.706 "iscsi_enable_histogram", 00:05:03.706 "iscsi_set_options", 00:05:03.706 
"iscsi_get_auth_groups", 00:05:03.706 "iscsi_auth_group_remove_secret", 00:05:03.706 "iscsi_auth_group_add_secret", 00:05:03.706 "iscsi_delete_auth_group", 00:05:03.706 "iscsi_create_auth_group", 00:05:03.706 "iscsi_set_discovery_auth", 00:05:03.706 "iscsi_get_options", 00:05:03.706 "iscsi_target_node_request_logout", 00:05:03.706 "iscsi_target_node_set_redirect", 00:05:03.706 "iscsi_target_node_set_auth", 00:05:03.706 "iscsi_target_node_add_lun", 00:05:03.706 "iscsi_get_stats", 00:05:03.706 "iscsi_get_connections", 00:05:03.706 "iscsi_portal_group_set_auth", 00:05:03.706 "iscsi_start_portal_group", 00:05:03.706 "iscsi_delete_portal_group", 00:05:03.706 "iscsi_create_portal_group", 00:05:03.706 "iscsi_get_portal_groups", 00:05:03.706 "iscsi_delete_target_node", 00:05:03.706 "iscsi_target_node_remove_pg_ig_maps", 00:05:03.706 "iscsi_target_node_add_pg_ig_maps", 00:05:03.706 "iscsi_create_target_node", 00:05:03.706 "iscsi_get_target_nodes", 00:05:03.706 "iscsi_delete_initiator_group", 00:05:03.706 "iscsi_initiator_group_remove_initiators", 00:05:03.706 "iscsi_initiator_group_add_initiators", 00:05:03.706 "iscsi_create_initiator_group", 00:05:03.706 "iscsi_get_initiator_groups", 00:05:03.706 "nvmf_set_crdt", 00:05:03.706 "nvmf_set_config", 00:05:03.706 "nvmf_set_max_subsystems", 00:05:03.706 "nvmf_subsystem_get_listeners", 00:05:03.706 "nvmf_subsystem_get_qpairs", 00:05:03.706 "nvmf_subsystem_get_controllers", 00:05:03.706 "nvmf_get_stats", 00:05:03.706 "nvmf_get_transports", 00:05:03.706 "nvmf_create_transport", 00:05:03.706 "nvmf_get_targets", 00:05:03.706 "nvmf_delete_target", 00:05:03.706 "nvmf_create_target", 00:05:03.706 "nvmf_subsystem_allow_any_host", 00:05:03.706 "nvmf_subsystem_remove_host", 00:05:03.706 "nvmf_subsystem_add_host", 00:05:03.706 "nvmf_ns_remove_host", 00:05:03.706 "nvmf_ns_add_host", 00:05:03.706 "nvmf_subsystem_remove_ns", 00:05:03.706 "nvmf_subsystem_add_ns", 00:05:03.706 "nvmf_subsystem_listener_set_ana_state", 00:05:03.706 
"nvmf_discovery_get_referrals", 00:05:03.706 "nvmf_discovery_remove_referral", 00:05:03.706 "nvmf_discovery_add_referral", 00:05:03.706 "nvmf_subsystem_remove_listener", 00:05:03.706 "nvmf_subsystem_add_listener", 00:05:03.706 "nvmf_delete_subsystem", 00:05:03.706 "nvmf_create_subsystem", 00:05:03.706 "nvmf_get_subsystems", 00:05:03.706 "env_dpdk_get_mem_stats", 00:05:03.706 "nbd_get_disks", 00:05:03.706 "nbd_stop_disk", 00:05:03.706 "nbd_start_disk", 00:05:03.706 "ublk_recover_disk", 00:05:03.706 "ublk_get_disks", 00:05:03.706 "ublk_stop_disk", 00:05:03.706 "ublk_start_disk", 00:05:03.706 "ublk_destroy_target", 00:05:03.706 "ublk_create_target", 00:05:03.706 "virtio_blk_create_transport", 00:05:03.706 "virtio_blk_get_transports", 00:05:03.706 "vhost_controller_set_coalescing", 00:05:03.706 "vhost_get_controllers", 00:05:03.706 "vhost_delete_controller", 00:05:03.706 "vhost_create_blk_controller", 00:05:03.706 "vhost_scsi_controller_remove_target", 00:05:03.706 "vhost_scsi_controller_add_target", 00:05:03.706 "vhost_start_scsi_controller", 00:05:03.706 "vhost_create_scsi_controller", 00:05:03.706 "thread_set_cpumask", 00:05:03.706 "framework_get_scheduler", 00:05:03.706 "framework_set_scheduler", 00:05:03.706 "framework_get_reactors", 00:05:03.706 "thread_get_io_channels", 00:05:03.706 "thread_get_pollers", 00:05:03.706 "thread_get_stats", 00:05:03.706 "framework_monitor_context_switch", 00:05:03.706 "spdk_kill_instance", 00:05:03.706 "log_enable_timestamps", 00:05:03.706 "log_get_flags", 00:05:03.706 "log_clear_flag", 00:05:03.706 "log_set_flag", 00:05:03.706 "log_get_level", 00:05:03.706 "log_set_level", 00:05:03.706 "log_get_print_level", 00:05:03.706 "log_set_print_level", 00:05:03.706 "framework_enable_cpumask_locks", 00:05:03.706 "framework_disable_cpumask_locks", 00:05:03.706 "framework_wait_init", 00:05:03.706 "framework_start_init", 00:05:03.706 "scsi_get_devices", 00:05:03.706 "bdev_get_histogram", 00:05:03.706 "bdev_enable_histogram", 00:05:03.706 
"bdev_set_qos_limit", 00:05:03.706 "bdev_set_qd_sampling_period", 00:05:03.706 "bdev_get_bdevs", 00:05:03.706 "bdev_reset_iostat", 00:05:03.706 "bdev_get_iostat", 00:05:03.706 "bdev_examine", 00:05:03.706 "bdev_wait_for_examine", 00:05:03.706 "bdev_set_options", 00:05:03.706 "notify_get_notifications", 00:05:03.706 "notify_get_types", 00:05:03.706 "accel_get_stats", 00:05:03.706 "accel_set_options", 00:05:03.706 "accel_set_driver", 00:05:03.706 "accel_crypto_key_destroy", 00:05:03.706 "accel_crypto_keys_get", 00:05:03.706 "accel_crypto_key_create", 00:05:03.706 "accel_assign_opc", 00:05:03.706 "accel_get_module_info", 00:05:03.706 "accel_get_opc_assignments", 00:05:03.706 "vmd_rescan", 00:05:03.706 "vmd_remove_device", 00:05:03.706 "vmd_enable", 00:05:03.706 "sock_set_default_impl", 00:05:03.706 "sock_impl_set_options", 00:05:03.706 "sock_impl_get_options", 00:05:03.706 "iobuf_get_stats", 00:05:03.706 "iobuf_set_options", 00:05:03.706 "framework_get_pci_devices", 00:05:03.706 "framework_get_config", 00:05:03.706 "framework_get_subsystems", 00:05:03.706 "trace_get_info", 00:05:03.706 "trace_get_tpoint_group_mask", 00:05:03.706 "trace_disable_tpoint_group", 00:05:03.706 "trace_enable_tpoint_group", 00:05:03.706 "trace_clear_tpoint_mask", 00:05:03.706 "trace_set_tpoint_mask", 00:05:03.706 "keyring_get_keys", 00:05:03.706 "spdk_get_version", 00:05:03.706 "rpc_get_methods" 00:05:03.706 ] 00:05:03.706 03:05:38 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:03.706 03:05:38 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:03.706 03:05:38 -- common/autotest_common.sh@10 -- # set +x 00:05:03.966 03:05:38 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:03.966 03:05:38 -- spdkcli/tcp.sh@38 -- # killprocess 1376638 00:05:03.966 03:05:38 -- common/autotest_common.sh@936 -- # '[' -z 1376638 ']' 00:05:03.966 03:05:38 -- common/autotest_common.sh@940 -- # kill -0 1376638 00:05:03.966 03:05:38 -- common/autotest_common.sh@941 -- # uname 
00:05:03.966 03:05:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:03.966 03:05:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1376638 00:05:03.966 03:05:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:03.966 03:05:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:03.966 03:05:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1376638' 00:05:03.966 killing process with pid 1376638 00:05:03.966 03:05:38 -- common/autotest_common.sh@955 -- # kill 1376638 00:05:03.966 03:05:38 -- common/autotest_common.sh@960 -- # wait 1376638 00:05:04.225 00:05:04.225 real 0m1.306s 00:05:04.225 user 0m2.250s 00:05:04.225 sys 0m0.458s 00:05:04.225 03:05:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:04.225 03:05:38 -- common/autotest_common.sh@10 -- # set +x 00:05:04.225 ************************************ 00:05:04.225 END TEST spdkcli_tcp 00:05:04.225 ************************************ 00:05:04.484 03:05:38 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:04.484 03:05:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:04.484 03:05:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:04.484 03:05:38 -- common/autotest_common.sh@10 -- # set +x 00:05:04.484 ************************************ 00:05:04.484 START TEST dpdk_mem_utility 00:05:04.484 ************************************ 00:05:04.484 03:05:38 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:04.484 * Looking for test storage... 
00:05:04.484 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:04.484 03:05:38 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:04.484 03:05:38 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1376845 00:05:04.484 03:05:38 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:04.484 03:05:38 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1376845 00:05:04.484 03:05:38 -- common/autotest_common.sh@817 -- # '[' -z 1376845 ']' 00:05:04.484 03:05:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:04.484 03:05:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:04.484 03:05:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:04.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:04.484 03:05:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:04.484 03:05:38 -- common/autotest_common.sh@10 -- # set +x 00:05:04.484 [2024-04-25 03:05:38.939598] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 
00:05:04.484 [2024-04-25 03:05:38.939710] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1376845 ] 00:05:04.484 EAL: No free 2048 kB hugepages reported on node 1 00:05:04.744 [2024-04-25 03:05:38.997575] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.744 [2024-04-25 03:05:39.110803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.005 03:05:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:05.005 03:05:39 -- common/autotest_common.sh@850 -- # return 0 00:05:05.005 03:05:39 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:05.005 03:05:39 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:05.005 03:05:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:05.005 03:05:39 -- common/autotest_common.sh@10 -- # set +x 00:05:05.005 { 00:05:05.005 "filename": "/tmp/spdk_mem_dump.txt" 00:05:05.005 } 00:05:05.005 03:05:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:05.005 03:05:39 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:05.005 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:05.005 1 heaps totaling size 814.000000 MiB 00:05:05.005 size: 814.000000 MiB heap id: 0 00:05:05.005 end heaps---------- 00:05:05.005 8 mempools totaling size 598.116089 MiB 00:05:05.005 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:05.005 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:05.005 size: 84.521057 MiB name: bdev_io_1376845 00:05:05.005 size: 51.011292 MiB name: evtpool_1376845 00:05:05.005 size: 50.003479 MiB name: msgpool_1376845 00:05:05.005 size: 21.763794 MiB name: PDU_Pool 00:05:05.005 size: 19.513306 MiB name: SCSI_TASK_Pool 
00:05:05.005 size: 0.026123 MiB name: Session_Pool 00:05:05.005 end mempools------- 00:05:05.005 6 memzones totaling size 4.142822 MiB 00:05:05.005 size: 1.000366 MiB name: RG_ring_0_1376845 00:05:05.005 size: 1.000366 MiB name: RG_ring_1_1376845 00:05:05.005 size: 1.000366 MiB name: RG_ring_4_1376845 00:05:05.005 size: 1.000366 MiB name: RG_ring_5_1376845 00:05:05.005 size: 0.125366 MiB name: RG_ring_2_1376845 00:05:05.005 size: 0.015991 MiB name: RG_ring_3_1376845 00:05:05.005 end memzones------- 00:05:05.005 03:05:39 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:05.005 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:05.005 list of free elements. size: 12.519348 MiB 00:05:05.005 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:05.005 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:05.005 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:05.005 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:05.005 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:05.005 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:05.005 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:05.005 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:05.005 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:05.005 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:05.005 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:05.005 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:05.005 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:05.005 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:05.006 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:05.006 list of standard malloc elements. 
size: 199.218079 MiB 00:05:05.006 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:05.006 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:05.006 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:05.006 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:05.006 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:05.006 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:05.006 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:05.006 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:05.006 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:05.006 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:05.006 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:05.006 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:05.006 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:05.006 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:05.006 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:05.006 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:05.006 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:05.006 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:05.006 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:05.006 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:05.006 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:05.006 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:05.006 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:05.006 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:05.006 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:05.006 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:05.006 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:05.006 element at 
address: 0x20000b27da00 with size: 0.000183 MiB 00:05:05.006 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:05.006 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:05.006 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:05.006 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:05.006 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:05.006 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:05.006 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:05.006 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:05.006 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:05.006 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:05.006 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:05.006 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:05.006 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:05.006 list of memzone associated elements. 
size: 602.262573 MiB 00:05:05.006 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:05.006 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:05.006 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:05.006 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:05.006 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:05.006 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1376845_0 00:05:05.006 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:05.006 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1376845_0 00:05:05.006 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:05.006 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1376845_0 00:05:05.006 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:05.006 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:05.006 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:05.006 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:05.006 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:05.006 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1376845 00:05:05.006 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:05.006 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1376845 00:05:05.006 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:05.006 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1376845 00:05:05.006 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:05.006 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:05.006 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:05.006 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:05.006 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:05.006 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:05.006 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:05.006 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:05.006 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:05.006 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1376845 00:05:05.006 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:05.006 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1376845 00:05:05.006 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:05.006 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1376845 00:05:05.006 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:05.006 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1376845 00:05:05.006 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:05.006 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1376845 00:05:05.006 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:05.006 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:05.006 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:05.006 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:05.006 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:05.006 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:05.006 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:05.006 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1376845 00:05:05.006 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:05.006 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:05.006 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:05.006 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:05.006 element at address: 0x200003adb5c0 with size: 0.016113 
MiB 00:05:05.006 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1376845 00:05:05.006 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:05.006 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:05.006 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:05.006 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1376845 00:05:05.006 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:05.006 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1376845 00:05:05.006 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:05.006 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:05.006 03:05:39 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:05.006 03:05:39 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1376845 00:05:05.006 03:05:39 -- common/autotest_common.sh@936 -- # '[' -z 1376845 ']' 00:05:05.006 03:05:39 -- common/autotest_common.sh@940 -- # kill -0 1376845 00:05:05.006 03:05:39 -- common/autotest_common.sh@941 -- # uname 00:05:05.006 03:05:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:05.006 03:05:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1376845 00:05:05.267 03:05:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:05.267 03:05:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:05.267 03:05:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1376845' 00:05:05.267 killing process with pid 1376845 00:05:05.267 03:05:39 -- common/autotest_common.sh@955 -- # kill 1376845 00:05:05.267 03:05:39 -- common/autotest_common.sh@960 -- # wait 1376845 00:05:05.526 00:05:05.526 real 0m1.152s 00:05:05.526 user 0m1.125s 00:05:05.526 sys 0m0.402s 00:05:05.526 03:05:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:05.526 03:05:39 -- common/autotest_common.sh@10 -- # set +x 00:05:05.526 
************************************ 00:05:05.526 END TEST dpdk_mem_utility 00:05:05.526 ************************************ 00:05:05.526 03:05:40 -- spdk/autotest.sh@177 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:05.526 03:05:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:05.526 03:05:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:05.526 03:05:40 -- common/autotest_common.sh@10 -- # set +x 00:05:05.784 ************************************ 00:05:05.784 START TEST event 00:05:05.784 ************************************ 00:05:05.784 03:05:40 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:05.784 * Looking for test storage... 00:05:05.784 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:05.784 03:05:40 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:05.784 03:05:40 -- bdev/nbd_common.sh@6 -- # set -e 00:05:05.784 03:05:40 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:05.784 03:05:40 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:05.784 03:05:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:05.784 03:05:40 -- common/autotest_common.sh@10 -- # set +x 00:05:05.784 ************************************ 00:05:05.784 START TEST event_perf 00:05:05.784 ************************************ 00:05:05.784 03:05:40 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:05.784 Running I/O for 1 seconds...[2024-04-25 03:05:40.274034] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 
00:05:05.784 [2024-04-25 03:05:40.274094] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1377167 ] 00:05:06.044 EAL: No free 2048 kB hugepages reported on node 1 00:05:06.044 [2024-04-25 03:05:40.340998] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:06.044 [2024-04-25 03:05:40.460368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:06.044 [2024-04-25 03:05:40.460433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:06.044 [2024-04-25 03:05:40.460524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:06.044 [2024-04-25 03:05:40.460527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.424 Running I/O for 1 seconds... 00:05:07.424 lcore 0: 237282 00:05:07.424 lcore 1: 237279 00:05:07.424 lcore 2: 237280 00:05:07.424 lcore 3: 237281 00:05:07.424 done. 
00:05:07.424 00:05:07.424 real 0m1.325s 00:05:07.424 user 0m4.230s 00:05:07.424 sys 0m0.088s 00:05:07.424 03:05:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:07.424 03:05:41 -- common/autotest_common.sh@10 -- # set +x 00:05:07.424 ************************************ 00:05:07.424 END TEST event_perf 00:05:07.424 ************************************ 00:05:07.424 03:05:41 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:07.424 03:05:41 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:07.424 03:05:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:07.424 03:05:41 -- common/autotest_common.sh@10 -- # set +x 00:05:07.424 ************************************ 00:05:07.424 START TEST event_reactor 00:05:07.424 ************************************ 00:05:07.424 03:05:41 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:07.424 [2024-04-25 03:05:41.723832] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 
00:05:07.424 [2024-04-25 03:05:41.723896] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1377343 ] 00:05:07.424 EAL: No free 2048 kB hugepages reported on node 1 00:05:07.424 [2024-04-25 03:05:41.785204] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.424 [2024-04-25 03:05:41.902269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.799 test_start 00:05:08.799 oneshot 00:05:08.799 tick 100 00:05:08.799 tick 100 00:05:08.799 tick 250 00:05:08.799 tick 100 00:05:08.799 tick 100 00:05:08.799 tick 100 00:05:08.799 tick 250 00:05:08.799 tick 500 00:05:08.799 tick 100 00:05:08.799 tick 100 00:05:08.799 tick 250 00:05:08.799 tick 100 00:05:08.799 tick 100 00:05:08.799 test_end 00:05:08.799 00:05:08.799 real 0m1.315s 00:05:08.799 user 0m1.226s 00:05:08.799 sys 0m0.084s 00:05:08.799 03:05:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:08.799 03:05:43 -- common/autotest_common.sh@10 -- # set +x 00:05:08.799 ************************************ 00:05:08.799 END TEST event_reactor 00:05:08.799 ************************************ 00:05:08.799 03:05:43 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:08.799 03:05:43 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:08.799 03:05:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:08.799 03:05:43 -- common/autotest_common.sh@10 -- # set +x 00:05:08.799 ************************************ 00:05:08.799 START TEST event_reactor_perf 00:05:08.799 ************************************ 00:05:08.799 03:05:43 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:08.799 [2024-04-25 03:05:43.163437] 
Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:05:08.800 [2024-04-25 03:05:43.163504] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1377503 ] 00:05:08.800 EAL: No free 2048 kB hugepages reported on node 1 00:05:08.800 [2024-04-25 03:05:43.225502] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.057 [2024-04-25 03:05:43.342853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.991 test_start 00:05:09.991 test_end 00:05:09.991 Performance: 354325 events per second 00:05:09.991 00:05:09.991 real 0m1.316s 00:05:09.991 user 0m1.227s 00:05:09.991 sys 0m0.084s 00:05:09.991 03:05:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:09.991 03:05:44 -- common/autotest_common.sh@10 -- # set +x 00:05:09.991 ************************************ 00:05:09.991 END TEST event_reactor_perf 00:05:09.991 ************************************ 00:05:09.991 03:05:44 -- event/event.sh@49 -- # uname -s 00:05:09.991 03:05:44 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:09.991 03:05:44 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:09.991 03:05:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:09.992 03:05:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:09.992 03:05:44 -- common/autotest_common.sh@10 -- # set +x 00:05:10.250 ************************************ 00:05:10.250 START TEST event_scheduler 00:05:10.250 ************************************ 00:05:10.250 03:05:44 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:10.250 * Looking for test storage... 
00:05:10.250 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:10.250 03:05:44 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:10.250 03:05:44 -- scheduler/scheduler.sh@35 -- # scheduler_pid=1377694 00:05:10.250 03:05:44 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:10.250 03:05:44 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:10.250 03:05:44 -- scheduler/scheduler.sh@37 -- # waitforlisten 1377694 00:05:10.250 03:05:44 -- common/autotest_common.sh@817 -- # '[' -z 1377694 ']' 00:05:10.250 03:05:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.250 03:05:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:10.250 03:05:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:10.250 03:05:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:10.250 03:05:44 -- common/autotest_common.sh@10 -- # set +x 00:05:10.250 [2024-04-25 03:05:44.682592] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 
00:05:10.250 [2024-04-25 03:05:44.682713] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1377694 ] 00:05:10.250 EAL: No free 2048 kB hugepages reported on node 1 00:05:10.250 [2024-04-25 03:05:44.744284] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:10.509 [2024-04-25 03:05:44.857452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.509 [2024-04-25 03:05:44.857507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:10.509 [2024-04-25 03:05:44.857574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:10.509 [2024-04-25 03:05:44.857578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:10.509 03:05:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:10.509 03:05:44 -- common/autotest_common.sh@850 -- # return 0 00:05:10.509 03:05:44 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:10.509 03:05:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:10.509 03:05:44 -- common/autotest_common.sh@10 -- # set +x 00:05:10.509 POWER: Env isn't set yet! 00:05:10.509 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:10.509 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_available_frequencies 00:05:10.509 POWER: Cannot get available frequencies of lcore 0 00:05:10.509 POWER: Attempting to initialise PSTAT power management... 
00:05:10.509 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:05:10.509 POWER: Initialized successfully for lcore 0 power management 00:05:10.509 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:05:10.509 POWER: Initialized successfully for lcore 1 power management 00:05:10.509 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:05:10.509 POWER: Initialized successfully for lcore 2 power management 00:05:10.509 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:05:10.509 POWER: Initialized successfully for lcore 3 power management 00:05:10.509 03:05:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:10.509 03:05:44 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:10.509 03:05:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:10.509 03:05:44 -- common/autotest_common.sh@10 -- # set +x 00:05:10.767 [2024-04-25 03:05:45.046726] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:05:10.767 03:05:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:10.767 03:05:45 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:10.767 03:05:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:10.767 03:05:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:10.767 03:05:45 -- common/autotest_common.sh@10 -- # set +x 00:05:10.767 ************************************ 00:05:10.767 START TEST scheduler_create_thread 00:05:10.767 ************************************ 00:05:10.767 03:05:45 -- common/autotest_common.sh@1111 -- # scheduler_create_thread 00:05:10.767 03:05:45 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:10.767 03:05:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:10.767 03:05:45 -- common/autotest_common.sh@10 -- # set +x 00:05:10.767 2 00:05:10.767 03:05:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:10.767 03:05:45 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:10.767 03:05:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:10.767 03:05:45 -- common/autotest_common.sh@10 -- # set +x 00:05:10.767 3 00:05:10.767 03:05:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:10.767 03:05:45 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:10.767 03:05:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:10.767 03:05:45 -- common/autotest_common.sh@10 -- # set +x 00:05:10.767 4 00:05:10.767 03:05:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:10.767 03:05:45 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:10.767 03:05:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:10.767 
03:05:45 -- common/autotest_common.sh@10 -- # set +x 00:05:10.767 5 00:05:10.768 03:05:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:10.768 03:05:45 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:10.768 03:05:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:10.768 03:05:45 -- common/autotest_common.sh@10 -- # set +x 00:05:10.768 6 00:05:10.768 03:05:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:10.768 03:05:45 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:10.768 03:05:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:10.768 03:05:45 -- common/autotest_common.sh@10 -- # set +x 00:05:10.768 7 00:05:10.768 03:05:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:10.768 03:05:45 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:10.768 03:05:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:10.768 03:05:45 -- common/autotest_common.sh@10 -- # set +x 00:05:10.768 8 00:05:10.768 03:05:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:10.768 03:05:45 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:10.768 03:05:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:10.768 03:05:45 -- common/autotest_common.sh@10 -- # set +x 00:05:10.768 9 00:05:10.768 03:05:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:10.768 03:05:45 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:10.768 03:05:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:10.768 03:05:45 -- common/autotest_common.sh@10 -- # set +x 00:05:10.768 10 00:05:10.768 03:05:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
00:05:10.768 03:05:45 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:10.768 03:05:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:10.768 03:05:45 -- common/autotest_common.sh@10 -- # set +x 00:05:10.768 03:05:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:10.768 03:05:45 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:10.768 03:05:45 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:10.768 03:05:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:10.768 03:05:45 -- common/autotest_common.sh@10 -- # set +x 00:05:11.334 03:05:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:11.334 03:05:45 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:11.334 03:05:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:11.334 03:05:45 -- common/autotest_common.sh@10 -- # set +x 00:05:12.712 03:05:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:12.712 03:05:47 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:12.712 03:05:47 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:12.712 03:05:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:12.712 03:05:47 -- common/autotest_common.sh@10 -- # set +x 00:05:14.086 03:05:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:14.086 00:05:14.086 real 0m3.098s 00:05:14.086 user 0m0.009s 00:05:14.086 sys 0m0.004s 00:05:14.086 03:05:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:14.086 03:05:48 -- common/autotest_common.sh@10 -- # set +x 00:05:14.086 ************************************ 00:05:14.086 END TEST scheduler_create_thread 00:05:14.086 ************************************ 00:05:14.086 03:05:48 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:14.086 03:05:48 -- 
scheduler/scheduler.sh@46 -- # killprocess 1377694 00:05:14.086 03:05:48 -- common/autotest_common.sh@936 -- # '[' -z 1377694 ']' 00:05:14.086 03:05:48 -- common/autotest_common.sh@940 -- # kill -0 1377694 00:05:14.086 03:05:48 -- common/autotest_common.sh@941 -- # uname 00:05:14.086 03:05:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:14.086 03:05:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1377694 00:05:14.086 03:05:48 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:05:14.086 03:05:48 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:05:14.086 03:05:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1377694' 00:05:14.086 killing process with pid 1377694 00:05:14.086 03:05:48 -- common/autotest_common.sh@955 -- # kill 1377694 00:05:14.086 03:05:48 -- common/autotest_common.sh@960 -- # wait 1377694 00:05:14.344 [2024-04-25 03:05:48.634937] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:05:14.344 POWER: Power management governor of lcore 0 has been set to 'userspace' successfully 00:05:14.344 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:05:14.344 POWER: Power management governor of lcore 1 has been set to 'schedutil' successfully 00:05:14.344 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:05:14.344 POWER: Power management governor of lcore 2 has been set to 'schedutil' successfully 00:05:14.344 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:05:14.344 POWER: Power management governor of lcore 3 has been set to 'schedutil' successfully 00:05:14.344 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:05:14.603 00:05:14.603 real 0m4.340s 00:05:14.603 user 0m7.084s 00:05:14.603 sys 0m0.394s 00:05:14.603 03:05:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:14.603 03:05:48 -- common/autotest_common.sh@10 -- # set +x 00:05:14.603 ************************************ 00:05:14.603 END TEST event_scheduler 00:05:14.603 ************************************ 00:05:14.603 03:05:48 -- event/event.sh@51 -- # modprobe -n nbd 00:05:14.603 03:05:48 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:14.603 03:05:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:14.603 03:05:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:14.603 03:05:48 -- common/autotest_common.sh@10 -- # set +x 00:05:14.603 ************************************ 00:05:14.603 START TEST app_repeat 00:05:14.603 ************************************ 00:05:14.603 03:05:49 -- common/autotest_common.sh@1111 -- # app_repeat_test 00:05:14.603 03:05:49 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.603 03:05:49 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.603 
03:05:49 -- event/event.sh@13 -- # local nbd_list 00:05:14.603 03:05:49 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:14.603 03:05:49 -- event/event.sh@14 -- # local bdev_list 00:05:14.603 03:05:49 -- event/event.sh@15 -- # local repeat_times=4 00:05:14.603 03:05:49 -- event/event.sh@17 -- # modprobe nbd 00:05:14.603 03:05:49 -- event/event.sh@19 -- # repeat_pid=1378287 00:05:14.603 03:05:49 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:14.603 03:05:49 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:14.603 03:05:49 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1378287' 00:05:14.603 Process app_repeat pid: 1378287 00:05:14.603 03:05:49 -- event/event.sh@23 -- # for i in {0..2} 00:05:14.603 03:05:49 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:14.603 spdk_app_start Round 0 00:05:14.603 03:05:49 -- event/event.sh@25 -- # waitforlisten 1378287 /var/tmp/spdk-nbd.sock 00:05:14.603 03:05:49 -- common/autotest_common.sh@817 -- # '[' -z 1378287 ']' 00:05:14.603 03:05:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:14.603 03:05:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:14.603 03:05:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:14.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:14.603 03:05:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:14.603 03:05:49 -- common/autotest_common.sh@10 -- # set +x 00:05:14.603 [2024-04-25 03:05:49.081988] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 
00:05:14.603 [2024-04-25 03:05:49.082056] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1378287 ] 00:05:14.861 EAL: No free 2048 kB hugepages reported on node 1 00:05:14.861 [2024-04-25 03:05:49.149572] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:14.861 [2024-04-25 03:05:49.270609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:14.862 [2024-04-25 03:05:49.270611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.120 03:05:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:15.120 03:05:49 -- common/autotest_common.sh@850 -- # return 0 00:05:15.120 03:05:49 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:15.378 Malloc0 00:05:15.378 03:05:49 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:15.637 Malloc1 00:05:15.637 03:05:49 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:15.637 03:05:49 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.637 03:05:49 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:15.637 03:05:49 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:15.637 03:05:49 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.637 03:05:49 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:15.637 03:05:49 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:15.637 03:05:49 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.637 03:05:49 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 
'Malloc1') 00:05:15.637 03:05:49 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:15.637 03:05:49 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.637 03:05:49 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:15.637 03:05:49 -- bdev/nbd_common.sh@12 -- # local i 00:05:15.637 03:05:49 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:15.637 03:05:49 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:15.637 03:05:49 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:15.895 /dev/nbd0 00:05:15.895 03:05:50 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:15.895 03:05:50 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:15.895 03:05:50 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:05:15.895 03:05:50 -- common/autotest_common.sh@855 -- # local i 00:05:15.895 03:05:50 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:15.895 03:05:50 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:15.895 03:05:50 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:05:15.895 03:05:50 -- common/autotest_common.sh@859 -- # break 00:05:15.895 03:05:50 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:15.895 03:05:50 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:15.895 03:05:50 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:15.895 1+0 records in 00:05:15.895 1+0 records out 00:05:15.895 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000163994 s, 25.0 MB/s 00:05:15.895 03:05:50 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:15.895 03:05:50 -- common/autotest_common.sh@872 -- # size=4096 00:05:15.895 03:05:50 -- common/autotest_common.sh@873 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:15.895 03:05:50 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:15.895 03:05:50 -- common/autotest_common.sh@875 -- # return 0 00:05:15.895 03:05:50 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:15.895 03:05:50 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:15.895 03:05:50 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:16.153 /dev/nbd1 00:05:16.153 03:05:50 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:16.153 03:05:50 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:16.153 03:05:50 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:05:16.153 03:05:50 -- common/autotest_common.sh@855 -- # local i 00:05:16.153 03:05:50 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:16.153 03:05:50 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:16.153 03:05:50 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:05:16.153 03:05:50 -- common/autotest_common.sh@859 -- # break 00:05:16.153 03:05:50 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:16.153 03:05:50 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:16.153 03:05:50 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:16.153 1+0 records in 00:05:16.153 1+0 records out 00:05:16.153 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000216779 s, 18.9 MB/s 00:05:16.153 03:05:50 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:16.153 03:05:50 -- common/autotest_common.sh@872 -- # size=4096 00:05:16.153 03:05:50 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:16.153 03:05:50 -- common/autotest_common.sh@874 -- # '[' 
4096 '!=' 0 ']' 00:05:16.153 03:05:50 -- common/autotest_common.sh@875 -- # return 0 00:05:16.153 03:05:50 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:16.153 03:05:50 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:16.153 03:05:50 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:16.153 03:05:50 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.153 03:05:50 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:16.411 03:05:50 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:16.411 { 00:05:16.411 "nbd_device": "/dev/nbd0", 00:05:16.411 "bdev_name": "Malloc0" 00:05:16.411 }, 00:05:16.411 { 00:05:16.411 "nbd_device": "/dev/nbd1", 00:05:16.411 "bdev_name": "Malloc1" 00:05:16.411 } 00:05:16.411 ]' 00:05:16.411 03:05:50 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:16.411 { 00:05:16.411 "nbd_device": "/dev/nbd0", 00:05:16.411 "bdev_name": "Malloc0" 00:05:16.411 }, 00:05:16.411 { 00:05:16.411 "nbd_device": "/dev/nbd1", 00:05:16.411 "bdev_name": "Malloc1" 00:05:16.411 } 00:05:16.411 ]' 00:05:16.411 03:05:50 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:16.411 03:05:50 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:16.411 /dev/nbd1' 00:05:16.411 03:05:50 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:16.411 /dev/nbd1' 00:05:16.411 03:05:50 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:16.411 03:05:50 -- bdev/nbd_common.sh@65 -- # count=2 00:05:16.411 03:05:50 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:16.411 03:05:50 -- bdev/nbd_common.sh@95 -- # count=2 00:05:16.411 03:05:50 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:16.411 03:05:50 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:16.411 03:05:50 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.411 03:05:50 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:16.411 
03:05:50 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:16.411 03:05:50 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:16.411 03:05:50 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:16.411 03:05:50 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:16.411 256+0 records in 00:05:16.411 256+0 records out 00:05:16.411 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00502335 s, 209 MB/s 00:05:16.411 03:05:50 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:16.411 03:05:50 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:16.411 256+0 records in 00:05:16.411 256+0 records out 00:05:16.411 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0239939 s, 43.7 MB/s 00:05:16.411 03:05:50 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:16.411 03:05:50 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:16.411 256+0 records in 00:05:16.412 256+0 records out 00:05:16.412 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.022687 s, 46.2 MB/s 00:05:16.412 03:05:50 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:16.412 03:05:50 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.412 03:05:50 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:16.412 03:05:50 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:16.412 03:05:50 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:16.412 03:05:50 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:16.412 03:05:50 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:16.412 03:05:50 
-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:16.412 03:05:50 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:16.412 03:05:50 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:16.412 03:05:50 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:16.412 03:05:50 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:16.412 03:05:50 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:16.412 03:05:50 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.412 03:05:50 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.412 03:05:50 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:16.412 03:05:50 -- bdev/nbd_common.sh@51 -- # local i 00:05:16.412 03:05:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:16.412 03:05:50 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:16.670 03:05:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:16.670 03:05:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:16.670 03:05:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:16.670 03:05:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:16.670 03:05:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:16.670 03:05:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:16.670 03:05:51 -- bdev/nbd_common.sh@41 -- # break 00:05:16.670 03:05:51 -- bdev/nbd_common.sh@45 -- # return 0 00:05:16.671 03:05:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:16.671 03:05:51 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_stop_disk /dev/nbd1 00:05:16.928 03:05:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:16.928 03:05:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:16.928 03:05:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:16.928 03:05:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:16.929 03:05:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:16.929 03:05:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:16.929 03:05:51 -- bdev/nbd_common.sh@41 -- # break 00:05:16.929 03:05:51 -- bdev/nbd_common.sh@45 -- # return 0 00:05:16.929 03:05:51 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:16.929 03:05:51 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.929 03:05:51 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:17.186 03:05:51 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:17.186 03:05:51 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:17.186 03:05:51 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:17.186 03:05:51 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:17.186 03:05:51 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:17.186 03:05:51 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:17.186 03:05:51 -- bdev/nbd_common.sh@65 -- # true 00:05:17.186 03:05:51 -- bdev/nbd_common.sh@65 -- # count=0 00:05:17.186 03:05:51 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:17.186 03:05:51 -- bdev/nbd_common.sh@104 -- # count=0 00:05:17.186 03:05:51 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:17.186 03:05:51 -- bdev/nbd_common.sh@109 -- # return 0 00:05:17.186 03:05:51 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:17.445 03:05:51 -- event/event.sh@35 -- # sleep 3 00:05:17.702 [2024-04-25 03:05:52.172180] app.c: 828:spdk_app_start: *NOTICE*: Total 
cores available: 2 00:05:17.960 [2024-04-25 03:05:52.285542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.960 [2024-04-25 03:05:52.285545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:17.960 [2024-04-25 03:05:52.347229] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:17.960 [2024-04-25 03:05:52.347305] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:20.483 03:05:54 -- event/event.sh@23 -- # for i in {0..2} 00:05:20.483 03:05:54 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:20.483 spdk_app_start Round 1 00:05:20.483 03:05:54 -- event/event.sh@25 -- # waitforlisten 1378287 /var/tmp/spdk-nbd.sock 00:05:20.483 03:05:54 -- common/autotest_common.sh@817 -- # '[' -z 1378287 ']' 00:05:20.483 03:05:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:20.483 03:05:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:20.483 03:05:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:20.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:20.483 03:05:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:20.483 03:05:54 -- common/autotest_common.sh@10 -- # set +x 00:05:20.740 03:05:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:20.740 03:05:55 -- common/autotest_common.sh@850 -- # return 0 00:05:20.740 03:05:55 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:20.998 Malloc0 00:05:20.998 03:05:55 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:21.256 Malloc1 00:05:21.256 03:05:55 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:21.256 03:05:55 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.256 03:05:55 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:21.256 03:05:55 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:21.256 03:05:55 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.256 03:05:55 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:21.256 03:05:55 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:21.256 03:05:55 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.256 03:05:55 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:21.256 03:05:55 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:21.256 03:05:55 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.256 03:05:55 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:21.256 03:05:55 -- bdev/nbd_common.sh@12 -- # local i 00:05:21.256 03:05:55 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:21.256 03:05:55 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:21.256 03:05:55 -- bdev/nbd_common.sh@15 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:21.514 /dev/nbd0 00:05:21.514 03:05:55 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:21.514 03:05:55 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:21.514 03:05:55 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:05:21.514 03:05:55 -- common/autotest_common.sh@855 -- # local i 00:05:21.514 03:05:55 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:21.514 03:05:55 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:21.514 03:05:55 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:05:21.514 03:05:55 -- common/autotest_common.sh@859 -- # break 00:05:21.514 03:05:55 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:21.514 03:05:55 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:21.514 03:05:55 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:21.514 1+0 records in 00:05:21.514 1+0 records out 00:05:21.514 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000206861 s, 19.8 MB/s 00:05:21.514 03:05:55 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:21.514 03:05:55 -- common/autotest_common.sh@872 -- # size=4096 00:05:21.514 03:05:55 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:21.514 03:05:55 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:21.514 03:05:55 -- common/autotest_common.sh@875 -- # return 0 00:05:21.514 03:05:55 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:21.514 03:05:55 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:21.514 03:05:55 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 
00:05:21.773 /dev/nbd1 00:05:21.773 03:05:56 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:21.773 03:05:56 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:21.773 03:05:56 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:05:21.773 03:05:56 -- common/autotest_common.sh@855 -- # local i 00:05:21.773 03:05:56 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:21.773 03:05:56 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:21.773 03:05:56 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:05:21.773 03:05:56 -- common/autotest_common.sh@859 -- # break 00:05:21.773 03:05:56 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:21.773 03:05:56 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:21.773 03:05:56 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:21.773 1+0 records in 00:05:21.773 1+0 records out 00:05:21.773 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000205197 s, 20.0 MB/s 00:05:21.773 03:05:56 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:21.773 03:05:56 -- common/autotest_common.sh@872 -- # size=4096 00:05:21.773 03:05:56 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:21.773 03:05:56 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:21.773 03:05:56 -- common/autotest_common.sh@875 -- # return 0 00:05:21.773 03:05:56 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:21.773 03:05:56 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:21.773 03:05:56 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:21.773 03:05:56 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.773 03:05:56 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_get_disks 00:05:22.031 03:05:56 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:22.031 { 00:05:22.031 "nbd_device": "/dev/nbd0", 00:05:22.031 "bdev_name": "Malloc0" 00:05:22.031 }, 00:05:22.031 { 00:05:22.031 "nbd_device": "/dev/nbd1", 00:05:22.031 "bdev_name": "Malloc1" 00:05:22.031 } 00:05:22.031 ]' 00:05:22.031 03:05:56 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:22.031 { 00:05:22.031 "nbd_device": "/dev/nbd0", 00:05:22.031 "bdev_name": "Malloc0" 00:05:22.031 }, 00:05:22.031 { 00:05:22.031 "nbd_device": "/dev/nbd1", 00:05:22.031 "bdev_name": "Malloc1" 00:05:22.031 } 00:05:22.031 ]' 00:05:22.031 03:05:56 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:22.031 03:05:56 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:22.031 /dev/nbd1' 00:05:22.031 03:05:56 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:22.031 /dev/nbd1' 00:05:22.031 03:05:56 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:22.031 03:05:56 -- bdev/nbd_common.sh@65 -- # count=2 00:05:22.031 03:05:56 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:22.031 03:05:56 -- bdev/nbd_common.sh@95 -- # count=2 00:05:22.031 03:05:56 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:22.031 03:05:56 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:22.031 03:05:56 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.031 03:05:56 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:22.031 03:05:56 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:22.031 03:05:56 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:22.031 03:05:56 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:22.031 03:05:56 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:22.031 256+0 records in 00:05:22.031 256+0 records out 00:05:22.031 
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00506915 s, 207 MB/s 00:05:22.031 03:05:56 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:22.031 03:05:56 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:22.031 256+0 records in 00:05:22.031 256+0 records out 00:05:22.031 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0201294 s, 52.1 MB/s 00:05:22.031 03:05:56 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:22.031 03:05:56 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:22.031 256+0 records in 00:05:22.031 256+0 records out 00:05:22.031 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0246572 s, 42.5 MB/s 00:05:22.031 03:05:56 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:22.031 03:05:56 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.032 03:05:56 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:22.032 03:05:56 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:22.032 03:05:56 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:22.032 03:05:56 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:22.032 03:05:56 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:22.032 03:05:56 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:22.032 03:05:56 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:22.290 03:05:56 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:22.290 03:05:56 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:22.290 03:05:56 -- bdev/nbd_common.sh@85 -- # rm 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:22.290 03:05:56 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:22.290 03:05:56 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.290 03:05:56 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.290 03:05:56 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:22.290 03:05:56 -- bdev/nbd_common.sh@51 -- # local i 00:05:22.290 03:05:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:22.290 03:05:56 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:22.548 03:05:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:22.548 03:05:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:22.548 03:05:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:22.548 03:05:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:22.548 03:05:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:22.548 03:05:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:22.548 03:05:56 -- bdev/nbd_common.sh@41 -- # break 00:05:22.548 03:05:56 -- bdev/nbd_common.sh@45 -- # return 0 00:05:22.548 03:05:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:22.548 03:05:56 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:22.806 03:05:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:22.806 03:05:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:22.806 03:05:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:22.806 03:05:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:22.806 03:05:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:22.806 03:05:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:22.806 03:05:57 -- 
bdev/nbd_common.sh@41 -- # break 00:05:22.806 03:05:57 -- bdev/nbd_common.sh@45 -- # return 0 00:05:22.806 03:05:57 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:22.806 03:05:57 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.806 03:05:57 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:23.065 03:05:57 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:23.065 03:05:57 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:23.065 03:05:57 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:23.065 03:05:57 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:23.065 03:05:57 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:23.065 03:05:57 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:23.065 03:05:57 -- bdev/nbd_common.sh@65 -- # true 00:05:23.065 03:05:57 -- bdev/nbd_common.sh@65 -- # count=0 00:05:23.065 03:05:57 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:23.065 03:05:57 -- bdev/nbd_common.sh@104 -- # count=0 00:05:23.065 03:05:57 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:23.065 03:05:57 -- bdev/nbd_common.sh@109 -- # return 0 00:05:23.065 03:05:57 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:23.323 03:05:57 -- event/event.sh@35 -- # sleep 3 00:05:23.582 [2024-04-25 03:05:57.894717] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:23.582 [2024-04-25 03:05:58.007552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:23.582 [2024-04-25 03:05:58.007556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.582 [2024-04-25 03:05:58.070423] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
00:05:23.582 [2024-04-25 03:05:58.070501] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:26.866 03:06:00 -- event/event.sh@23 -- # for i in {0..2} 00:05:26.866 03:06:00 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:26.866 spdk_app_start Round 2 00:05:26.866 03:06:00 -- event/event.sh@25 -- # waitforlisten 1378287 /var/tmp/spdk-nbd.sock 00:05:26.866 03:06:00 -- common/autotest_common.sh@817 -- # '[' -z 1378287 ']' 00:05:26.866 03:06:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:26.866 03:06:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:26.866 03:06:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:26.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:26.866 03:06:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:26.866 03:06:00 -- common/autotest_common.sh@10 -- # set +x 00:05:26.866 03:06:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:26.866 03:06:00 -- common/autotest_common.sh@850 -- # return 0 00:05:26.866 03:06:00 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:26.866 Malloc0 00:05:26.866 03:06:01 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:27.124 Malloc1 00:05:27.124 03:06:01 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:27.124 03:06:01 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.124 03:06:01 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:27.124 03:06:01 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:27.124 03:06:01 -- 
bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.124 03:06:01 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:27.124 03:06:01 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:27.124 03:06:01 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.124 03:06:01 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:27.124 03:06:01 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:27.124 03:06:01 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.124 03:06:01 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:27.124 03:06:01 -- bdev/nbd_common.sh@12 -- # local i 00:05:27.124 03:06:01 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:27.124 03:06:01 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:27.124 03:06:01 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:27.382 /dev/nbd0 00:05:27.382 03:06:01 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:27.382 03:06:01 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:27.382 03:06:01 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:05:27.382 03:06:01 -- common/autotest_common.sh@855 -- # local i 00:05:27.382 03:06:01 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:27.382 03:06:01 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:27.382 03:06:01 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:05:27.382 03:06:01 -- common/autotest_common.sh@859 -- # break 00:05:27.382 03:06:01 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:27.382 03:06:01 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:27.382 03:06:01 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:27.382 1+0 records in 00:05:27.382 
1+0 records out 00:05:27.382 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000182341 s, 22.5 MB/s 00:05:27.382 03:06:01 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:27.382 03:06:01 -- common/autotest_common.sh@872 -- # size=4096 00:05:27.382 03:06:01 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:27.382 03:06:01 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:27.382 03:06:01 -- common/autotest_common.sh@875 -- # return 0 00:05:27.382 03:06:01 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:27.382 03:06:01 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:27.382 03:06:01 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:27.640 /dev/nbd1 00:05:27.640 03:06:01 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:27.640 03:06:01 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:27.640 03:06:01 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:05:27.640 03:06:01 -- common/autotest_common.sh@855 -- # local i 00:05:27.640 03:06:01 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:27.640 03:06:01 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:27.640 03:06:01 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:05:27.640 03:06:01 -- common/autotest_common.sh@859 -- # break 00:05:27.640 03:06:01 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:27.640 03:06:01 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:27.640 03:06:01 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:27.640 1+0 records in 00:05:27.640 1+0 records out 00:05:27.640 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000168204 s, 24.4 MB/s 00:05:27.640 03:06:01 -- 
common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:27.640 03:06:01 -- common/autotest_common.sh@872 -- # size=4096 00:05:27.640 03:06:01 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:27.640 03:06:01 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:27.640 03:06:01 -- common/autotest_common.sh@875 -- # return 0 00:05:27.640 03:06:01 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:27.640 03:06:01 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:27.640 03:06:01 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:27.640 03:06:01 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.640 03:06:01 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:27.899 03:06:02 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:27.899 { 00:05:27.899 "nbd_device": "/dev/nbd0", 00:05:27.899 "bdev_name": "Malloc0" 00:05:27.899 }, 00:05:27.899 { 00:05:27.899 "nbd_device": "/dev/nbd1", 00:05:27.899 "bdev_name": "Malloc1" 00:05:27.899 } 00:05:27.899 ]' 00:05:27.899 03:06:02 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:27.899 { 00:05:27.899 "nbd_device": "/dev/nbd0", 00:05:27.899 "bdev_name": "Malloc0" 00:05:27.899 }, 00:05:27.899 { 00:05:27.899 "nbd_device": "/dev/nbd1", 00:05:27.899 "bdev_name": "Malloc1" 00:05:27.899 } 00:05:27.899 ]' 00:05:27.899 03:06:02 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:27.899 03:06:02 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:27.899 /dev/nbd1' 00:05:27.899 03:06:02 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:27.899 /dev/nbd1' 00:05:27.899 03:06:02 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:27.899 03:06:02 -- bdev/nbd_common.sh@65 -- # count=2 00:05:27.899 03:06:02 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:27.899 
03:06:02 -- bdev/nbd_common.sh@95 -- # count=2 00:05:27.899 03:06:02 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:27.899 03:06:02 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:27.899 03:06:02 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.899 03:06:02 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:27.899 03:06:02 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:27.899 03:06:02 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:27.899 03:06:02 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:27.899 03:06:02 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:27.899 256+0 records in 00:05:27.899 256+0 records out 00:05:27.899 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00498582 s, 210 MB/s 00:05:27.899 03:06:02 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:27.899 03:06:02 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:27.899 256+0 records in 00:05:27.899 256+0 records out 00:05:27.899 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0238158 s, 44.0 MB/s 00:05:27.899 03:06:02 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:27.899 03:06:02 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:27.899 256+0 records in 00:05:27.899 256+0 records out 00:05:27.899 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0222992 s, 47.0 MB/s 00:05:27.899 03:06:02 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:27.899 03:06:02 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.899 03:06:02 -- bdev/nbd_common.sh@70 -- # local nbd_list 
00:05:27.899 03:06:02 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:27.899 03:06:02 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:27.899 03:06:02 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:27.899 03:06:02 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:27.899 03:06:02 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:27.899 03:06:02 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:27.899 03:06:02 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:27.899 03:06:02 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:27.899 03:06:02 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:27.899 03:06:02 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:27.899 03:06:02 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.899 03:06:02 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.899 03:06:02 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:27.899 03:06:02 -- bdev/nbd_common.sh@51 -- # local i 00:05:27.899 03:06:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:27.899 03:06:02 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:28.158 03:06:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:28.158 03:06:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:28.158 03:06:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:28.158 03:06:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:28.158 03:06:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:28.158 03:06:02 -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:28.158 03:06:02 -- bdev/nbd_common.sh@41 -- # break 00:05:28.158 03:06:02 -- bdev/nbd_common.sh@45 -- # return 0 00:05:28.158 03:06:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:28.158 03:06:02 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:28.416 03:06:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:28.416 03:06:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:28.416 03:06:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:28.416 03:06:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:28.416 03:06:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:28.417 03:06:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:28.417 03:06:02 -- bdev/nbd_common.sh@41 -- # break 00:05:28.417 03:06:02 -- bdev/nbd_common.sh@45 -- # return 0 00:05:28.417 03:06:02 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:28.417 03:06:02 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.417 03:06:02 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:28.673 03:06:03 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:28.673 03:06:03 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:28.673 03:06:03 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:28.930 03:06:03 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:28.930 03:06:03 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:28.930 03:06:03 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:28.930 03:06:03 -- bdev/nbd_common.sh@65 -- # true 00:05:28.930 03:06:03 -- bdev/nbd_common.sh@65 -- # count=0 00:05:28.930 03:06:03 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:28.930 03:06:03 -- bdev/nbd_common.sh@104 -- # count=0 00:05:28.930 03:06:03 -- 
bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:28.930 03:06:03 -- bdev/nbd_common.sh@109 -- # return 0 00:05:28.930 03:06:03 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:29.188 03:06:03 -- event/event.sh@35 -- # sleep 3 00:05:29.446 [2024-04-25 03:06:03.740758] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:29.446 [2024-04-25 03:06:03.861093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.446 [2024-04-25 03:06:03.861094] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:29.446 [2024-04-25 03:06:03.917496] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:29.446 [2024-04-25 03:06:03.917565] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:31.974 03:06:06 -- event/event.sh@38 -- # waitforlisten 1378287 /var/tmp/spdk-nbd.sock 00:05:31.974 03:06:06 -- common/autotest_common.sh@817 -- # '[' -z 1378287 ']' 00:05:31.974 03:06:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:31.974 03:06:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:31.974 03:06:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:31.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:31.974 03:06:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:31.974 03:06:06 -- common/autotest_common.sh@10 -- # set +x 00:05:32.232 03:06:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:32.232 03:06:06 -- common/autotest_common.sh@850 -- # return 0 00:05:32.232 03:06:06 -- event/event.sh@39 -- # killprocess 1378287 00:05:32.232 03:06:06 -- common/autotest_common.sh@936 -- # '[' -z 1378287 ']' 00:05:32.232 03:06:06 -- common/autotest_common.sh@940 -- # kill -0 1378287 00:05:32.232 03:06:06 -- common/autotest_common.sh@941 -- # uname 00:05:32.232 03:06:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:32.232 03:06:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1378287 00:05:32.491 03:06:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:32.491 03:06:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:32.491 03:06:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1378287' 00:05:32.491 killing process with pid 1378287 00:05:32.491 03:06:06 -- common/autotest_common.sh@955 -- # kill 1378287 00:05:32.491 03:06:06 -- common/autotest_common.sh@960 -- # wait 1378287 00:05:32.491 spdk_app_start is called in Round 0. 00:05:32.491 Shutdown signal received, stop current app iteration 00:05:32.491 Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 reinitialization... 00:05:32.491 spdk_app_start is called in Round 1. 00:05:32.491 Shutdown signal received, stop current app iteration 00:05:32.491 Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 reinitialization... 00:05:32.491 spdk_app_start is called in Round 2. 00:05:32.491 Shutdown signal received, stop current app iteration 00:05:32.491 Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 reinitialization... 00:05:32.491 spdk_app_start is called in Round 3. 
00:05:32.491 Shutdown signal received, stop current app iteration 00:05:32.491 03:06:06 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:32.491 03:06:06 -- event/event.sh@42 -- # return 0 00:05:32.491 00:05:32.491 real 0m17.920s 00:05:32.491 user 0m38.763s 00:05:32.491 sys 0m3.176s 00:05:32.491 03:06:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:32.491 03:06:06 -- common/autotest_common.sh@10 -- # set +x 00:05:32.491 ************************************ 00:05:32.491 END TEST app_repeat 00:05:32.491 ************************************ 00:05:32.749 03:06:07 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:32.749 03:06:07 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:32.749 03:06:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:32.749 03:06:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:32.749 03:06:07 -- common/autotest_common.sh@10 -- # set +x 00:05:32.749 ************************************ 00:05:32.749 START TEST cpu_locks 00:05:32.749 ************************************ 00:05:32.749 03:06:07 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:32.749 * Looking for test storage... 
00:05:32.749 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:32.749 03:06:07 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:32.749 03:06:07 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:32.749 03:06:07 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:32.749 03:06:07 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:32.749 03:06:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:32.749 03:06:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:32.749 03:06:07 -- common/autotest_common.sh@10 -- # set +x 00:05:32.749 ************************************ 00:05:32.749 START TEST default_locks 00:05:32.749 ************************************ 00:05:32.749 03:06:07 -- common/autotest_common.sh@1111 -- # default_locks 00:05:32.749 03:06:07 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1381042 00:05:32.749 03:06:07 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:32.749 03:06:07 -- event/cpu_locks.sh@47 -- # waitforlisten 1381042 00:05:32.749 03:06:07 -- common/autotest_common.sh@817 -- # '[' -z 1381042 ']' 00:05:32.749 03:06:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.749 03:06:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:32.749 03:06:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.749 03:06:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:33.008 03:06:07 -- common/autotest_common.sh@10 -- # set +x 00:05:33.008 [2024-04-25 03:06:07.294570] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 
00:05:33.008 [2024-04-25 03:06:07.294666] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1381042 ] 00:05:33.008 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.008 [2024-04-25 03:06:07.357840] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.008 [2024-04-25 03:06:07.464977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.267 03:06:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:33.267 03:06:07 -- common/autotest_common.sh@850 -- # return 0 00:05:33.267 03:06:07 -- event/cpu_locks.sh@49 -- # locks_exist 1381042 00:05:33.267 03:06:07 -- event/cpu_locks.sh@22 -- # lslocks -p 1381042 00:05:33.267 03:06:07 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:33.833 lslocks: write error 00:05:33.833 03:06:08 -- event/cpu_locks.sh@50 -- # killprocess 1381042 00:05:33.833 03:06:08 -- common/autotest_common.sh@936 -- # '[' -z 1381042 ']' 00:05:33.833 03:06:08 -- common/autotest_common.sh@940 -- # kill -0 1381042 00:05:33.833 03:06:08 -- common/autotest_common.sh@941 -- # uname 00:05:33.833 03:06:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:33.833 03:06:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1381042 00:05:33.833 03:06:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:33.833 03:06:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:33.833 03:06:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1381042' 00:05:33.833 killing process with pid 1381042 00:05:33.833 03:06:08 -- common/autotest_common.sh@955 -- # kill 1381042 00:05:33.833 03:06:08 -- common/autotest_common.sh@960 -- # wait 1381042 00:05:34.093 03:06:08 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1381042 00:05:34.093 03:06:08 -- 
common/autotest_common.sh@638 -- # local es=0 00:05:34.093 03:06:08 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 1381042 00:05:34.093 03:06:08 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:05:34.093 03:06:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:34.093 03:06:08 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:05:34.093 03:06:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:34.093 03:06:08 -- common/autotest_common.sh@641 -- # waitforlisten 1381042 00:05:34.093 03:06:08 -- common/autotest_common.sh@817 -- # '[' -z 1381042 ']' 00:05:34.093 03:06:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.093 03:06:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:34.093 03:06:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:34.093 03:06:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:34.093 03:06:08 -- common/autotest_common.sh@10 -- # set +x 00:05:34.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (1381042) - No such process 00:05:34.093 ERROR: process (pid: 1381042) is no longer running 00:05:34.093 03:06:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:34.093 03:06:08 -- common/autotest_common.sh@850 -- # return 1 00:05:34.093 03:06:08 -- common/autotest_common.sh@641 -- # es=1 00:05:34.093 03:06:08 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:34.093 03:06:08 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:34.093 03:06:08 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:34.093 03:06:08 -- event/cpu_locks.sh@54 -- # no_locks 00:05:34.093 03:06:08 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:34.093 03:06:08 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:34.093 03:06:08 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:34.093 00:05:34.093 real 0m1.296s 00:05:34.093 user 0m1.218s 00:05:34.093 sys 0m0.551s 00:05:34.093 03:06:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:34.093 03:06:08 -- common/autotest_common.sh@10 -- # set +x 00:05:34.093 ************************************ 00:05:34.093 END TEST default_locks 00:05:34.093 ************************************ 00:05:34.093 03:06:08 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:34.093 03:06:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:34.093 03:06:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:34.093 03:06:08 -- common/autotest_common.sh@10 -- # set +x 00:05:34.351 ************************************ 00:05:34.351 START TEST default_locks_via_rpc 00:05:34.351 ************************************ 00:05:34.351 03:06:08 -- common/autotest_common.sh@1111 -- # default_locks_via_rpc 00:05:34.351 03:06:08 -- 
event/cpu_locks.sh@62 -- # spdk_tgt_pid=1381518 00:05:34.351 03:06:08 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:34.351 03:06:08 -- event/cpu_locks.sh@63 -- # waitforlisten 1381518 00:05:34.351 03:06:08 -- common/autotest_common.sh@817 -- # '[' -z 1381518 ']' 00:05:34.351 03:06:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.351 03:06:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:34.352 03:06:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.352 03:06:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:34.352 03:06:08 -- common/autotest_common.sh@10 -- # set +x 00:05:34.352 [2024-04-25 03:06:08.713198] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 
00:05:34.352 [2024-04-25 03:06:08.713274] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1381518 ] 00:05:34.352 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.352 [2024-04-25 03:06:08.774462] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.611 [2024-04-25 03:06:08.890206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.178 03:06:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:35.178 03:06:09 -- common/autotest_common.sh@850 -- # return 0 00:05:35.178 03:06:09 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:35.178 03:06:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:35.178 03:06:09 -- common/autotest_common.sh@10 -- # set +x 00:05:35.178 03:06:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:35.178 03:06:09 -- event/cpu_locks.sh@67 -- # no_locks 00:05:35.178 03:06:09 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:35.178 03:06:09 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:35.178 03:06:09 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:35.178 03:06:09 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:35.178 03:06:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:35.178 03:06:09 -- common/autotest_common.sh@10 -- # set +x 00:05:35.178 03:06:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:35.178 03:06:09 -- event/cpu_locks.sh@71 -- # locks_exist 1381518 00:05:35.178 03:06:09 -- event/cpu_locks.sh@22 -- # lslocks -p 1381518 00:05:35.178 03:06:09 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:35.435 03:06:09 -- event/cpu_locks.sh@73 -- # killprocess 1381518 00:05:35.435 03:06:09 -- common/autotest_common.sh@936 -- # '[' -z 1381518 ']' 00:05:35.435 03:06:09 -- common/autotest_common.sh@940 -- # kill -0 
1381518 00:05:35.435 03:06:09 -- common/autotest_common.sh@941 -- # uname 00:05:35.435 03:06:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:35.435 03:06:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1381518 00:05:35.693 03:06:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:35.693 03:06:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:35.693 03:06:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1381518' 00:05:35.693 killing process with pid 1381518 00:05:35.693 03:06:09 -- common/autotest_common.sh@955 -- # kill 1381518 00:05:35.693 03:06:09 -- common/autotest_common.sh@960 -- # wait 1381518 00:05:35.951 00:05:35.951 real 0m1.746s 00:05:35.951 user 0m1.872s 00:05:35.951 sys 0m0.548s 00:05:35.951 03:06:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:35.951 03:06:10 -- common/autotest_common.sh@10 -- # set +x 00:05:35.951 ************************************ 00:05:35.951 END TEST default_locks_via_rpc 00:05:35.951 ************************************ 00:05:35.951 03:06:10 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:35.951 03:06:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:35.951 03:06:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:35.951 03:06:10 -- common/autotest_common.sh@10 -- # set +x 00:05:36.210 ************************************ 00:05:36.210 START TEST non_locking_app_on_locked_coremask 00:05:36.210 ************************************ 00:05:36.210 03:06:10 -- common/autotest_common.sh@1111 -- # non_locking_app_on_locked_coremask 00:05:36.210 03:06:10 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1381772 00:05:36.210 03:06:10 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:36.210 03:06:10 -- event/cpu_locks.sh@81 -- # waitforlisten 1381772 /var/tmp/spdk.sock 
00:05:36.210 03:06:10 -- common/autotest_common.sh@817 -- # '[' -z 1381772 ']' 00:05:36.210 03:06:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.210 03:06:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:36.210 03:06:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.210 03:06:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:36.210 03:06:10 -- common/autotest_common.sh@10 -- # set +x 00:05:36.210 [2024-04-25 03:06:10.569839] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:05:36.210 [2024-04-25 03:06:10.569919] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1381772 ] 00:05:36.210 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.210 [2024-04-25 03:06:10.629106] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.468 [2024-04-25 03:06:10.735814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.729 03:06:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:36.729 03:06:10 -- common/autotest_common.sh@850 -- # return 0 00:05:36.729 03:06:10 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1381872 00:05:36.729 03:06:10 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:36.729 03:06:10 -- event/cpu_locks.sh@85 -- # waitforlisten 1381872 /var/tmp/spdk2.sock 00:05:36.729 03:06:10 -- common/autotest_common.sh@817 -- # '[' -z 1381872 ']' 00:05:36.729 03:06:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:36.729 03:06:10 
-- common/autotest_common.sh@822 -- # local max_retries=100 00:05:36.729 03:06:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:36.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:36.729 03:06:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:36.729 03:06:10 -- common/autotest_common.sh@10 -- # set +x 00:05:36.729 [2024-04-25 03:06:11.041977] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:05:36.729 [2024-04-25 03:06:11.042060] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1381872 ] 00:05:36.729 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.729 [2024-04-25 03:06:11.139045] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:36.729 [2024-04-25 03:06:11.139078] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.990 [2024-04-25 03:06:11.372578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.557 03:06:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:37.557 03:06:11 -- common/autotest_common.sh@850 -- # return 0 00:05:37.557 03:06:11 -- event/cpu_locks.sh@87 -- # locks_exist 1381772 00:05:37.557 03:06:11 -- event/cpu_locks.sh@22 -- # lslocks -p 1381772 00:05:37.557 03:06:11 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:37.815 lslocks: write error 00:05:37.815 03:06:12 -- event/cpu_locks.sh@89 -- # killprocess 1381772 00:05:37.815 03:06:12 -- common/autotest_common.sh@936 -- # '[' -z 1381772 ']' 00:05:37.815 03:06:12 -- common/autotest_common.sh@940 -- # kill -0 1381772 00:05:37.815 03:06:12 -- common/autotest_common.sh@941 -- # uname 00:05:38.073 03:06:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:38.073 03:06:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1381772 00:05:38.073 03:06:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:38.073 03:06:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:38.073 03:06:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1381772' 00:05:38.073 killing process with pid 1381772 00:05:38.073 03:06:12 -- common/autotest_common.sh@955 -- # kill 1381772 00:05:38.073 03:06:12 -- common/autotest_common.sh@960 -- # wait 1381772 00:05:39.008 03:06:13 -- event/cpu_locks.sh@90 -- # killprocess 1381872 00:05:39.008 03:06:13 -- common/autotest_common.sh@936 -- # '[' -z 1381872 ']' 00:05:39.008 03:06:13 -- common/autotest_common.sh@940 -- # kill -0 1381872 00:05:39.008 03:06:13 -- common/autotest_common.sh@941 -- # uname 00:05:39.008 03:06:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:39.008 03:06:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1381872 
00:05:39.008 03:06:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:39.008 03:06:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:39.008 03:06:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1381872' 00:05:39.008 killing process with pid 1381872 00:05:39.008 03:06:13 -- common/autotest_common.sh@955 -- # kill 1381872 00:05:39.008 03:06:13 -- common/autotest_common.sh@960 -- # wait 1381872 00:05:39.267 00:05:39.267 real 0m3.233s 00:05:39.267 user 0m3.354s 00:05:39.267 sys 0m1.010s 00:05:39.267 03:06:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:39.267 03:06:13 -- common/autotest_common.sh@10 -- # set +x 00:05:39.267 ************************************ 00:05:39.267 END TEST non_locking_app_on_locked_coremask 00:05:39.267 ************************************ 00:05:39.526 03:06:13 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:39.526 03:06:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:39.526 03:06:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:39.526 03:06:13 -- common/autotest_common.sh@10 -- # set +x 00:05:39.526 ************************************ 00:05:39.526 START TEST locking_app_on_unlocked_coremask 00:05:39.526 ************************************ 00:05:39.526 03:06:13 -- common/autotest_common.sh@1111 -- # locking_app_on_unlocked_coremask 00:05:39.526 03:06:13 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1382190 00:05:39.526 03:06:13 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:39.526 03:06:13 -- event/cpu_locks.sh@99 -- # waitforlisten 1382190 /var/tmp/spdk.sock 00:05:39.526 03:06:13 -- common/autotest_common.sh@817 -- # '[' -z 1382190 ']' 00:05:39.526 03:06:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.526 03:06:13 -- common/autotest_common.sh@822 -- 
# local max_retries=100 00:05:39.526 03:06:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.526 03:06:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:39.526 03:06:13 -- common/autotest_common.sh@10 -- # set +x 00:05:39.526 [2024-04-25 03:06:13.923807] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:05:39.526 [2024-04-25 03:06:13.923878] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1382190 ] 00:05:39.526 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.526 [2024-04-25 03:06:13.986143] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:39.526 [2024-04-25 03:06:13.986179] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.785 [2024-04-25 03:06:14.098814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.044 03:06:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:40.044 03:06:14 -- common/autotest_common.sh@850 -- # return 0 00:05:40.044 03:06:14 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1382313 00:05:40.044 03:06:14 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:40.044 03:06:14 -- event/cpu_locks.sh@103 -- # waitforlisten 1382313 /var/tmp/spdk2.sock 00:05:40.044 03:06:14 -- common/autotest_common.sh@817 -- # '[' -z 1382313 ']' 00:05:40.044 03:06:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:40.044 03:06:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:40.044 03:06:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:40.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:40.044 03:06:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:40.044 03:06:14 -- common/autotest_common.sh@10 -- # set +x 00:05:40.044 [2024-04-25 03:06:14.411019] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:05:40.044 [2024-04-25 03:06:14.411101] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1382313 ] 00:05:40.044 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.044 [2024-04-25 03:06:14.509469] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.303 [2024-04-25 03:06:14.742579] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.237 03:06:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:41.238 03:06:15 -- common/autotest_common.sh@850 -- # return 0 00:05:41.238 03:06:15 -- event/cpu_locks.sh@105 -- # locks_exist 1382313 00:05:41.238 03:06:15 -- event/cpu_locks.sh@22 -- # lslocks -p 1382313 00:05:41.238 03:06:15 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:41.495 lslocks: write error 00:05:41.495 03:06:15 -- event/cpu_locks.sh@107 -- # killprocess 1382190 00:05:41.495 03:06:15 -- common/autotest_common.sh@936 -- # '[' -z 1382190 ']' 00:05:41.495 03:06:15 -- common/autotest_common.sh@940 -- # kill -0 1382190 00:05:41.495 03:06:15 -- common/autotest_common.sh@941 -- # uname 00:05:41.495 03:06:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:41.495 03:06:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1382190 00:05:41.495 03:06:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:41.495 03:06:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = 
sudo ']' 00:05:41.495 03:06:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1382190' 00:05:41.495 killing process with pid 1382190 00:05:41.495 03:06:15 -- common/autotest_common.sh@955 -- # kill 1382190 00:05:41.495 03:06:15 -- common/autotest_common.sh@960 -- # wait 1382190 00:05:42.430 03:06:16 -- event/cpu_locks.sh@108 -- # killprocess 1382313 00:05:42.430 03:06:16 -- common/autotest_common.sh@936 -- # '[' -z 1382313 ']' 00:05:42.430 03:06:16 -- common/autotest_common.sh@940 -- # kill -0 1382313 00:05:42.430 03:06:16 -- common/autotest_common.sh@941 -- # uname 00:05:42.430 03:06:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:42.430 03:06:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1382313 00:05:42.430 03:06:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:42.430 03:06:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:42.430 03:06:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1382313' 00:05:42.430 killing process with pid 1382313 00:05:42.431 03:06:16 -- common/autotest_common.sh@955 -- # kill 1382313 00:05:42.431 03:06:16 -- common/autotest_common.sh@960 -- # wait 1382313 00:05:42.996 00:05:42.996 real 0m3.499s 00:05:42.996 user 0m3.669s 00:05:42.996 sys 0m1.100s 00:05:42.996 03:06:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:42.996 03:06:17 -- common/autotest_common.sh@10 -- # set +x 00:05:42.996 ************************************ 00:05:42.996 END TEST locking_app_on_unlocked_coremask 00:05:42.996 ************************************ 00:05:42.996 03:06:17 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:42.996 03:06:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:42.996 03:06:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:42.996 03:06:17 -- common/autotest_common.sh@10 -- # set +x 00:05:42.997 
************************************ 00:05:42.997 START TEST locking_app_on_locked_coremask 00:05:42.997 ************************************ 00:05:42.997 03:06:17 -- common/autotest_common.sh@1111 -- # locking_app_on_locked_coremask 00:05:42.997 03:06:17 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1382752 00:05:42.997 03:06:17 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:42.997 03:06:17 -- event/cpu_locks.sh@116 -- # waitforlisten 1382752 /var/tmp/spdk.sock 00:05:42.997 03:06:17 -- common/autotest_common.sh@817 -- # '[' -z 1382752 ']' 00:05:42.997 03:06:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.997 03:06:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:42.997 03:06:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.997 03:06:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:42.997 03:06:17 -- common/autotest_common.sh@10 -- # set +x 00:05:43.255 [2024-04-25 03:06:17.536480] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 
00:05:43.255 [2024-04-25 03:06:17.536565] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1382752 ] 00:05:43.255 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.255 [2024-04-25 03:06:17.596327] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.255 [2024-04-25 03:06:17.709118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.190 03:06:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:44.190 03:06:18 -- common/autotest_common.sh@850 -- # return 0 00:05:44.190 03:06:18 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1382792 00:05:44.190 03:06:18 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:44.190 03:06:18 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1382792 /var/tmp/spdk2.sock 00:05:44.190 03:06:18 -- common/autotest_common.sh@638 -- # local es=0 00:05:44.190 03:06:18 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 1382792 /var/tmp/spdk2.sock 00:05:44.190 03:06:18 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:05:44.190 03:06:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:44.190 03:06:18 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:05:44.190 03:06:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:44.190 03:06:18 -- common/autotest_common.sh@641 -- # waitforlisten 1382792 /var/tmp/spdk2.sock 00:05:44.190 03:06:18 -- common/autotest_common.sh@817 -- # '[' -z 1382792 ']' 00:05:44.190 03:06:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:44.190 03:06:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:44.190 03:06:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:44.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:44.190 03:06:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:44.190 03:06:18 -- common/autotest_common.sh@10 -- # set +x 00:05:44.190 [2024-04-25 03:06:18.513964] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:05:44.190 [2024-04-25 03:06:18.514077] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1382792 ] 00:05:44.190 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.190 [2024-04-25 03:06:18.609455] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1382752 has claimed it. 00:05:44.190 [2024-04-25 03:06:18.609505] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:05:44.754 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (1382792) - No such process 00:05:44.754 ERROR: process (pid: 1382792) is no longer running 00:05:44.754 03:06:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:44.754 03:06:19 -- common/autotest_common.sh@850 -- # return 1 00:05:44.754 03:06:19 -- common/autotest_common.sh@641 -- # es=1 00:05:44.754 03:06:19 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:44.754 03:06:19 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:44.754 03:06:19 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:44.754 03:06:19 -- event/cpu_locks.sh@122 -- # locks_exist 1382752 00:05:44.754 03:06:19 -- event/cpu_locks.sh@22 -- # lslocks -p 1382752 00:05:44.754 03:06:19 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:45.319 lslocks: write error 00:05:45.319 03:06:19 -- event/cpu_locks.sh@124 -- # killprocess 1382752 00:05:45.319 03:06:19 -- common/autotest_common.sh@936 -- # '[' -z 1382752 ']' 00:05:45.319 03:06:19 -- common/autotest_common.sh@940 -- # kill -0 1382752 00:05:45.319 03:06:19 -- common/autotest_common.sh@941 -- # uname 00:05:45.319 03:06:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:45.319 03:06:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1382752 00:05:45.319 03:06:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:45.319 03:06:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:45.319 03:06:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1382752' 00:05:45.319 killing process with pid 1382752 00:05:45.319 03:06:19 -- common/autotest_common.sh@955 -- # kill 1382752 00:05:45.319 03:06:19 -- common/autotest_common.sh@960 -- # wait 1382752 00:05:45.579 00:05:45.579 real 0m2.546s 00:05:45.579 user 0m2.894s 00:05:45.579 sys 0m0.664s 00:05:45.579 03:06:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:45.579 
03:06:20 -- common/autotest_common.sh@10 -- # set +x 00:05:45.579 ************************************ 00:05:45.579 END TEST locking_app_on_locked_coremask 00:05:45.579 ************************************ 00:05:45.579 03:06:20 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:45.579 03:06:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:45.579 03:06:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:45.579 03:06:20 -- common/autotest_common.sh@10 -- # set +x 00:05:45.838 ************************************ 00:05:45.838 START TEST locking_overlapped_coremask 00:05:45.838 ************************************ 00:05:45.838 03:06:20 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask 00:05:45.838 03:06:20 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1383065 00:05:45.838 03:06:20 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:45.838 03:06:20 -- event/cpu_locks.sh@133 -- # waitforlisten 1383065 /var/tmp/spdk.sock 00:05:45.838 03:06:20 -- common/autotest_common.sh@817 -- # '[' -z 1383065 ']' 00:05:45.838 03:06:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.838 03:06:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:45.838 03:06:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.838 03:06:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:45.838 03:06:20 -- common/autotest_common.sh@10 -- # set +x 00:05:45.838 [2024-04-25 03:06:20.201532] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 
00:05:45.838 [2024-04-25 03:06:20.201641] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1383065 ] 00:05:45.838 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.838 [2024-04-25 03:06:20.262718] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:46.096 [2024-04-25 03:06:20.380195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.096 [2024-04-25 03:06:20.380250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:46.096 [2024-04-25 03:06:20.380253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.354 03:06:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:46.354 03:06:20 -- common/autotest_common.sh@850 -- # return 0 00:05:46.354 03:06:20 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1383074 00:05:46.354 03:06:20 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1383074 /var/tmp/spdk2.sock 00:05:46.354 03:06:20 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:46.354 03:06:20 -- common/autotest_common.sh@638 -- # local es=0 00:05:46.354 03:06:20 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 1383074 /var/tmp/spdk2.sock 00:05:46.354 03:06:20 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:05:46.354 03:06:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:46.354 03:06:20 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:05:46.354 03:06:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:46.354 03:06:20 -- common/autotest_common.sh@641 -- # waitforlisten 1383074 /var/tmp/spdk2.sock 00:05:46.354 03:06:20 -- common/autotest_common.sh@817 -- # '[' -z 1383074 ']' 00:05:46.354 03:06:20 -- common/autotest_common.sh@821 -- # 
local rpc_addr=/var/tmp/spdk2.sock 00:05:46.354 03:06:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:46.354 03:06:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:46.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:46.354 03:06:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:46.354 03:06:20 -- common/autotest_common.sh@10 -- # set +x 00:05:46.354 [2024-04-25 03:06:20.679287] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:05:46.354 [2024-04-25 03:06:20.679364] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1383074 ] 00:05:46.354 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.354 [2024-04-25 03:06:20.779333] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1383065 has claimed it. 00:05:46.354 [2024-04-25 03:06:20.779393] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:05:46.919 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (1383074) - No such process 00:05:46.919 ERROR: process (pid: 1383074) is no longer running 00:05:46.919 03:06:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:46.919 03:06:21 -- common/autotest_common.sh@850 -- # return 1 00:05:46.919 03:06:21 -- common/autotest_common.sh@641 -- # es=1 00:05:46.919 03:06:21 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:46.919 03:06:21 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:46.919 03:06:21 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:46.919 03:06:21 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:46.919 03:06:21 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:46.919 03:06:21 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:46.920 03:06:21 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:46.920 03:06:21 -- event/cpu_locks.sh@141 -- # killprocess 1383065 00:05:46.920 03:06:21 -- common/autotest_common.sh@936 -- # '[' -z 1383065 ']' 00:05:46.920 03:06:21 -- common/autotest_common.sh@940 -- # kill -0 1383065 00:05:46.920 03:06:21 -- common/autotest_common.sh@941 -- # uname 00:05:46.920 03:06:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:46.920 03:06:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1383065 00:05:46.920 03:06:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:46.920 03:06:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:46.920 03:06:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1383065' 00:05:46.920 killing process with pid 1383065 00:05:46.920 
03:06:21 -- common/autotest_common.sh@955 -- # kill 1383065 00:05:46.920 03:06:21 -- common/autotest_common.sh@960 -- # wait 1383065 00:05:47.484 00:05:47.484 real 0m1.717s 00:05:47.485 user 0m4.561s 00:05:47.485 sys 0m0.462s 00:05:47.485 03:06:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:47.485 03:06:21 -- common/autotest_common.sh@10 -- # set +x 00:05:47.485 ************************************ 00:05:47.485 END TEST locking_overlapped_coremask 00:05:47.485 ************************************ 00:05:47.485 03:06:21 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:47.485 03:06:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:47.485 03:06:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:47.485 03:06:21 -- common/autotest_common.sh@10 -- # set +x 00:05:47.744 ************************************ 00:05:47.744 START TEST locking_overlapped_coremask_via_rpc 00:05:47.744 ************************************ 00:05:47.744 03:06:21 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask_via_rpc 00:05:47.744 03:06:21 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1383370 00:05:47.744 03:06:21 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:47.744 03:06:21 -- event/cpu_locks.sh@149 -- # waitforlisten 1383370 /var/tmp/spdk.sock 00:05:47.744 03:06:21 -- common/autotest_common.sh@817 -- # '[' -z 1383370 ']' 00:05:47.744 03:06:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.744 03:06:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:47.744 03:06:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:47.744 03:06:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:47.744 03:06:21 -- common/autotest_common.sh@10 -- # set +x 00:05:47.744 [2024-04-25 03:06:22.037221] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:05:47.744 [2024-04-25 03:06:22.037312] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1383370 ] 00:05:47.744 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.744 [2024-04-25 03:06:22.094145] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:47.744 [2024-04-25 03:06:22.094181] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:47.744 [2024-04-25 03:06:22.200933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:47.744 [2024-04-25 03:06:22.200990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:47.744 [2024-04-25 03:06:22.200993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.004 03:06:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:48.004 03:06:22 -- common/autotest_common.sh@850 -- # return 0 00:05:48.004 03:06:22 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1383376 00:05:48.004 03:06:22 -- event/cpu_locks.sh@153 -- # waitforlisten 1383376 /var/tmp/spdk2.sock 00:05:48.004 03:06:22 -- common/autotest_common.sh@817 -- # '[' -z 1383376 ']' 00:05:48.004 03:06:22 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:48.004 03:06:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:48.004 03:06:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:48.004 03:06:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk2.sock...' 00:05:48.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:48.004 03:06:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:48.004 03:06:22 -- common/autotest_common.sh@10 -- # set +x 00:05:48.269 [2024-04-25 03:06:22.506455] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:05:48.269 [2024-04-25 03:06:22.506538] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1383376 ] 00:05:48.269 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.269 [2024-04-25 03:06:22.595552] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:48.269 [2024-04-25 03:06:22.595584] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:48.528 [2024-04-25 03:06:22.818690] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:48.528 [2024-04-25 03:06:22.818744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:05:48.528 [2024-04-25 03:06:22.818746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:49.093 03:06:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:49.093 03:06:23 -- common/autotest_common.sh@850 -- # return 0 00:05:49.093 03:06:23 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:49.093 03:06:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:49.093 03:06:23 -- common/autotest_common.sh@10 -- # set +x 00:05:49.093 03:06:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:49.093 03:06:23 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:49.093 03:06:23 -- common/autotest_common.sh@638 -- # local es=0 00:05:49.093 03:06:23 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s 
/var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:49.093 03:06:23 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:05:49.093 03:06:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:49.093 03:06:23 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:05:49.093 03:06:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:49.093 03:06:23 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:49.093 03:06:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:49.093 03:06:23 -- common/autotest_common.sh@10 -- # set +x 00:05:49.093 [2024-04-25 03:06:23.447731] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1383370 has claimed it. 00:05:49.093 request: 00:05:49.093 { 00:05:49.093 "method": "framework_enable_cpumask_locks", 00:05:49.093 "req_id": 1 00:05:49.093 } 00:05:49.093 Got JSON-RPC error response 00:05:49.093 response: 00:05:49.093 { 00:05:49.093 "code": -32603, 00:05:49.093 "message": "Failed to claim CPU core: 2" 00:05:49.093 } 00:05:49.093 03:06:23 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:05:49.093 03:06:23 -- common/autotest_common.sh@641 -- # es=1 00:05:49.093 03:06:23 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:49.093 03:06:23 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:49.093 03:06:23 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:49.093 03:06:23 -- event/cpu_locks.sh@158 -- # waitforlisten 1383370 /var/tmp/spdk.sock 00:05:49.093 03:06:23 -- common/autotest_common.sh@817 -- # '[' -z 1383370 ']' 00:05:49.093 03:06:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.093 03:06:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:49.093 03:06:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:49.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.093 03:06:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:49.093 03:06:23 -- common/autotest_common.sh@10 -- # set +x 00:05:49.351 03:06:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:49.351 03:06:23 -- common/autotest_common.sh@850 -- # return 0 00:05:49.351 03:06:23 -- event/cpu_locks.sh@159 -- # waitforlisten 1383376 /var/tmp/spdk2.sock 00:05:49.351 03:06:23 -- common/autotest_common.sh@817 -- # '[' -z 1383376 ']' 00:05:49.351 03:06:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:49.351 03:06:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:49.351 03:06:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:49.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:49.351 03:06:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:49.351 03:06:23 -- common/autotest_common.sh@10 -- # set +x 00:05:49.610 03:06:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:49.610 03:06:23 -- common/autotest_common.sh@850 -- # return 0 00:05:49.610 03:06:23 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:49.610 03:06:23 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:49.610 03:06:23 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:49.610 03:06:23 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:49.610 00:05:49.610 real 0m1.941s 00:05:49.610 user 0m0.994s 00:05:49.610 sys 0m0.166s 00:05:49.610 03:06:23 -- common/autotest_common.sh@1112 -- # 
xtrace_disable 00:05:49.610 03:06:23 -- common/autotest_common.sh@10 -- # set +x 00:05:49.610 ************************************ 00:05:49.610 END TEST locking_overlapped_coremask_via_rpc 00:05:49.610 ************************************ 00:05:49.610 03:06:23 -- event/cpu_locks.sh@174 -- # cleanup 00:05:49.610 03:06:23 -- event/cpu_locks.sh@15 -- # [[ -z 1383370 ]] 00:05:49.610 03:06:23 -- event/cpu_locks.sh@15 -- # killprocess 1383370 00:05:49.610 03:06:23 -- common/autotest_common.sh@936 -- # '[' -z 1383370 ']' 00:05:49.610 03:06:23 -- common/autotest_common.sh@940 -- # kill -0 1383370 00:05:49.610 03:06:23 -- common/autotest_common.sh@941 -- # uname 00:05:49.610 03:06:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:49.610 03:06:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1383370 00:05:49.610 03:06:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:49.610 03:06:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:49.610 03:06:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1383370' 00:05:49.610 killing process with pid 1383370 00:05:49.610 03:06:23 -- common/autotest_common.sh@955 -- # kill 1383370 00:05:49.610 03:06:23 -- common/autotest_common.sh@960 -- # wait 1383370 00:05:50.176 03:06:24 -- event/cpu_locks.sh@16 -- # [[ -z 1383376 ]] 00:05:50.176 03:06:24 -- event/cpu_locks.sh@16 -- # killprocess 1383376 00:05:50.176 03:06:24 -- common/autotest_common.sh@936 -- # '[' -z 1383376 ']' 00:05:50.176 03:06:24 -- common/autotest_common.sh@940 -- # kill -0 1383376 00:05:50.176 03:06:24 -- common/autotest_common.sh@941 -- # uname 00:05:50.176 03:06:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:50.176 03:06:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1383376 00:05:50.177 03:06:24 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:05:50.177 03:06:24 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo 
']' 00:05:50.177 03:06:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1383376' 00:05:50.177 killing process with pid 1383376 00:05:50.177 03:06:24 -- common/autotest_common.sh@955 -- # kill 1383376 00:05:50.177 03:06:24 -- common/autotest_common.sh@960 -- # wait 1383376 00:05:50.435 03:06:24 -- event/cpu_locks.sh@18 -- # rm -f 00:05:50.435 03:06:24 -- event/cpu_locks.sh@1 -- # cleanup 00:05:50.435 03:06:24 -- event/cpu_locks.sh@15 -- # [[ -z 1383370 ]] 00:05:50.435 03:06:24 -- event/cpu_locks.sh@15 -- # killprocess 1383370 00:05:50.435 03:06:24 -- common/autotest_common.sh@936 -- # '[' -z 1383370 ']' 00:05:50.435 03:06:24 -- common/autotest_common.sh@940 -- # kill -0 1383370 00:05:50.435 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (1383370) - No such process 00:05:50.435 03:06:24 -- common/autotest_common.sh@963 -- # echo 'Process with pid 1383370 is not found' 00:05:50.435 Process with pid 1383370 is not found 00:05:50.435 03:06:24 -- event/cpu_locks.sh@16 -- # [[ -z 1383376 ]] 00:05:50.435 03:06:24 -- event/cpu_locks.sh@16 -- # killprocess 1383376 00:05:50.435 03:06:24 -- common/autotest_common.sh@936 -- # '[' -z 1383376 ']' 00:05:50.435 03:06:24 -- common/autotest_common.sh@940 -- # kill -0 1383376 00:05:50.435 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (1383376) - No such process 00:05:50.435 03:06:24 -- common/autotest_common.sh@963 -- # echo 'Process with pid 1383376 is not found' 00:05:50.435 Process with pid 1383376 is not found 00:05:50.435 03:06:24 -- event/cpu_locks.sh@18 -- # rm -f 00:05:50.435 00:05:50.435 real 0m17.803s 00:05:50.435 user 0m29.481s 00:05:50.435 sys 0m5.613s 00:05:50.435 03:06:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:50.435 03:06:24 -- common/autotest_common.sh@10 -- # set +x 00:05:50.435 ************************************ 00:05:50.435 END TEST cpu_locks 00:05:50.435 
************************************ 00:05:50.435 00:05:50.435 real 0m44.816s 00:05:50.435 user 1m22.301s 00:05:50.435 sys 0m9.901s 00:05:50.435 03:06:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:50.435 03:06:24 -- common/autotest_common.sh@10 -- # set +x 00:05:50.435 ************************************ 00:05:50.435 END TEST event 00:05:50.435 ************************************ 00:05:50.693 03:06:24 -- spdk/autotest.sh@178 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:50.693 03:06:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:50.694 03:06:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:50.694 03:06:24 -- common/autotest_common.sh@10 -- # set +x 00:05:50.694 ************************************ 00:05:50.694 START TEST thread 00:05:50.694 ************************************ 00:05:50.694 03:06:25 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:50.694 * Looking for test storage... 00:05:50.694 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:50.694 03:06:25 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:50.694 03:06:25 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:05:50.694 03:06:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:50.694 03:06:25 -- common/autotest_common.sh@10 -- # set +x 00:05:50.952 ************************************ 00:05:50.952 START TEST thread_poller_perf 00:05:50.952 ************************************ 00:05:50.952 03:06:25 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:50.952 [2024-04-25 03:06:25.220993] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 
00:05:50.952 [2024-04-25 03:06:25.221056] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1383796 ] 00:05:50.952 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.952 [2024-04-25 03:06:25.286409] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.952 [2024-04-25 03:06:25.403862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.952 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:52.326 ====================================== 00:05:52.326 busy:2716173244 (cyc) 00:05:52.326 total_run_count: 297000 00:05:52.326 tsc_hz: 2700000000 (cyc) 00:05:52.326 ====================================== 00:05:52.326 poller_cost: 9145 (cyc), 3387 (nsec) 00:05:52.326 00:05:52.326 real 0m1.328s 00:05:52.326 user 0m1.244s 00:05:52.326 sys 0m0.078s 00:05:52.326 03:06:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:52.326 03:06:26 -- common/autotest_common.sh@10 -- # set +x 00:05:52.326 ************************************ 00:05:52.326 END TEST thread_poller_perf 00:05:52.326 ************************************ 00:05:52.326 03:06:26 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:52.326 03:06:26 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:05:52.326 03:06:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:52.326 03:06:26 -- common/autotest_common.sh@10 -- # set +x 00:05:52.326 ************************************ 00:05:52.326 START TEST thread_poller_perf 00:05:52.326 ************************************ 00:05:52.326 03:06:26 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:52.326 
[2024-04-25 03:06:26.656946] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:05:52.326 [2024-04-25 03:06:26.657004] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1384041 ] 00:05:52.326 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.326 [2024-04-25 03:06:26.717119] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.584 [2024-04-25 03:06:26.835405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.584 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:53.518 ====================================== 00:05:53.518 busy:2702612868 (cyc) 00:05:53.518 total_run_count: 3908000 00:05:53.518 tsc_hz: 2700000000 (cyc) 00:05:53.518 ====================================== 00:05:53.518 poller_cost: 691 (cyc), 255 (nsec) 00:05:53.518 00:05:53.518 real 0m1.315s 00:05:53.518 user 0m1.224s 00:05:53.518 sys 0m0.085s 00:05:53.518 03:06:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:53.518 03:06:27 -- common/autotest_common.sh@10 -- # set +x 00:05:53.518 ************************************ 00:05:53.518 END TEST thread_poller_perf 00:05:53.518 ************************************ 00:05:53.518 03:06:27 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:53.518 00:05:53.518 real 0m2.922s 00:05:53.518 user 0m2.595s 00:05:53.518 sys 0m0.307s 00:05:53.518 03:06:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:53.518 03:06:27 -- common/autotest_common.sh@10 -- # set +x 00:05:53.518 ************************************ 00:05:53.518 END TEST thread 00:05:53.518 ************************************ 00:05:53.518 03:06:28 -- spdk/autotest.sh@179 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:53.518 03:06:28 -- common/autotest_common.sh@1087 -- # 
'[' 2 -le 1 ']' 00:05:53.518 03:06:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:53.518 03:06:28 -- common/autotest_common.sh@10 -- # set +x 00:05:53.777 ************************************ 00:05:53.777 START TEST accel 00:05:53.777 ************************************ 00:05:53.777 03:06:28 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:53.777 * Looking for test storage... 00:05:53.777 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:53.777 03:06:28 -- accel/accel.sh@81 -- # declare -A expected_opcs 00:05:53.777 03:06:28 -- accel/accel.sh@82 -- # get_expected_opcs 00:05:53.777 03:06:28 -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:53.777 03:06:28 -- accel/accel.sh@62 -- # spdk_tgt_pid=1384251 00:05:53.777 03:06:28 -- accel/accel.sh@63 -- # waitforlisten 1384251 00:05:53.777 03:06:28 -- common/autotest_common.sh@817 -- # '[' -z 1384251 ']' 00:05:53.777 03:06:28 -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:53.777 03:06:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.777 03:06:28 -- accel/accel.sh@61 -- # build_accel_config 00:05:53.777 03:06:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:53.777 03:06:28 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:53.777 03:06:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:53.777 03:06:28 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:53.777 03:06:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:53.777 03:06:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:53.777 03:06:28 -- common/autotest_common.sh@10 -- # set +x 00:05:53.777 03:06:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:53.777 03:06:28 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:53.777 03:06:28 -- accel/accel.sh@40 -- # local IFS=, 00:05:53.777 03:06:28 -- accel/accel.sh@41 -- # jq -r . 00:05:53.777 [2024-04-25 03:06:28.205481] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:05:53.777 [2024-04-25 03:06:28.205555] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1384251 ] 00:05:53.777 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.777 [2024-04-25 03:06:28.264900] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.036 [2024-04-25 03:06:28.369911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.294 03:06:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:54.294 03:06:28 -- common/autotest_common.sh@850 -- # return 0 00:05:54.294 03:06:28 -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:05:54.294 03:06:28 -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:05:54.294 03:06:28 -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:05:54.294 03:06:28 -- accel/accel.sh@68 -- # [[ -n '' ]] 00:05:54.294 03:06:28 -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:54.294 03:06:28 -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:05:54.294 03:06:28 -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:54.294 03:06:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:54.295 03:06:28 -- common/autotest_common.sh@10 -- # set +x 00:05:54.295 03:06:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:54.295 03:06:28 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:54.295 03:06:28 -- accel/accel.sh@72 -- # IFS== 00:05:54.295 03:06:28 -- accel/accel.sh@72 -- # read -r opc module 00:05:54.295 03:06:28 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:54.295 03:06:28 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:54.295 03:06:28 -- accel/accel.sh@72 -- # IFS== 00:05:54.295 03:06:28 -- accel/accel.sh@72 -- # read -r opc module 00:05:54.295 03:06:28 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:54.295 03:06:28 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:54.295 03:06:28 -- accel/accel.sh@72 -- # IFS== 00:05:54.295 03:06:28 -- accel/accel.sh@72 -- # read -r opc module 00:05:54.295 03:06:28 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:54.295 03:06:28 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:54.295 03:06:28 -- accel/accel.sh@72 -- # IFS== 00:05:54.295 03:06:28 -- accel/accel.sh@72 -- # read -r opc module 00:05:54.295 03:06:28 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:54.295 03:06:28 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:54.295 03:06:28 -- accel/accel.sh@72 -- # IFS== 00:05:54.295 03:06:28 -- accel/accel.sh@72 -- # read -r opc module 00:05:54.295 03:06:28 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:54.295 03:06:28 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:54.295 03:06:28 -- accel/accel.sh@72 -- # IFS== 00:05:54.295 03:06:28 -- accel/accel.sh@72 -- # read -r opc module 00:05:54.295 03:06:28 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:54.295 03:06:28 -- accel/accel.sh@71 -- # for 
opc_opt in "${exp_opcs[@]}" 00:05:54.295 03:06:28 -- accel/accel.sh@72 -- # IFS== 00:05:54.295 03:06:28 -- accel/accel.sh@72 -- # read -r opc module 00:05:54.295 03:06:28 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:54.295 03:06:28 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:54.295 03:06:28 -- accel/accel.sh@72 -- # IFS== 00:05:54.295 03:06:28 -- accel/accel.sh@72 -- # read -r opc module 00:05:54.295 03:06:28 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:54.295 03:06:28 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:54.295 03:06:28 -- accel/accel.sh@72 -- # IFS== 00:05:54.295 03:06:28 -- accel/accel.sh@72 -- # read -r opc module 00:05:54.295 03:06:28 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:54.295 03:06:28 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:54.295 03:06:28 -- accel/accel.sh@72 -- # IFS== 00:05:54.295 03:06:28 -- accel/accel.sh@72 -- # read -r opc module 00:05:54.295 03:06:28 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:54.295 03:06:28 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:54.295 03:06:28 -- accel/accel.sh@72 -- # IFS== 00:05:54.295 03:06:28 -- accel/accel.sh@72 -- # read -r opc module 00:05:54.295 03:06:28 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:54.295 03:06:28 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:54.295 03:06:28 -- accel/accel.sh@72 -- # IFS== 00:05:54.295 03:06:28 -- accel/accel.sh@72 -- # read -r opc module 00:05:54.295 03:06:28 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:54.295 03:06:28 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:54.295 03:06:28 -- accel/accel.sh@72 -- # IFS== 00:05:54.295 03:06:28 -- accel/accel.sh@72 -- # read -r opc module 00:05:54.295 03:06:28 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:54.295 03:06:28 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 
00:05:54.295 03:06:28 -- accel/accel.sh@72 -- # IFS== 00:05:54.295 03:06:28 -- accel/accel.sh@72 -- # read -r opc module 00:05:54.295 03:06:28 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:54.295 03:06:28 -- accel/accel.sh@75 -- # killprocess 1384251 00:05:54.295 03:06:28 -- common/autotest_common.sh@936 -- # '[' -z 1384251 ']' 00:05:54.295 03:06:28 -- common/autotest_common.sh@940 -- # kill -0 1384251 00:05:54.295 03:06:28 -- common/autotest_common.sh@941 -- # uname 00:05:54.295 03:06:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:54.295 03:06:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1384251 00:05:54.295 03:06:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:54.295 03:06:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:54.295 03:06:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1384251' 00:05:54.295 killing process with pid 1384251 00:05:54.295 03:06:28 -- common/autotest_common.sh@955 -- # kill 1384251 00:05:54.295 03:06:28 -- common/autotest_common.sh@960 -- # wait 1384251 00:05:54.862 03:06:29 -- accel/accel.sh@76 -- # trap - ERR 00:05:54.862 03:06:29 -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:05:54.862 03:06:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:05:54.862 03:06:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:54.862 03:06:29 -- common/autotest_common.sh@10 -- # set +x 00:05:54.862 03:06:29 -- common/autotest_common.sh@1111 -- # accel_perf -h 00:05:54.862 03:06:29 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:54.862 03:06:29 -- accel/accel.sh@12 -- # build_accel_config 00:05:54.862 03:06:29 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:54.862 03:06:29 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:54.862 03:06:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:54.862 03:06:29 -- accel/accel.sh@34 
-- # [[ 0 -gt 0 ]] 00:05:54.862 03:06:29 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:54.862 03:06:29 -- accel/accel.sh@40 -- # local IFS=, 00:05:54.862 03:06:29 -- accel/accel.sh@41 -- # jq -r . 00:05:54.862 03:06:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:54.862 03:06:29 -- common/autotest_common.sh@10 -- # set +x 00:05:54.862 03:06:29 -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:54.862 03:06:29 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:54.862 03:06:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:54.862 03:06:29 -- common/autotest_common.sh@10 -- # set +x 00:05:55.121 ************************************ 00:05:55.121 START TEST accel_missing_filename 00:05:55.121 ************************************ 00:05:55.121 03:06:29 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress 00:05:55.121 03:06:29 -- common/autotest_common.sh@638 -- # local es=0 00:05:55.121 03:06:29 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:55.121 03:06:29 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:05:55.121 03:06:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:55.121 03:06:29 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:05:55.121 03:06:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:55.121 03:06:29 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress 00:05:55.121 03:06:29 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:55.121 03:06:29 -- accel/accel.sh@12 -- # build_accel_config 00:05:55.121 03:06:29 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:55.121 03:06:29 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:55.121 03:06:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:55.121 03:06:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:55.121 
03:06:29 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:55.121 03:06:29 -- accel/accel.sh@40 -- # local IFS=, 00:05:55.121 03:06:29 -- accel/accel.sh@41 -- # jq -r . 00:05:55.121 [2024-04-25 03:06:29.404343] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:05:55.121 [2024-04-25 03:06:29.404407] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1384431 ] 00:05:55.121 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.121 [2024-04-25 03:06:29.466587] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.121 [2024-04-25 03:06:29.584137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.380 [2024-04-25 03:06:29.645849] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:55.380 [2024-04-25 03:06:29.728793] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:05:55.380 A filename is required. 
00:05:55.380 03:06:29 -- common/autotest_common.sh@641 -- # es=234 00:05:55.380 03:06:29 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:55.380 03:06:29 -- common/autotest_common.sh@650 -- # es=106 00:05:55.380 03:06:29 -- common/autotest_common.sh@651 -- # case "$es" in 00:05:55.380 03:06:29 -- common/autotest_common.sh@658 -- # es=1 00:05:55.380 03:06:29 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:55.380 00:05:55.380 real 0m0.463s 00:05:55.380 user 0m0.348s 00:05:55.380 sys 0m0.148s 00:05:55.380 03:06:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:55.380 03:06:29 -- common/autotest_common.sh@10 -- # set +x 00:05:55.380 ************************************ 00:05:55.380 END TEST accel_missing_filename 00:05:55.380 ************************************ 00:05:55.380 03:06:29 -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:55.380 03:06:29 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:05:55.380 03:06:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:55.380 03:06:29 -- common/autotest_common.sh@10 -- # set +x 00:05:55.639 ************************************ 00:05:55.639 START TEST accel_compress_verify 00:05:55.639 ************************************ 00:05:55.639 03:06:29 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:55.639 03:06:29 -- common/autotest_common.sh@638 -- # local es=0 00:05:55.639 03:06:29 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:55.639 03:06:29 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:05:55.639 03:06:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:55.639 03:06:29 -- common/autotest_common.sh@630 -- # type -t 
accel_perf 00:05:55.639 03:06:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:55.639 03:06:29 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:55.639 03:06:29 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:55.639 03:06:29 -- accel/accel.sh@12 -- # build_accel_config 00:05:55.639 03:06:29 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:55.639 03:06:29 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:55.639 03:06:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:55.639 03:06:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:55.639 03:06:29 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:55.639 03:06:29 -- accel/accel.sh@40 -- # local IFS=, 00:05:55.639 03:06:29 -- accel/accel.sh@41 -- # jq -r . 00:05:55.639 [2024-04-25 03:06:29.979544] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:05:55.639 [2024-04-25 03:06:29.979605] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1384584 ] 00:05:55.639 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.639 [2024-04-25 03:06:30.045127] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.898 [2024-04-25 03:06:30.161216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.898 [2024-04-25 03:06:30.223548] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:55.898 [2024-04-25 03:06:30.311892] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:05:56.156 00:05:56.156 Compression does not support the verify option, aborting. 
00:05:56.156 03:06:30 -- common/autotest_common.sh@641 -- # es=161 00:05:56.156 03:06:30 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:56.156 03:06:30 -- common/autotest_common.sh@650 -- # es=33 00:05:56.156 03:06:30 -- common/autotest_common.sh@651 -- # case "$es" in 00:05:56.156 03:06:30 -- common/autotest_common.sh@658 -- # es=1 00:05:56.156 03:06:30 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:56.156 00:05:56.157 real 0m0.474s 00:05:56.157 user 0m0.369s 00:05:56.157 sys 0m0.139s 00:05:56.157 03:06:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:56.157 03:06:30 -- common/autotest_common.sh@10 -- # set +x 00:05:56.157 ************************************ 00:05:56.157 END TEST accel_compress_verify 00:05:56.157 ************************************ 00:05:56.157 03:06:30 -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:56.157 03:06:30 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:56.157 03:06:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:56.157 03:06:30 -- common/autotest_common.sh@10 -- # set +x 00:05:56.157 ************************************ 00:05:56.157 START TEST accel_wrong_workload 00:05:56.157 ************************************ 00:05:56.157 03:06:30 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w foobar 00:05:56.157 03:06:30 -- common/autotest_common.sh@638 -- # local es=0 00:05:56.157 03:06:30 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:56.157 03:06:30 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:05:56.157 03:06:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:56.157 03:06:30 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:05:56.157 03:06:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:56.157 03:06:30 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w foobar 00:05:56.157 03:06:30 -- 
accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:56.157 03:06:30 -- accel/accel.sh@12 -- # build_accel_config 00:05:56.157 03:06:30 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:56.157 03:06:30 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:56.157 03:06:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:56.157 03:06:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:56.157 03:06:30 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:56.157 03:06:30 -- accel/accel.sh@40 -- # local IFS=, 00:05:56.157 03:06:30 -- accel/accel.sh@41 -- # jq -r . 00:05:56.157 Unsupported workload type: foobar 00:05:56.157 [2024-04-25 03:06:30.563504] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:56.157 accel_perf options: 00:05:56.157 [-h help message] 00:05:56.157 [-q queue depth per core] 00:05:56.157 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:56.157 [-T number of threads per core 00:05:56.157 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:05:56.157 [-t time in seconds] 00:05:56.157 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:56.157 [ dif_verify, , dif_generate, dif_generate_copy 00:05:56.157 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:56.157 [-l for compress/decompress workloads, name of uncompressed input file 00:05:56.157 [-S for crc32c workload, use this seed value (default 0) 00:05:56.157 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:56.157 [-f for fill workload, use this BYTE value (default 255) 00:05:56.157 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:56.157 [-y verify result if this switch is on] 00:05:56.157 [-a tasks to allocate per core (default: same value as -q)] 00:05:56.157 Can be used to spread operations across a wider range of memory. 00:05:56.157 03:06:30 -- common/autotest_common.sh@641 -- # es=1 00:05:56.157 03:06:30 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:56.157 03:06:30 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:56.157 03:06:30 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:56.157 00:05:56.157 real 0m0.022s 00:05:56.157 user 0m0.012s 00:05:56.157 sys 0m0.009s 00:05:56.157 03:06:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:56.157 03:06:30 -- common/autotest_common.sh@10 -- # set +x 00:05:56.157 ************************************ 00:05:56.157 END TEST accel_wrong_workload 00:05:56.157 ************************************ 00:05:56.157 Error: writing output failed: Broken pipe 00:05:56.157 03:06:30 -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:56.157 03:06:30 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:05:56.157 03:06:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 
00:05:56.157 03:06:30 -- common/autotest_common.sh@10 -- # set +x 00:05:56.416 ************************************ 00:05:56.416 START TEST accel_negative_buffers 00:05:56.416 ************************************ 00:05:56.416 03:06:30 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:56.416 03:06:30 -- common/autotest_common.sh@638 -- # local es=0 00:05:56.416 03:06:30 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:56.416 03:06:30 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:05:56.416 03:06:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:56.416 03:06:30 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:05:56.416 03:06:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:56.416 03:06:30 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w xor -y -x -1 00:05:56.416 03:06:30 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:56.416 03:06:30 -- accel/accel.sh@12 -- # build_accel_config 00:05:56.416 03:06:30 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:56.416 03:06:30 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:56.416 03:06:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:56.416 03:06:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:56.416 03:06:30 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:56.416 03:06:30 -- accel/accel.sh@40 -- # local IFS=, 00:05:56.416 03:06:30 -- accel/accel.sh@41 -- # jq -r . 00:05:56.416 -x option must be non-negative. 
00:05:56.416 [2024-04-25 03:06:30.717372] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:56.416 accel_perf options: 00:05:56.416 [-h help message] 00:05:56.416 [-q queue depth per core] 00:05:56.416 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:56.416 [-T number of threads per core 00:05:56.416 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:56.416 [-t time in seconds] 00:05:56.416 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:56.417 [ dif_verify, , dif_generate, dif_generate_copy 00:05:56.417 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:56.417 [-l for compress/decompress workloads, name of uncompressed input file 00:05:56.417 [-S for crc32c workload, use this seed value (default 0) 00:05:56.417 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:56.417 [-f for fill workload, use this BYTE value (default 255) 00:05:56.417 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:56.417 [-y verify result if this switch is on] 00:05:56.417 [-a tasks to allocate per core (default: same value as -q)] 00:05:56.417 Can be used to spread operations across a wider range of memory. 
00:05:56.417 03:06:30 -- common/autotest_common.sh@641 -- # es=1 00:05:56.417 03:06:30 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:56.417 03:06:30 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:56.417 03:06:30 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:56.417 00:05:56.417 real 0m0.022s 00:05:56.417 user 0m0.012s 00:05:56.417 sys 0m0.011s 00:05:56.417 03:06:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:56.417 03:06:30 -- common/autotest_common.sh@10 -- # set +x 00:05:56.417 ************************************ 00:05:56.417 END TEST accel_negative_buffers 00:05:56.417 ************************************ 00:05:56.417 Error: writing output failed: Broken pipe 00:05:56.417 03:06:30 -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:56.417 03:06:30 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:56.417 03:06:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:56.417 03:06:30 -- common/autotest_common.sh@10 -- # set +x 00:05:56.417 ************************************ 00:05:56.417 START TEST accel_crc32c 00:05:56.417 ************************************ 00:05:56.417 03:06:30 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:56.417 03:06:30 -- accel/accel.sh@16 -- # local accel_opc 00:05:56.417 03:06:30 -- accel/accel.sh@17 -- # local accel_module 00:05:56.417 03:06:30 -- accel/accel.sh@19 -- # IFS=: 00:05:56.417 03:06:30 -- accel/accel.sh@19 -- # read -r var val 00:05:56.417 03:06:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:56.417 03:06:30 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:56.417 03:06:30 -- accel/accel.sh@12 -- # build_accel_config 00:05:56.417 03:06:30 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:56.417 03:06:30 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:56.417 03:06:30 -- 
accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:56.417 03:06:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:56.417 03:06:30 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:56.417 03:06:30 -- accel/accel.sh@40 -- # local IFS=, 00:05:56.417 03:06:30 -- accel/accel.sh@41 -- # jq -r . 00:05:56.417 [2024-04-25 03:06:30.849853] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:05:56.417 [2024-04-25 03:06:30.849929] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1384712 ] 00:05:56.417 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.417 [2024-04-25 03:06:30.912023] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.676 [2024-04-25 03:06:31.028746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.676 03:06:31 -- accel/accel.sh@20 -- # val= 00:05:56.676 03:06:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.676 03:06:31 -- accel/accel.sh@19 -- # IFS=: 00:05:56.676 03:06:31 -- accel/accel.sh@19 -- # read -r var val 00:05:56.676 03:06:31 -- accel/accel.sh@20 -- # val= 00:05:56.676 03:06:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.676 03:06:31 -- accel/accel.sh@19 -- # IFS=: 00:05:56.676 03:06:31 -- accel/accel.sh@19 -- # read -r var val 00:05:56.676 03:06:31 -- accel/accel.sh@20 -- # val=0x1 00:05:56.676 03:06:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.676 03:06:31 -- accel/accel.sh@19 -- # IFS=: 00:05:56.676 03:06:31 -- accel/accel.sh@19 -- # read -r var val 00:05:56.676 03:06:31 -- accel/accel.sh@20 -- # val= 00:05:56.676 03:06:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.676 03:06:31 -- accel/accel.sh@19 -- # IFS=: 00:05:56.676 03:06:31 -- accel/accel.sh@19 -- # read -r var val 00:05:56.676 03:06:31 -- accel/accel.sh@20 -- # val= 00:05:56.676 03:06:31 -- accel/accel.sh@21 -- # case "$var" in 
00:05:56.676 03:06:31 -- accel/accel.sh@19 -- # IFS=: 00:05:56.676 03:06:31 -- accel/accel.sh@19 -- # read -r var val 00:05:56.676 03:06:31 -- accel/accel.sh@20 -- # val=crc32c 00:05:56.676 03:06:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.676 03:06:31 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:56.676 03:06:31 -- accel/accel.sh@19 -- # IFS=: 00:05:56.676 03:06:31 -- accel/accel.sh@19 -- # read -r var val 00:05:56.676 03:06:31 -- accel/accel.sh@20 -- # val=32 00:05:56.676 03:06:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.676 03:06:31 -- accel/accel.sh@19 -- # IFS=: 00:05:56.676 03:06:31 -- accel/accel.sh@19 -- # read -r var val 00:05:56.676 03:06:31 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:56.676 03:06:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.676 03:06:31 -- accel/accel.sh@19 -- # IFS=: 00:05:56.676 03:06:31 -- accel/accel.sh@19 -- # read -r var val 00:05:56.676 03:06:31 -- accel/accel.sh@20 -- # val= 00:05:56.676 03:06:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.676 03:06:31 -- accel/accel.sh@19 -- # IFS=: 00:05:56.676 03:06:31 -- accel/accel.sh@19 -- # read -r var val 00:05:56.676 03:06:31 -- accel/accel.sh@20 -- # val=software 00:05:56.676 03:06:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.676 03:06:31 -- accel/accel.sh@22 -- # accel_module=software 00:05:56.676 03:06:31 -- accel/accel.sh@19 -- # IFS=: 00:05:56.676 03:06:31 -- accel/accel.sh@19 -- # read -r var val 00:05:56.676 03:06:31 -- accel/accel.sh@20 -- # val=32 00:05:56.676 03:06:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.676 03:06:31 -- accel/accel.sh@19 -- # IFS=: 00:05:56.676 03:06:31 -- accel/accel.sh@19 -- # read -r var val 00:05:56.676 03:06:31 -- accel/accel.sh@20 -- # val=32 00:05:56.676 03:06:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.676 03:06:31 -- accel/accel.sh@19 -- # IFS=: 00:05:56.676 03:06:31 -- accel/accel.sh@19 -- # read -r var val 00:05:56.676 03:06:31 -- accel/accel.sh@20 -- # val=1 00:05:56.676 03:06:31 
-- accel/accel.sh@21 -- # case "$var" in 00:05:56.676 03:06:31 -- accel/accel.sh@19 -- # IFS=: 00:05:56.676 03:06:31 -- accel/accel.sh@19 -- # read -r var val 00:05:56.676 03:06:31 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:56.676 03:06:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.676 03:06:31 -- accel/accel.sh@19 -- # IFS=: 00:05:56.676 03:06:31 -- accel/accel.sh@19 -- # read -r var val 00:05:56.676 03:06:31 -- accel/accel.sh@20 -- # val=Yes 00:05:56.676 03:06:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.676 03:06:31 -- accel/accel.sh@19 -- # IFS=: 00:05:56.676 03:06:31 -- accel/accel.sh@19 -- # read -r var val 00:05:56.676 03:06:31 -- accel/accel.sh@20 -- # val= 00:05:56.676 03:06:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.676 03:06:31 -- accel/accel.sh@19 -- # IFS=: 00:05:56.676 03:06:31 -- accel/accel.sh@19 -- # read -r var val 00:05:56.676 03:06:31 -- accel/accel.sh@20 -- # val= 00:05:56.676 03:06:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.676 03:06:31 -- accel/accel.sh@19 -- # IFS=: 00:05:56.676 03:06:31 -- accel/accel.sh@19 -- # read -r var val 00:05:58.051 03:06:32 -- accel/accel.sh@20 -- # val= 00:05:58.051 03:06:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.051 03:06:32 -- accel/accel.sh@19 -- # IFS=: 00:05:58.051 03:06:32 -- accel/accel.sh@19 -- # read -r var val 00:05:58.051 03:06:32 -- accel/accel.sh@20 -- # val= 00:05:58.051 03:06:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.051 03:06:32 -- accel/accel.sh@19 -- # IFS=: 00:05:58.051 03:06:32 -- accel/accel.sh@19 -- # read -r var val 00:05:58.051 03:06:32 -- accel/accel.sh@20 -- # val= 00:05:58.051 03:06:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.051 03:06:32 -- accel/accel.sh@19 -- # IFS=: 00:05:58.051 03:06:32 -- accel/accel.sh@19 -- # read -r var val 00:05:58.051 03:06:32 -- accel/accel.sh@20 -- # val= 00:05:58.051 03:06:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.051 03:06:32 -- accel/accel.sh@19 -- # IFS=: 00:05:58.051 
03:06:32 -- accel/accel.sh@19 -- # read -r var val 00:05:58.051 03:06:32 -- accel/accel.sh@20 -- # val= 00:05:58.051 03:06:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.051 03:06:32 -- accel/accel.sh@19 -- # IFS=: 00:05:58.051 03:06:32 -- accel/accel.sh@19 -- # read -r var val 00:05:58.051 03:06:32 -- accel/accel.sh@20 -- # val= 00:05:58.051 03:06:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.051 03:06:32 -- accel/accel.sh@19 -- # IFS=: 00:05:58.051 03:06:32 -- accel/accel.sh@19 -- # read -r var val 00:05:58.051 03:06:32 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:58.051 03:06:32 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:58.051 03:06:32 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:58.051 00:05:58.051 real 0m1.471s 00:05:58.051 user 0m1.332s 00:05:58.051 sys 0m0.141s 00:05:58.051 03:06:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:58.051 03:06:32 -- common/autotest_common.sh@10 -- # set +x 00:05:58.051 ************************************ 00:05:58.051 END TEST accel_crc32c 00:05:58.051 ************************************ 00:05:58.051 03:06:32 -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:58.051 03:06:32 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:58.051 03:06:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:58.051 03:06:32 -- common/autotest_common.sh@10 -- # set +x 00:05:58.051 ************************************ 00:05:58.051 START TEST accel_crc32c_C2 00:05:58.051 ************************************ 00:05:58.051 03:06:32 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:58.051 03:06:32 -- accel/accel.sh@16 -- # local accel_opc 00:05:58.051 03:06:32 -- accel/accel.sh@17 -- # local accel_module 00:05:58.051 03:06:32 -- accel/accel.sh@19 -- # IFS=: 00:05:58.051 03:06:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:58.051 03:06:32 -- accel/accel.sh@19 -- # read -r var val 
00:05:58.051 03:06:32 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:58.051 03:06:32 -- accel/accel.sh@12 -- # build_accel_config 00:05:58.051 03:06:32 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:58.051 03:06:32 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:58.051 03:06:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:58.051 03:06:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:58.051 03:06:32 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:58.051 03:06:32 -- accel/accel.sh@40 -- # local IFS=, 00:05:58.051 03:06:32 -- accel/accel.sh@41 -- # jq -r . 00:05:58.051 [2024-04-25 03:06:32.433857] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:05:58.051 [2024-04-25 03:06:32.433914] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1384958 ] 00:05:58.051 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.051 [2024-04-25 03:06:32.496342] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.310 [2024-04-25 03:06:32.613836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.310 03:06:32 -- accel/accel.sh@20 -- # val= 00:05:58.310 03:06:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.310 03:06:32 -- accel/accel.sh@19 -- # IFS=: 00:05:58.310 03:06:32 -- accel/accel.sh@19 -- # read -r var val 00:05:58.310 03:06:32 -- accel/accel.sh@20 -- # val= 00:05:58.310 03:06:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.310 03:06:32 -- accel/accel.sh@19 -- # IFS=: 00:05:58.310 03:06:32 -- accel/accel.sh@19 -- # read -r var val 00:05:58.310 03:06:32 -- accel/accel.sh@20 -- # val=0x1 00:05:58.310 03:06:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.310 03:06:32 -- accel/accel.sh@19 -- # IFS=: 00:05:58.310 03:06:32 -- 
accel/accel.sh@19 -- # read -r var val 00:05:58.310 03:06:32 -- accel/accel.sh@20 -- # val= 00:05:58.310 03:06:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.310 03:06:32 -- accel/accel.sh@19 -- # IFS=: 00:05:58.310 03:06:32 -- accel/accel.sh@19 -- # read -r var val 00:05:58.310 03:06:32 -- accel/accel.sh@20 -- # val= 00:05:58.310 03:06:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.310 03:06:32 -- accel/accel.sh@19 -- # IFS=: 00:05:58.310 03:06:32 -- accel/accel.sh@19 -- # read -r var val 00:05:58.310 03:06:32 -- accel/accel.sh@20 -- # val=crc32c 00:05:58.310 03:06:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.310 03:06:32 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:58.310 03:06:32 -- accel/accel.sh@19 -- # IFS=: 00:05:58.310 03:06:32 -- accel/accel.sh@19 -- # read -r var val 00:05:58.310 03:06:32 -- accel/accel.sh@20 -- # val=0 00:05:58.310 03:06:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.310 03:06:32 -- accel/accel.sh@19 -- # IFS=: 00:05:58.310 03:06:32 -- accel/accel.sh@19 -- # read -r var val 00:05:58.310 03:06:32 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:58.310 03:06:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.310 03:06:32 -- accel/accel.sh@19 -- # IFS=: 00:05:58.310 03:06:32 -- accel/accel.sh@19 -- # read -r var val 00:05:58.310 03:06:32 -- accel/accel.sh@20 -- # val= 00:05:58.310 03:06:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.310 03:06:32 -- accel/accel.sh@19 -- # IFS=: 00:05:58.310 03:06:32 -- accel/accel.sh@19 -- # read -r var val 00:05:58.310 03:06:32 -- accel/accel.sh@20 -- # val=software 00:05:58.310 03:06:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.310 03:06:32 -- accel/accel.sh@22 -- # accel_module=software 00:05:58.310 03:06:32 -- accel/accel.sh@19 -- # IFS=: 00:05:58.310 03:06:32 -- accel/accel.sh@19 -- # read -r var val 00:05:58.310 03:06:32 -- accel/accel.sh@20 -- # val=32 00:05:58.310 03:06:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.310 03:06:32 -- accel/accel.sh@19 
-- # IFS=: 00:05:58.310 03:06:32 -- accel/accel.sh@19 -- # read -r var val 00:05:58.310 03:06:32 -- accel/accel.sh@20 -- # val=32 00:05:58.310 03:06:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.310 03:06:32 -- accel/accel.sh@19 -- # IFS=: 00:05:58.310 03:06:32 -- accel/accel.sh@19 -- # read -r var val 00:05:58.310 03:06:32 -- accel/accel.sh@20 -- # val=1 00:05:58.310 03:06:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.310 03:06:32 -- accel/accel.sh@19 -- # IFS=: 00:05:58.310 03:06:32 -- accel/accel.sh@19 -- # read -r var val 00:05:58.310 03:06:32 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:58.310 03:06:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.310 03:06:32 -- accel/accel.sh@19 -- # IFS=: 00:05:58.310 03:06:32 -- accel/accel.sh@19 -- # read -r var val 00:05:58.310 03:06:32 -- accel/accel.sh@20 -- # val=Yes 00:05:58.310 03:06:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.310 03:06:32 -- accel/accel.sh@19 -- # IFS=: 00:05:58.310 03:06:32 -- accel/accel.sh@19 -- # read -r var val 00:05:58.310 03:06:32 -- accel/accel.sh@20 -- # val= 00:05:58.310 03:06:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.310 03:06:32 -- accel/accel.sh@19 -- # IFS=: 00:05:58.310 03:06:32 -- accel/accel.sh@19 -- # read -r var val 00:05:58.310 03:06:32 -- accel/accel.sh@20 -- # val= 00:05:58.310 03:06:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.310 03:06:32 -- accel/accel.sh@19 -- # IFS=: 00:05:58.310 03:06:32 -- accel/accel.sh@19 -- # read -r var val 00:05:59.686 03:06:33 -- accel/accel.sh@20 -- # val= 00:05:59.686 03:06:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.686 03:06:33 -- accel/accel.sh@19 -- # IFS=: 00:05:59.686 03:06:33 -- accel/accel.sh@19 -- # read -r var val 00:05:59.686 03:06:33 -- accel/accel.sh@20 -- # val= 00:05:59.686 03:06:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.686 03:06:33 -- accel/accel.sh@19 -- # IFS=: 00:05:59.686 03:06:33 -- accel/accel.sh@19 -- # read -r var val 00:05:59.686 03:06:33 -- 
accel/accel.sh@20 -- # val= 00:05:59.686 03:06:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.686 03:06:33 -- accel/accel.sh@19 -- # IFS=: 00:05:59.686 03:06:33 -- accel/accel.sh@19 -- # read -r var val 00:05:59.686 03:06:33 -- accel/accel.sh@20 -- # val= 00:05:59.686 03:06:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.686 03:06:33 -- accel/accel.sh@19 -- # IFS=: 00:05:59.686 03:06:33 -- accel/accel.sh@19 -- # read -r var val 00:05:59.686 03:06:33 -- accel/accel.sh@20 -- # val= 00:05:59.686 03:06:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.686 03:06:33 -- accel/accel.sh@19 -- # IFS=: 00:05:59.686 03:06:33 -- accel/accel.sh@19 -- # read -r var val 00:05:59.686 03:06:33 -- accel/accel.sh@20 -- # val= 00:05:59.686 03:06:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.686 03:06:33 -- accel/accel.sh@19 -- # IFS=: 00:05:59.686 03:06:33 -- accel/accel.sh@19 -- # read -r var val 00:05:59.686 03:06:33 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:59.686 03:06:33 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:59.686 03:06:33 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:59.686 00:05:59.686 real 0m1.476s 00:05:59.686 user 0m1.339s 00:05:59.686 sys 0m0.139s 00:05:59.686 03:06:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:59.686 03:06:33 -- common/autotest_common.sh@10 -- # set +x 00:05:59.686 ************************************ 00:05:59.686 END TEST accel_crc32c_C2 00:05:59.686 ************************************ 00:05:59.686 03:06:33 -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:59.686 03:06:33 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:59.686 03:06:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:59.686 03:06:33 -- common/autotest_common.sh@10 -- # set +x 00:05:59.686 ************************************ 00:05:59.686 START TEST accel_copy 00:05:59.686 ************************************ 00:05:59.686 03:06:33 -- 
common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy -y 00:05:59.686 03:06:33 -- accel/accel.sh@16 -- # local accel_opc 00:05:59.686 03:06:33 -- accel/accel.sh@17 -- # local accel_module 00:05:59.686 03:06:34 -- accel/accel.sh@19 -- # IFS=: 00:05:59.686 03:06:34 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:59.686 03:06:34 -- accel/accel.sh@19 -- # read -r var val 00:05:59.686 03:06:34 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:59.686 03:06:34 -- accel/accel.sh@12 -- # build_accel_config 00:05:59.686 03:06:34 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:59.686 03:06:34 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:59.686 03:06:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:59.686 03:06:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:59.686 03:06:34 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:59.686 03:06:34 -- accel/accel.sh@40 -- # local IFS=, 00:05:59.686 03:06:34 -- accel/accel.sh@41 -- # jq -r . 00:05:59.686 [2024-04-25 03:06:34.017742] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 
00:05:59.686 [2024-04-25 03:06:34.017797] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1385124 ] 00:05:59.686 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.686 [2024-04-25 03:06:34.082186] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.946 [2024-04-25 03:06:34.199800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.946 03:06:34 -- accel/accel.sh@20 -- # val= 00:05:59.946 03:06:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.946 03:06:34 -- accel/accel.sh@19 -- # IFS=: 00:05:59.946 03:06:34 -- accel/accel.sh@19 -- # read -r var val 00:05:59.946 03:06:34 -- accel/accel.sh@20 -- # val= 00:05:59.946 03:06:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.946 03:06:34 -- accel/accel.sh@19 -- # IFS=: 00:05:59.946 03:06:34 -- accel/accel.sh@19 -- # read -r var val 00:05:59.946 03:06:34 -- accel/accel.sh@20 -- # val=0x1 00:05:59.946 03:06:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.946 03:06:34 -- accel/accel.sh@19 -- # IFS=: 00:05:59.946 03:06:34 -- accel/accel.sh@19 -- # read -r var val 00:05:59.946 03:06:34 -- accel/accel.sh@20 -- # val= 00:05:59.946 03:06:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.946 03:06:34 -- accel/accel.sh@19 -- # IFS=: 00:05:59.946 03:06:34 -- accel/accel.sh@19 -- # read -r var val 00:05:59.946 03:06:34 -- accel/accel.sh@20 -- # val= 00:05:59.946 03:06:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.946 03:06:34 -- accel/accel.sh@19 -- # IFS=: 00:05:59.946 03:06:34 -- accel/accel.sh@19 -- # read -r var val 00:05:59.946 03:06:34 -- accel/accel.sh@20 -- # val=copy 00:05:59.946 03:06:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.946 03:06:34 -- accel/accel.sh@23 -- # accel_opc=copy 00:05:59.946 03:06:34 -- accel/accel.sh@19 -- # IFS=: 00:05:59.946 03:06:34 -- 
accel/accel.sh@19 -- # read -r var val 00:05:59.946 03:06:34 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:59.946 03:06:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.946 03:06:34 -- accel/accel.sh@19 -- # IFS=: 00:05:59.946 03:06:34 -- accel/accel.sh@19 -- # read -r var val 00:05:59.946 03:06:34 -- accel/accel.sh@20 -- # val= 00:05:59.946 03:06:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.946 03:06:34 -- accel/accel.sh@19 -- # IFS=: 00:05:59.946 03:06:34 -- accel/accel.sh@19 -- # read -r var val 00:05:59.946 03:06:34 -- accel/accel.sh@20 -- # val=software 00:05:59.946 03:06:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.946 03:06:34 -- accel/accel.sh@22 -- # accel_module=software 00:05:59.946 03:06:34 -- accel/accel.sh@19 -- # IFS=: 00:05:59.946 03:06:34 -- accel/accel.sh@19 -- # read -r var val 00:05:59.946 03:06:34 -- accel/accel.sh@20 -- # val=32 00:05:59.946 03:06:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.946 03:06:34 -- accel/accel.sh@19 -- # IFS=: 00:05:59.946 03:06:34 -- accel/accel.sh@19 -- # read -r var val 00:05:59.946 03:06:34 -- accel/accel.sh@20 -- # val=32 00:05:59.946 03:06:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.946 03:06:34 -- accel/accel.sh@19 -- # IFS=: 00:05:59.946 03:06:34 -- accel/accel.sh@19 -- # read -r var val 00:05:59.946 03:06:34 -- accel/accel.sh@20 -- # val=1 00:05:59.946 03:06:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.946 03:06:34 -- accel/accel.sh@19 -- # IFS=: 00:05:59.946 03:06:34 -- accel/accel.sh@19 -- # read -r var val 00:05:59.946 03:06:34 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:59.946 03:06:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.946 03:06:34 -- accel/accel.sh@19 -- # IFS=: 00:05:59.946 03:06:34 -- accel/accel.sh@19 -- # read -r var val 00:05:59.946 03:06:34 -- accel/accel.sh@20 -- # val=Yes 00:05:59.946 03:06:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.946 03:06:34 -- accel/accel.sh@19 -- # IFS=: 00:05:59.946 03:06:34 -- accel/accel.sh@19 
-- # read -r var val 00:05:59.946 03:06:34 -- accel/accel.sh@20 -- # val= 00:05:59.946 03:06:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.946 03:06:34 -- accel/accel.sh@19 -- # IFS=: 00:05:59.946 03:06:34 -- accel/accel.sh@19 -- # read -r var val 00:05:59.946 03:06:34 -- accel/accel.sh@20 -- # val= 00:05:59.946 03:06:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.946 03:06:34 -- accel/accel.sh@19 -- # IFS=: 00:05:59.946 03:06:34 -- accel/accel.sh@19 -- # read -r var val 00:06:01.320 03:06:35 -- accel/accel.sh@20 -- # val= 00:06:01.320 03:06:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.320 03:06:35 -- accel/accel.sh@19 -- # IFS=: 00:06:01.320 03:06:35 -- accel/accel.sh@19 -- # read -r var val 00:06:01.320 03:06:35 -- accel/accel.sh@20 -- # val= 00:06:01.320 03:06:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.320 03:06:35 -- accel/accel.sh@19 -- # IFS=: 00:06:01.320 03:06:35 -- accel/accel.sh@19 -- # read -r var val 00:06:01.320 03:06:35 -- accel/accel.sh@20 -- # val= 00:06:01.320 03:06:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.320 03:06:35 -- accel/accel.sh@19 -- # IFS=: 00:06:01.320 03:06:35 -- accel/accel.sh@19 -- # read -r var val 00:06:01.320 03:06:35 -- accel/accel.sh@20 -- # val= 00:06:01.320 03:06:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.320 03:06:35 -- accel/accel.sh@19 -- # IFS=: 00:06:01.320 03:06:35 -- accel/accel.sh@19 -- # read -r var val 00:06:01.320 03:06:35 -- accel/accel.sh@20 -- # val= 00:06:01.320 03:06:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.320 03:06:35 -- accel/accel.sh@19 -- # IFS=: 00:06:01.320 03:06:35 -- accel/accel.sh@19 -- # read -r var val 00:06:01.320 03:06:35 -- accel/accel.sh@20 -- # val= 00:06:01.320 03:06:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.320 03:06:35 -- accel/accel.sh@19 -- # IFS=: 00:06:01.320 03:06:35 -- accel/accel.sh@19 -- # read -r var val 00:06:01.320 03:06:35 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:01.320 03:06:35 -- 
accel/accel.sh@27 -- # [[ -n copy ]] 00:06:01.320 03:06:35 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:01.320 00:06:01.320 real 0m1.468s 00:06:01.320 user 0m1.331s 00:06:01.320 sys 0m0.137s 00:06:01.320 03:06:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:01.320 03:06:35 -- common/autotest_common.sh@10 -- # set +x 00:06:01.320 ************************************ 00:06:01.320 END TEST accel_copy 00:06:01.320 ************************************ 00:06:01.320 03:06:35 -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:01.320 03:06:35 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:01.320 03:06:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:01.320 03:06:35 -- common/autotest_common.sh@10 -- # set +x 00:06:01.320 ************************************ 00:06:01.320 START TEST accel_fill 00:06:01.320 ************************************ 00:06:01.320 03:06:35 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:01.320 03:06:35 -- accel/accel.sh@16 -- # local accel_opc 00:06:01.320 03:06:35 -- accel/accel.sh@17 -- # local accel_module 00:06:01.320 03:06:35 -- accel/accel.sh@19 -- # IFS=: 00:06:01.320 03:06:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:01.321 03:06:35 -- accel/accel.sh@19 -- # read -r var val 00:06:01.321 03:06:35 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:01.321 03:06:35 -- accel/accel.sh@12 -- # build_accel_config 00:06:01.321 03:06:35 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:01.321 03:06:35 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:01.321 03:06:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:01.321 03:06:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:01.321 03:06:35 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:01.321 03:06:35 -- 
accel/accel.sh@40 -- # local IFS=, 00:06:01.321 03:06:35 -- accel/accel.sh@41 -- # jq -r . 00:06:01.321 [2024-04-25 03:06:35.616032] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:06:01.321 [2024-04-25 03:06:35.616096] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1385402 ] 00:06:01.321 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.321 [2024-04-25 03:06:35.680488] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.321 [2024-04-25 03:06:35.795606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.579 03:06:35 -- accel/accel.sh@20 -- # val= 00:06:01.579 03:06:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.580 03:06:35 -- accel/accel.sh@19 -- # IFS=: 00:06:01.580 03:06:35 -- accel/accel.sh@19 -- # read -r var val 00:06:01.580 03:06:35 -- accel/accel.sh@20 -- # val= 00:06:01.580 03:06:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.580 03:06:35 -- accel/accel.sh@19 -- # IFS=: 00:06:01.580 03:06:35 -- accel/accel.sh@19 -- # read -r var val 00:06:01.580 03:06:35 -- accel/accel.sh@20 -- # val=0x1 00:06:01.580 03:06:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.580 03:06:35 -- accel/accel.sh@19 -- # IFS=: 00:06:01.580 03:06:35 -- accel/accel.sh@19 -- # read -r var val 00:06:01.580 03:06:35 -- accel/accel.sh@20 -- # val= 00:06:01.580 03:06:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.580 03:06:35 -- accel/accel.sh@19 -- # IFS=: 00:06:01.580 03:06:35 -- accel/accel.sh@19 -- # read -r var val 00:06:01.580 03:06:35 -- accel/accel.sh@20 -- # val= 00:06:01.580 03:06:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.580 03:06:35 -- accel/accel.sh@19 -- # IFS=: 00:06:01.580 03:06:35 -- accel/accel.sh@19 -- # read -r var val 00:06:01.580 03:06:35 -- accel/accel.sh@20 -- # val=fill 
00:06:01.580 03:06:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.580 03:06:35 -- accel/accel.sh@23 -- # accel_opc=fill 00:06:01.580 03:06:35 -- accel/accel.sh@19 -- # IFS=: 00:06:01.580 03:06:35 -- accel/accel.sh@19 -- # read -r var val 00:06:01.580 03:06:35 -- accel/accel.sh@20 -- # val=0x80 00:06:01.580 03:06:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.580 03:06:35 -- accel/accel.sh@19 -- # IFS=: 00:06:01.580 03:06:35 -- accel/accel.sh@19 -- # read -r var val 00:06:01.580 03:06:35 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:01.580 03:06:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.580 03:06:35 -- accel/accel.sh@19 -- # IFS=: 00:06:01.580 03:06:35 -- accel/accel.sh@19 -- # read -r var val 00:06:01.580 03:06:35 -- accel/accel.sh@20 -- # val= 00:06:01.580 03:06:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.580 03:06:35 -- accel/accel.sh@19 -- # IFS=: 00:06:01.580 03:06:35 -- accel/accel.sh@19 -- # read -r var val 00:06:01.580 03:06:35 -- accel/accel.sh@20 -- # val=software 00:06:01.580 03:06:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.580 03:06:35 -- accel/accel.sh@22 -- # accel_module=software 00:06:01.580 03:06:35 -- accel/accel.sh@19 -- # IFS=: 00:06:01.580 03:06:35 -- accel/accel.sh@19 -- # read -r var val 00:06:01.580 03:06:35 -- accel/accel.sh@20 -- # val=64 00:06:01.580 03:06:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.580 03:06:35 -- accel/accel.sh@19 -- # IFS=: 00:06:01.580 03:06:35 -- accel/accel.sh@19 -- # read -r var val 00:06:01.580 03:06:35 -- accel/accel.sh@20 -- # val=64 00:06:01.580 03:06:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.580 03:06:35 -- accel/accel.sh@19 -- # IFS=: 00:06:01.580 03:06:35 -- accel/accel.sh@19 -- # read -r var val 00:06:01.580 03:06:35 -- accel/accel.sh@20 -- # val=1 00:06:01.580 03:06:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.580 03:06:35 -- accel/accel.sh@19 -- # IFS=: 00:06:01.580 03:06:35 -- accel/accel.sh@19 -- # read -r var val 00:06:01.580 
03:06:35 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:01.580 03:06:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.580 03:06:35 -- accel/accel.sh@19 -- # IFS=: 00:06:01.580 03:06:35 -- accel/accel.sh@19 -- # read -r var val 00:06:01.580 03:06:35 -- accel/accel.sh@20 -- # val=Yes 00:06:01.580 03:06:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.580 03:06:35 -- accel/accel.sh@19 -- # IFS=: 00:06:01.580 03:06:35 -- accel/accel.sh@19 -- # read -r var val 00:06:01.580 03:06:35 -- accel/accel.sh@20 -- # val= 00:06:01.580 03:06:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.580 03:06:35 -- accel/accel.sh@19 -- # IFS=: 00:06:01.580 03:06:35 -- accel/accel.sh@19 -- # read -r var val 00:06:01.580 03:06:35 -- accel/accel.sh@20 -- # val= 00:06:01.580 03:06:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.580 03:06:35 -- accel/accel.sh@19 -- # IFS=: 00:06:01.580 03:06:35 -- accel/accel.sh@19 -- # read -r var val 00:06:02.955 03:06:37 -- accel/accel.sh@20 -- # val= 00:06:02.955 03:06:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.955 03:06:37 -- accel/accel.sh@19 -- # IFS=: 00:06:02.955 03:06:37 -- accel/accel.sh@19 -- # read -r var val 00:06:02.955 03:06:37 -- accel/accel.sh@20 -- # val= 00:06:02.955 03:06:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.955 03:06:37 -- accel/accel.sh@19 -- # IFS=: 00:06:02.955 03:06:37 -- accel/accel.sh@19 -- # read -r var val 00:06:02.955 03:06:37 -- accel/accel.sh@20 -- # val= 00:06:02.955 03:06:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.955 03:06:37 -- accel/accel.sh@19 -- # IFS=: 00:06:02.955 03:06:37 -- accel/accel.sh@19 -- # read -r var val 00:06:02.955 03:06:37 -- accel/accel.sh@20 -- # val= 00:06:02.955 03:06:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.955 03:06:37 -- accel/accel.sh@19 -- # IFS=: 00:06:02.955 03:06:37 -- accel/accel.sh@19 -- # read -r var val 00:06:02.955 03:06:37 -- accel/accel.sh@20 -- # val= 00:06:02.955 03:06:37 -- accel/accel.sh@21 -- # case "$var" in 
00:06:02.955 03:06:37 -- accel/accel.sh@19 -- # IFS=: 00:06:02.955 03:06:37 -- accel/accel.sh@19 -- # read -r var val 00:06:02.955 03:06:37 -- accel/accel.sh@20 -- # val= 00:06:02.955 03:06:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.955 03:06:37 -- accel/accel.sh@19 -- # IFS=: 00:06:02.955 03:06:37 -- accel/accel.sh@19 -- # read -r var val 00:06:02.955 03:06:37 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:02.955 03:06:37 -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:02.955 03:06:37 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:02.955 00:06:02.955 real 0m1.460s 00:06:02.955 user 0m1.327s 00:06:02.955 sys 0m0.135s 00:06:02.955 03:06:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:02.955 03:06:37 -- common/autotest_common.sh@10 -- # set +x 00:06:02.955 ************************************ 00:06:02.955 END TEST accel_fill 00:06:02.955 ************************************ 00:06:02.955 03:06:37 -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:02.955 03:06:37 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:02.955 03:06:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:02.955 03:06:37 -- common/autotest_common.sh@10 -- # set +x 00:06:02.955 ************************************ 00:06:02.955 START TEST accel_copy_crc32c 00:06:02.955 ************************************ 00:06:02.955 03:06:37 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y 00:06:02.955 03:06:37 -- accel/accel.sh@16 -- # local accel_opc 00:06:02.955 03:06:37 -- accel/accel.sh@17 -- # local accel_module 00:06:02.955 03:06:37 -- accel/accel.sh@19 -- # IFS=: 00:06:02.955 03:06:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:02.955 03:06:37 -- accel/accel.sh@19 -- # read -r var val 00:06:02.955 03:06:37 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 
00:06:02.955 03:06:37 -- accel/accel.sh@12 -- # build_accel_config 00:06:02.955 03:06:37 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:02.955 03:06:37 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:02.955 03:06:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:02.955 03:06:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:02.955 03:06:37 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:02.955 03:06:37 -- accel/accel.sh@40 -- # local IFS=, 00:06:02.955 03:06:37 -- accel/accel.sh@41 -- # jq -r . 00:06:02.955 [2024-04-25 03:06:37.187353] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:06:02.955 [2024-04-25 03:06:37.187418] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1385575 ] 00:06:02.955 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.955 [2024-04-25 03:06:37.249484] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.955 [2024-04-25 03:06:37.366536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.955 03:06:37 -- accel/accel.sh@20 -- # val= 00:06:02.955 03:06:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.955 03:06:37 -- accel/accel.sh@19 -- # IFS=: 00:06:02.955 03:06:37 -- accel/accel.sh@19 -- # read -r var val 00:06:02.955 03:06:37 -- accel/accel.sh@20 -- # val= 00:06:02.955 03:06:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.955 03:06:37 -- accel/accel.sh@19 -- # IFS=: 00:06:02.955 03:06:37 -- accel/accel.sh@19 -- # read -r var val 00:06:02.955 03:06:37 -- accel/accel.sh@20 -- # val=0x1 00:06:02.955 03:06:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.955 03:06:37 -- accel/accel.sh@19 -- # IFS=: 00:06:02.955 03:06:37 -- accel/accel.sh@19 -- # read -r var val 00:06:02.955 03:06:37 -- accel/accel.sh@20 -- # val= 00:06:02.955 03:06:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.955 
03:06:37 -- accel/accel.sh@19 -- # IFS=: 00:06:02.955 03:06:37 -- accel/accel.sh@19 -- # read -r var val 00:06:02.955 03:06:37 -- accel/accel.sh@20 -- # val= 00:06:02.955 03:06:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.955 03:06:37 -- accel/accel.sh@19 -- # IFS=: 00:06:02.955 03:06:37 -- accel/accel.sh@19 -- # read -r var val 00:06:02.955 03:06:37 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:02.955 03:06:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.955 03:06:37 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:02.955 03:06:37 -- accel/accel.sh@19 -- # IFS=: 00:06:02.955 03:06:37 -- accel/accel.sh@19 -- # read -r var val 00:06:02.955 03:06:37 -- accel/accel.sh@20 -- # val=0 00:06:02.955 03:06:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.955 03:06:37 -- accel/accel.sh@19 -- # IFS=: 00:06:02.955 03:06:37 -- accel/accel.sh@19 -- # read -r var val 00:06:02.955 03:06:37 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:02.955 03:06:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.955 03:06:37 -- accel/accel.sh@19 -- # IFS=: 00:06:02.955 03:06:37 -- accel/accel.sh@19 -- # read -r var val 00:06:02.955 03:06:37 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:02.955 03:06:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.955 03:06:37 -- accel/accel.sh@19 -- # IFS=: 00:06:02.955 03:06:37 -- accel/accel.sh@19 -- # read -r var val 00:06:02.955 03:06:37 -- accel/accel.sh@20 -- # val= 00:06:02.955 03:06:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.955 03:06:37 -- accel/accel.sh@19 -- # IFS=: 00:06:02.955 03:06:37 -- accel/accel.sh@19 -- # read -r var val 00:06:02.955 03:06:37 -- accel/accel.sh@20 -- # val=software 00:06:02.955 03:06:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.955 03:06:37 -- accel/accel.sh@22 -- # accel_module=software 00:06:02.955 03:06:37 -- accel/accel.sh@19 -- # IFS=: 00:06:02.955 03:06:37 -- accel/accel.sh@19 -- # read -r var val 00:06:02.955 03:06:37 -- accel/accel.sh@20 -- # val=32 00:06:02.955 
03:06:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.955 03:06:37 -- accel/accel.sh@19 -- # IFS=: 00:06:02.955 03:06:37 -- accel/accel.sh@19 -- # read -r var val 00:06:02.955 03:06:37 -- accel/accel.sh@20 -- # val=32 00:06:02.955 03:06:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.955 03:06:37 -- accel/accel.sh@19 -- # IFS=: 00:06:02.955 03:06:37 -- accel/accel.sh@19 -- # read -r var val 00:06:02.955 03:06:37 -- accel/accel.sh@20 -- # val=1 00:06:02.955 03:06:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.955 03:06:37 -- accel/accel.sh@19 -- # IFS=: 00:06:02.956 03:06:37 -- accel/accel.sh@19 -- # read -r var val 00:06:02.956 03:06:37 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:02.956 03:06:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.956 03:06:37 -- accel/accel.sh@19 -- # IFS=: 00:06:02.956 03:06:37 -- accel/accel.sh@19 -- # read -r var val 00:06:02.956 03:06:37 -- accel/accel.sh@20 -- # val=Yes 00:06:02.956 03:06:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.956 03:06:37 -- accel/accel.sh@19 -- # IFS=: 00:06:02.956 03:06:37 -- accel/accel.sh@19 -- # read -r var val 00:06:02.956 03:06:37 -- accel/accel.sh@20 -- # val= 00:06:02.956 03:06:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.956 03:06:37 -- accel/accel.sh@19 -- # IFS=: 00:06:02.956 03:06:37 -- accel/accel.sh@19 -- # read -r var val 00:06:02.956 03:06:37 -- accel/accel.sh@20 -- # val= 00:06:02.956 03:06:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.956 03:06:37 -- accel/accel.sh@19 -- # IFS=: 00:06:02.956 03:06:37 -- accel/accel.sh@19 -- # read -r var val 00:06:04.329 03:06:38 -- accel/accel.sh@20 -- # val= 00:06:04.329 03:06:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.329 03:06:38 -- accel/accel.sh@19 -- # IFS=: 00:06:04.329 03:06:38 -- accel/accel.sh@19 -- # read -r var val 00:06:04.329 03:06:38 -- accel/accel.sh@20 -- # val= 00:06:04.329 03:06:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.329 03:06:38 -- accel/accel.sh@19 -- # IFS=: 
00:06:04.329 03:06:38 -- accel/accel.sh@19 -- # read -r var val 00:06:04.329 03:06:38 -- accel/accel.sh@20 -- # val= 00:06:04.329 03:06:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.329 03:06:38 -- accel/accel.sh@19 -- # IFS=: 00:06:04.329 03:06:38 -- accel/accel.sh@19 -- # read -r var val 00:06:04.329 03:06:38 -- accel/accel.sh@20 -- # val= 00:06:04.329 03:06:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.329 03:06:38 -- accel/accel.sh@19 -- # IFS=: 00:06:04.329 03:06:38 -- accel/accel.sh@19 -- # read -r var val 00:06:04.329 03:06:38 -- accel/accel.sh@20 -- # val= 00:06:04.329 03:06:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.329 03:06:38 -- accel/accel.sh@19 -- # IFS=: 00:06:04.329 03:06:38 -- accel/accel.sh@19 -- # read -r var val 00:06:04.329 03:06:38 -- accel/accel.sh@20 -- # val= 00:06:04.329 03:06:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.329 03:06:38 -- accel/accel.sh@19 -- # IFS=: 00:06:04.329 03:06:38 -- accel/accel.sh@19 -- # read -r var val 00:06:04.329 03:06:38 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:04.329 03:06:38 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:04.329 03:06:38 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:04.329 00:06:04.329 real 0m1.469s 00:06:04.329 user 0m1.331s 00:06:04.329 sys 0m0.140s 00:06:04.329 03:06:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:04.329 03:06:38 -- common/autotest_common.sh@10 -- # set +x 00:06:04.329 ************************************ 00:06:04.329 END TEST accel_copy_crc32c 00:06:04.329 ************************************ 00:06:04.329 03:06:38 -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:04.329 03:06:38 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:04.329 03:06:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:04.329 03:06:38 -- common/autotest_common.sh@10 -- # set +x 00:06:04.329 ************************************ 00:06:04.329 START 
TEST accel_copy_crc32c_C2 00:06:04.329 ************************************ 00:06:04.329 03:06:38 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:04.329 03:06:38 -- accel/accel.sh@16 -- # local accel_opc 00:06:04.329 03:06:38 -- accel/accel.sh@17 -- # local accel_module 00:06:04.329 03:06:38 -- accel/accel.sh@19 -- # IFS=: 00:06:04.329 03:06:38 -- accel/accel.sh@19 -- # read -r var val 00:06:04.329 03:06:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:04.329 03:06:38 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:04.329 03:06:38 -- accel/accel.sh@12 -- # build_accel_config 00:06:04.329 03:06:38 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:04.329 03:06:38 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:04.329 03:06:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:04.329 03:06:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:04.329 03:06:38 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:04.329 03:06:38 -- accel/accel.sh@40 -- # local IFS=, 00:06:04.329 03:06:38 -- accel/accel.sh@41 -- # jq -r . 00:06:04.329 [2024-04-25 03:06:38.774660] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 
00:06:04.329 [2024-04-25 03:06:38.774722] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1385748 ]
00:06:04.329 EAL: No free 2048 kB hugepages reported on node 1
00:06:04.587 [2024-04-25 03:06:38.836737] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:04.587 [2024-04-25 03:06:38.954966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:04.587 03:06:39 -- accel/accel.sh@20 -- # val=
00:06:04.587 03:06:39 -- accel/accel.sh@21 -- # case "$var" in
00:06:04.587 03:06:39 -- accel/accel.sh@19 -- # IFS=:
00:06:04.587 03:06:39 -- accel/accel.sh@19 -- # read -r var val
00:06:04.587 03:06:39 -- accel/accel.sh@20 -- # val=
00:06:04.587 03:06:39 -- accel/accel.sh@21 -- # case "$var" in
00:06:04.587 03:06:39 -- accel/accel.sh@19 -- # IFS=:
00:06:04.587 03:06:39 -- accel/accel.sh@19 -- # read -r var val
00:06:04.587 03:06:39 -- accel/accel.sh@20 -- # val=0x1
00:06:04.587 03:06:39 -- accel/accel.sh@21 -- # case "$var" in
00:06:04.587 03:06:39 -- accel/accel.sh@19 -- # IFS=:
00:06:04.587 03:06:39 -- accel/accel.sh@19 -- # read -r var val
00:06:04.587 03:06:39 -- accel/accel.sh@20 -- # val=
00:06:04.587 03:06:39 -- accel/accel.sh@21 -- # case "$var" in
00:06:04.587 03:06:39 -- accel/accel.sh@19 -- # IFS=:
00:06:04.587 03:06:39 -- accel/accel.sh@19 -- # read -r var val
00:06:04.587 03:06:39 -- accel/accel.sh@20 -- # val=
00:06:04.587 03:06:39 -- accel/accel.sh@21 -- # case "$var" in
00:06:04.587 03:06:39 -- accel/accel.sh@19 -- # IFS=:
00:06:04.587 03:06:39 -- accel/accel.sh@19 -- # read -r var val
00:06:04.587 03:06:39 -- accel/accel.sh@20 -- # val=copy_crc32c
00:06:04.587 03:06:39 -- accel/accel.sh@21 -- # case "$var" in
00:06:04.587 03:06:39 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c
00:06:04.587 03:06:39 -- accel/accel.sh@19 -- # IFS=:
00:06:04.587 03:06:39 -- accel/accel.sh@19 -- # read -r var val
00:06:04.587 03:06:39 -- accel/accel.sh@20 -- # val=0
00:06:04.587 03:06:39 -- accel/accel.sh@21 -- # case "$var" in
00:06:04.587 03:06:39 -- accel/accel.sh@19 -- # IFS=:
00:06:04.587 03:06:39 -- accel/accel.sh@19 -- # read -r var val
00:06:04.587 03:06:39 -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:04.587 03:06:39 -- accel/accel.sh@21 -- # case "$var" in
00:06:04.587 03:06:39 -- accel/accel.sh@19 -- # IFS=:
00:06:04.587 03:06:39 -- accel/accel.sh@19 -- # read -r var val
00:06:04.587 03:06:39 -- accel/accel.sh@20 -- # val='8192 bytes'
00:06:04.587 03:06:39 -- accel/accel.sh@21 -- # case "$var" in
00:06:04.587 03:06:39 -- accel/accel.sh@19 -- # IFS=:
00:06:04.587 03:06:39 -- accel/accel.sh@19 -- # read -r var val
00:06:04.587 03:06:39 -- accel/accel.sh@20 -- # val=
00:06:04.587 03:06:39 -- accel/accel.sh@21 -- # case "$var" in
00:06:04.587 03:06:39 -- accel/accel.sh@19 -- # IFS=:
00:06:04.587 03:06:39 -- accel/accel.sh@19 -- # read -r var val
00:06:04.587 03:06:39 -- accel/accel.sh@20 -- # val=software
00:06:04.587 03:06:39 -- accel/accel.sh@21 -- # case "$var" in
00:06:04.587 03:06:39 -- accel/accel.sh@22 -- # accel_module=software
00:06:04.587 03:06:39 -- accel/accel.sh@19 -- # IFS=:
00:06:04.587 03:06:39 -- accel/accel.sh@19 -- # read -r var val
00:06:04.587 03:06:39 -- accel/accel.sh@20 -- # val=32
00:06:04.588 03:06:39 -- accel/accel.sh@21 -- # case "$var" in
00:06:04.588 03:06:39 -- accel/accel.sh@19 -- # IFS=:
00:06:04.588 03:06:39 -- accel/accel.sh@19 -- # read -r var val
00:06:04.588 03:06:39 -- accel/accel.sh@20 -- # val=32
00:06:04.588 03:06:39 -- accel/accel.sh@21 -- # case "$var" in
00:06:04.588 03:06:39 -- accel/accel.sh@19 -- # IFS=:
00:06:04.588 03:06:39 -- accel/accel.sh@19 -- # read -r var val
00:06:04.588 03:06:39 -- accel/accel.sh@20 -- # val=1
00:06:04.588 03:06:39 -- accel/accel.sh@21 -- # case "$var" in
00:06:04.588 03:06:39 -- accel/accel.sh@19 -- # IFS=:
00:06:04.588 03:06:39 -- accel/accel.sh@19 -- # read -r var val
00:06:04.588 03:06:39 -- accel/accel.sh@20 -- # val='1 seconds'
00:06:04.588 03:06:39 -- accel/accel.sh@21 -- # case "$var" in
00:06:04.588 03:06:39 -- accel/accel.sh@19 -- # IFS=:
00:06:04.588 03:06:39 -- accel/accel.sh@19 -- # read -r var val
00:06:04.588 03:06:39 -- accel/accel.sh@20 -- # val=Yes
00:06:04.588 03:06:39 -- accel/accel.sh@21 -- # case "$var" in
00:06:04.588 03:06:39 -- accel/accel.sh@19 -- # IFS=:
00:06:04.588 03:06:39 -- accel/accel.sh@19 -- # read -r var val
00:06:04.588 03:06:39 -- accel/accel.sh@20 -- # val=
00:06:04.588 03:06:39 -- accel/accel.sh@21 -- # case "$var" in
00:06:04.588 03:06:39 -- accel/accel.sh@19 -- # IFS=:
00:06:04.588 03:06:39 -- accel/accel.sh@19 -- # read -r var val
00:06:04.588 03:06:39 -- accel/accel.sh@20 -- # val=
00:06:04.588 03:06:39 -- accel/accel.sh@21 -- # case "$var" in
00:06:04.588 03:06:39 -- accel/accel.sh@19 -- # IFS=:
00:06:04.588 03:06:39 -- accel/accel.sh@19 -- # read -r var val
00:06:05.960 03:06:40 -- accel/accel.sh@20 -- # val=
00:06:05.960 03:06:40 -- accel/accel.sh@21 -- # case "$var" in
00:06:05.960 03:06:40 -- accel/accel.sh@19 -- # IFS=:
00:06:05.960 03:06:40 -- accel/accel.sh@19 -- # read -r var val
00:06:05.960 03:06:40 -- accel/accel.sh@20 -- # val=
00:06:05.960 03:06:40 -- accel/accel.sh@21 -- # case "$var" in
00:06:05.960 03:06:40 -- accel/accel.sh@19 -- # IFS=:
00:06:05.960 03:06:40 -- accel/accel.sh@19 -- # read -r var val
00:06:05.960 03:06:40 -- accel/accel.sh@20 -- # val=
00:06:05.960 03:06:40 -- accel/accel.sh@21 -- # case "$var" in
00:06:05.960 03:06:40 -- accel/accel.sh@19 -- # IFS=:
00:06:05.960 03:06:40 -- accel/accel.sh@19 -- # read -r var val
00:06:05.960 03:06:40 -- accel/accel.sh@20 -- # val=
00:06:05.960 03:06:40 -- accel/accel.sh@21 -- # case "$var" in
00:06:05.960 03:06:40 -- accel/accel.sh@19 -- # IFS=:
00:06:05.960 03:06:40 -- accel/accel.sh@19 -- # read -r var val
00:06:05.960 03:06:40 -- accel/accel.sh@20 -- # val=
00:06:05.960 03:06:40 -- accel/accel.sh@21 -- # case "$var" in
00:06:05.960 03:06:40 -- accel/accel.sh@19 -- # IFS=:
00:06:05.960 03:06:40 -- accel/accel.sh@19 -- # read -r var val
00:06:05.960 03:06:40 -- accel/accel.sh@20 -- # val=
00:06:05.960 03:06:40 -- accel/accel.sh@21 -- # case "$var" in
00:06:05.960 03:06:40 -- accel/accel.sh@19 -- # IFS=:
00:06:05.960 03:06:40 -- accel/accel.sh@19 -- # read -r var val
00:06:05.960 03:06:40 -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:05.960 03:06:40 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:06:05.960 03:06:40 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:05.960
00:06:05.960 real 0m1.476s
00:06:05.960 user 0m1.336s
00:06:05.960 sys 0m0.141s
00:06:05.960 03:06:40 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:06:05.960 03:06:40 -- common/autotest_common.sh@10 -- # set +x
00:06:05.961 ************************************
00:06:05.961 END TEST accel_copy_crc32c_C2
00:06:05.961 ************************************
00:06:05.961 03:06:40 -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y
00:06:05.961 03:06:40 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']'
00:06:05.961 03:06:40 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:05.961 03:06:40 -- common/autotest_common.sh@10 -- # set +x
00:06:05.961 ************************************
00:06:05.961 START TEST accel_dualcast
00:06:05.961 ************************************
00:06:05.961 03:06:40 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dualcast -y
00:06:05.961 03:06:40 -- accel/accel.sh@16 -- # local accel_opc
00:06:05.961 03:06:40 -- accel/accel.sh@17 -- # local accel_module
00:06:05.961 03:06:40 -- accel/accel.sh@19 -- # IFS=:
00:06:05.961 03:06:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y
00:06:05.961 03:06:40 -- accel/accel.sh@19 -- # read -r var val
00:06:05.961 03:06:40 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y
00:06:05.961 03:06:40 -- accel/accel.sh@12 -- # build_accel_config
00:06:05.961 03:06:40 -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:05.961 03:06:40 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:05.961 03:06:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:05.961 03:06:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:05.961 03:06:40 -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:05.961 03:06:40 -- accel/accel.sh@40 -- # local IFS=,
00:06:05.961 03:06:40 -- accel/accel.sh@41 -- # jq -r .
00:06:05.961 [2024-04-25 03:06:40.366503] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization...
00:06:05.961 [2024-04-25 03:06:40.366568] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1386021 ]
00:06:05.961 EAL: No free 2048 kB hugepages reported on node 1
00:06:06.219 [2024-04-25 03:06:40.428829] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:06.219 [2024-04-25 03:06:40.546970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:06.219 03:06:40 -- accel/accel.sh@20 -- # val=
00:06:06.219 03:06:40 -- accel/accel.sh@21 -- # case "$var" in
00:06:06.219 03:06:40 -- accel/accel.sh@19 -- # IFS=:
00:06:06.219 03:06:40 -- accel/accel.sh@19 -- # read -r var val
00:06:06.219 03:06:40 -- accel/accel.sh@20 -- # val=
00:06:06.219 03:06:40 -- accel/accel.sh@21 -- # case "$var" in
00:06:06.219 03:06:40 -- accel/accel.sh@19 -- # IFS=:
00:06:06.219 03:06:40 -- accel/accel.sh@19 -- # read -r var val
00:06:06.219 03:06:40 -- accel/accel.sh@20 -- # val=0x1
00:06:06.219 03:06:40 -- accel/accel.sh@21 -- # case "$var" in
00:06:06.219 03:06:40 -- accel/accel.sh@19 -- # IFS=:
00:06:06.219 03:06:40 -- accel/accel.sh@19 -- # read -r var val
00:06:06.219 03:06:40 -- accel/accel.sh@20 -- # val=
00:06:06.219 03:06:40 -- accel/accel.sh@21 -- # case "$var" in
00:06:06.219 03:06:40 -- accel/accel.sh@19 -- # IFS=:
00:06:06.219 03:06:40 -- accel/accel.sh@19 -- # read -r var val
00:06:06.219 03:06:40 -- accel/accel.sh@20 -- # val=
00:06:06.219 03:06:40 -- accel/accel.sh@21 -- # case "$var" in
00:06:06.219 03:06:40 -- accel/accel.sh@19 -- # IFS=:
00:06:06.219 03:06:40 -- accel/accel.sh@19 -- # read -r var val
00:06:06.219 03:06:40 -- accel/accel.sh@20 -- # val=dualcast
00:06:06.219 03:06:40 -- accel/accel.sh@21 -- # case "$var" in
00:06:06.219 03:06:40 -- accel/accel.sh@23 -- # accel_opc=dualcast
00:06:06.219 03:06:40 -- accel/accel.sh@19 -- # IFS=:
00:06:06.219 03:06:40 -- accel/accel.sh@19 -- # read -r var val
00:06:06.219 03:06:40 -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:06.219 03:06:40 -- accel/accel.sh@21 -- # case "$var" in
00:06:06.219 03:06:40 -- accel/accel.sh@19 -- # IFS=:
00:06:06.219 03:06:40 -- accel/accel.sh@19 -- # read -r var val
00:06:06.219 03:06:40 -- accel/accel.sh@20 -- # val=
00:06:06.219 03:06:40 -- accel/accel.sh@21 -- # case "$var" in
00:06:06.219 03:06:40 -- accel/accel.sh@19 -- # IFS=:
00:06:06.219 03:06:40 -- accel/accel.sh@19 -- # read -r var val
00:06:06.219 03:06:40 -- accel/accel.sh@20 -- # val=software
00:06:06.219 03:06:40 -- accel/accel.sh@21 -- # case "$var" in
00:06:06.219 03:06:40 -- accel/accel.sh@22 -- # accel_module=software
00:06:06.219 03:06:40 -- accel/accel.sh@19 -- # IFS=:
00:06:06.219 03:06:40 -- accel/accel.sh@19 -- # read -r var val
00:06:06.219 03:06:40 -- accel/accel.sh@20 -- # val=32
00:06:06.219 03:06:40 -- accel/accel.sh@21 -- # case "$var" in
00:06:06.219 03:06:40 -- accel/accel.sh@19 -- # IFS=:
00:06:06.219 03:06:40 -- accel/accel.sh@19 -- # read -r var val
00:06:06.219 03:06:40 -- accel/accel.sh@20 -- # val=32
00:06:06.219 03:06:40 -- accel/accel.sh@21 -- # case "$var" in
00:06:06.219 03:06:40 -- accel/accel.sh@19 -- # IFS=:
00:06:06.219 03:06:40 -- accel/accel.sh@19 -- # read -r var val
00:06:06.219 03:06:40 -- accel/accel.sh@20 -- # val=1
00:06:06.219 03:06:40 -- accel/accel.sh@21 -- # case "$var" in
00:06:06.219 03:06:40 -- accel/accel.sh@19 -- # IFS=:
00:06:06.219 03:06:40 -- accel/accel.sh@19 -- # read -r var val
00:06:06.219 03:06:40 -- accel/accel.sh@20 -- # val='1 seconds'
00:06:06.219 03:06:40 -- accel/accel.sh@21 -- # case "$var" in
00:06:06.219 03:06:40 -- accel/accel.sh@19 -- # IFS=:
00:06:06.219 03:06:40 -- accel/accel.sh@19 -- # read -r var val
00:06:06.219 03:06:40 -- accel/accel.sh@20 -- # val=Yes
00:06:06.219 03:06:40 -- accel/accel.sh@21 -- # case "$var" in
00:06:06.219 03:06:40 -- accel/accel.sh@19 -- # IFS=:
00:06:06.219 03:06:40 -- accel/accel.sh@19 -- # read -r var val
00:06:06.219 03:06:40 -- accel/accel.sh@20 -- # val=
00:06:06.219 03:06:40 -- accel/accel.sh@21 -- # case "$var" in
00:06:06.219 03:06:40 -- accel/accel.sh@19 -- # IFS=:
00:06:06.219 03:06:40 -- accel/accel.sh@19 -- # read -r var val
00:06:06.219 03:06:40 -- accel/accel.sh@20 -- # val=
00:06:06.219 03:06:40 -- accel/accel.sh@21 -- # case "$var" in
00:06:06.219 03:06:40 -- accel/accel.sh@19 -- # IFS=:
00:06:06.219 03:06:40 -- accel/accel.sh@19 -- # read -r var val
00:06:07.593 03:06:41 -- accel/accel.sh@20 -- # val=
00:06:07.593 03:06:41 -- accel/accel.sh@21 -- # case "$var" in
00:06:07.593 03:06:41 -- accel/accel.sh@19 -- # IFS=:
00:06:07.593 03:06:41 -- accel/accel.sh@19 -- # read -r var val
00:06:07.593 03:06:41 -- accel/accel.sh@20 -- # val=
00:06:07.593 03:06:41 -- accel/accel.sh@21 -- # case "$var" in
00:06:07.593 03:06:41 -- accel/accel.sh@19 -- # IFS=:
00:06:07.593 03:06:41 -- accel/accel.sh@19 -- # read -r var val
00:06:07.593 03:06:41 -- accel/accel.sh@20 -- # val=
00:06:07.593 03:06:41 -- accel/accel.sh@21 -- # case "$var" in
00:06:07.593 03:06:41 -- accel/accel.sh@19 -- # IFS=:
00:06:07.593 03:06:41 -- accel/accel.sh@19 -- # read -r var val
00:06:07.593 03:06:41 -- accel/accel.sh@20 -- # val=
00:06:07.593 03:06:41 -- accel/accel.sh@21 -- # case "$var" in
00:06:07.593 03:06:41 -- accel/accel.sh@19 -- # IFS=:
00:06:07.593 03:06:41 -- accel/accel.sh@19 -- # read -r var val
00:06:07.593 03:06:41 -- accel/accel.sh@20 -- # val=
00:06:07.593 03:06:41 -- accel/accel.sh@21 -- # case "$var" in
00:06:07.593 03:06:41 -- accel/accel.sh@19 -- # IFS=:
00:06:07.593 03:06:41 -- accel/accel.sh@19 -- # read -r var val
00:06:07.593 03:06:41 -- accel/accel.sh@20 -- # val=
00:06:07.593 03:06:41 -- accel/accel.sh@21 -- # case "$var" in
00:06:07.593 03:06:41 -- accel/accel.sh@19 -- # IFS=:
00:06:07.593 03:06:41 -- accel/accel.sh@19 -- # read -r var val
00:06:07.593 03:06:41 -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:07.593 03:06:41 -- accel/accel.sh@27 -- # [[ -n dualcast ]]
00:06:07.593 03:06:41 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:07.593
00:06:07.593 real 0m1.475s
00:06:07.593 user 0m1.336s
00:06:07.593 sys 0m0.140s
00:06:07.593 03:06:41 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:06:07.593 03:06:41 -- common/autotest_common.sh@10 -- # set +x
00:06:07.593 ************************************
00:06:07.593 END TEST accel_dualcast
00:06:07.593 ************************************
00:06:07.593 03:06:41 -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y
00:06:07.593 03:06:41 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']'
00:06:07.593 03:06:41 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:07.593 03:06:41 -- common/autotest_common.sh@10 -- # set +x
00:06:07.593 ************************************
00:06:07.593 START TEST accel_compare
00:06:07.593 ************************************
00:06:07.593 03:06:41 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compare -y
00:06:07.593 03:06:41 -- accel/accel.sh@16 -- # local accel_opc
00:06:07.593 03:06:41 -- accel/accel.sh@17 -- # local accel_module
00:06:07.593 03:06:41 -- accel/accel.sh@19 -- # IFS=:
00:06:07.593 03:06:41 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y
00:06:07.593 03:06:41 -- accel/accel.sh@19 -- # read -r var val
00:06:07.593 03:06:41 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y
00:06:07.593 03:06:41 -- accel/accel.sh@12 -- # build_accel_config
00:06:07.593 03:06:41 -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:07.593 03:06:41 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:07.593 03:06:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:07.593 03:06:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:07.593 03:06:41 -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:07.593 03:06:41 -- accel/accel.sh@40 -- # local IFS=,
00:06:07.593 03:06:41 -- accel/accel.sh@41 -- # jq -r .
00:06:07.593 [2024-04-25 03:06:41.957855] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization...
00:06:07.593 [2024-04-25 03:06:41.957917] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1386179 ]
00:06:07.593 EAL: No free 2048 kB hugepages reported on node 1
00:06:07.889 [2024-04-25 03:06:42.019530] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:07.889 [2024-04-25 03:06:42.141492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:07.889 03:06:42 -- accel/accel.sh@20 -- # val=
00:06:07.889 03:06:42 -- accel/accel.sh@21 -- # case "$var" in
00:06:07.889 03:06:42 -- accel/accel.sh@19 -- # IFS=:
00:06:07.889 03:06:42 -- accel/accel.sh@19 -- # read -r var val
00:06:07.889 03:06:42 -- accel/accel.sh@20 -- # val=
00:06:07.889 03:06:42 -- accel/accel.sh@21 -- # case "$var" in
00:06:07.889 03:06:42 -- accel/accel.sh@19 -- # IFS=:
00:06:07.889 03:06:42 -- accel/accel.sh@19 -- # read -r var val
00:06:07.889 03:06:42 -- accel/accel.sh@20 -- # val=0x1
00:06:07.889 03:06:42 -- accel/accel.sh@21 -- # case "$var" in
00:06:07.889 03:06:42 -- accel/accel.sh@19 -- # IFS=:
00:06:07.889 03:06:42 -- accel/accel.sh@19 -- # read -r var val
00:06:07.889 03:06:42 -- accel/accel.sh@20 -- # val=
00:06:07.889 03:06:42 -- accel/accel.sh@21 -- # case "$var" in
00:06:07.889 03:06:42 -- accel/accel.sh@19 -- # IFS=:
00:06:07.889 03:06:42 -- accel/accel.sh@19 -- # read -r var val
00:06:07.889 03:06:42 -- accel/accel.sh@20 -- # val=
00:06:07.889 03:06:42 -- accel/accel.sh@21 -- # case "$var" in
00:06:07.889 03:06:42 -- accel/accel.sh@19 -- # IFS=:
00:06:07.889 03:06:42 -- accel/accel.sh@19 -- # read -r var val
00:06:07.889 03:06:42 -- accel/accel.sh@20 -- # val=compare
00:06:07.889 03:06:42 -- accel/accel.sh@21 -- # case "$var" in
00:06:07.889 03:06:42 -- accel/accel.sh@23 -- # accel_opc=compare
00:06:07.889 03:06:42 -- accel/accel.sh@19 -- # IFS=:
00:06:07.889 03:06:42 -- accel/accel.sh@19 -- # read -r var val
00:06:07.889 03:06:42 -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:07.889 03:06:42 -- accel/accel.sh@21 -- # case "$var" in
00:06:07.889 03:06:42 -- accel/accel.sh@19 -- # IFS=:
00:06:07.889 03:06:42 -- accel/accel.sh@19 -- # read -r var val
00:06:07.889 03:06:42 -- accel/accel.sh@20 -- # val=
00:06:07.889 03:06:42 -- accel/accel.sh@21 -- # case "$var" in
00:06:07.889 03:06:42 -- accel/accel.sh@19 -- # IFS=:
00:06:07.889 03:06:42 -- accel/accel.sh@19 -- # read -r var val
00:06:07.889 03:06:42 -- accel/accel.sh@20 -- # val=software
00:06:07.889 03:06:42 -- accel/accel.sh@21 -- # case "$var" in
00:06:07.889 03:06:42 -- accel/accel.sh@22 -- # accel_module=software
00:06:07.889 03:06:42 -- accel/accel.sh@19 -- # IFS=:
00:06:07.889 03:06:42 -- accel/accel.sh@19 -- # read -r var val
00:06:07.889 03:06:42 -- accel/accel.sh@20 -- # val=32
00:06:07.889 03:06:42 -- accel/accel.sh@21 -- # case "$var" in
00:06:07.889 03:06:42 -- accel/accel.sh@19 -- # IFS=:
00:06:07.889 03:06:42 -- accel/accel.sh@19 -- # read -r var val
00:06:07.889 03:06:42 -- accel/accel.sh@20 -- # val=32
00:06:07.889 03:06:42 -- accel/accel.sh@21 -- # case "$var" in
00:06:07.889 03:06:42 -- accel/accel.sh@19 -- # IFS=:
00:06:07.889 03:06:42 -- accel/accel.sh@19 -- # read -r var val
00:06:07.889 03:06:42 -- accel/accel.sh@20 -- # val=1
00:06:07.889 03:06:42 -- accel/accel.sh@21 -- # case "$var" in
00:06:07.889 03:06:42 -- accel/accel.sh@19 -- # IFS=:
00:06:07.889 03:06:42 -- accel/accel.sh@19 -- # read -r var val
00:06:07.889 03:06:42 -- accel/accel.sh@20 -- # val='1 seconds'
00:06:07.889 03:06:42 -- accel/accel.sh@21 -- # case "$var" in
00:06:07.889 03:06:42 -- accel/accel.sh@19 -- # IFS=:
00:06:07.889 03:06:42 -- accel/accel.sh@19 -- # read -r var val
00:06:07.889 03:06:42 -- accel/accel.sh@20 -- # val=Yes
00:06:07.889 03:06:42 -- accel/accel.sh@21 -- # case "$var" in
00:06:07.889 03:06:42 -- accel/accel.sh@19 -- # IFS=:
00:06:07.889 03:06:42 -- accel/accel.sh@19 -- # read -r var val
00:06:07.889 03:06:42 -- accel/accel.sh@20 -- # val=
00:06:07.889 03:06:42 -- accel/accel.sh@21 -- # case "$var" in
00:06:07.889 03:06:42 -- accel/accel.sh@19 -- # IFS=:
00:06:07.889 03:06:42 -- accel/accel.sh@19 -- # read -r var val
00:06:07.889 03:06:42 -- accel/accel.sh@20 -- # val=
00:06:07.889 03:06:42 -- accel/accel.sh@21 -- # case "$var" in
00:06:07.889 03:06:42 -- accel/accel.sh@19 -- # IFS=:
00:06:07.889 03:06:42 -- accel/accel.sh@19 -- # read -r var val
00:06:09.261 03:06:43 -- accel/accel.sh@20 -- # val=
00:06:09.261 03:06:43 -- accel/accel.sh@21 -- # case "$var" in
00:06:09.261 03:06:43 -- accel/accel.sh@19 -- # IFS=:
00:06:09.261 03:06:43 -- accel/accel.sh@19 -- # read -r var val
00:06:09.261 03:06:43 -- accel/accel.sh@20 -- # val=
00:06:09.261 03:06:43 -- accel/accel.sh@21 -- # case "$var" in
00:06:09.261 03:06:43 -- accel/accel.sh@19 -- # IFS=:
00:06:09.261 03:06:43 -- accel/accel.sh@19 -- # read -r var val
00:06:09.261 03:06:43 -- accel/accel.sh@20 -- # val=
00:06:09.261 03:06:43 -- accel/accel.sh@21 -- # case "$var" in
00:06:09.261 03:06:43 -- accel/accel.sh@19 -- # IFS=:
00:06:09.261 03:06:43 -- accel/accel.sh@19 -- # read -r var val
00:06:09.261 03:06:43 -- accel/accel.sh@20 -- # val=
00:06:09.261 03:06:43 -- accel/accel.sh@21 -- # case "$var" in
00:06:09.261 03:06:43 -- accel/accel.sh@19 -- # IFS=:
00:06:09.261 03:06:43 -- accel/accel.sh@19 -- # read -r var val
00:06:09.261 03:06:43 -- accel/accel.sh@20 -- # val=
00:06:09.261 03:06:43 -- accel/accel.sh@21 -- # case "$var" in
00:06:09.261 03:06:43 -- accel/accel.sh@19 -- # IFS=:
00:06:09.261 03:06:43 -- accel/accel.sh@19 -- # read -r var val
00:06:09.261 03:06:43 -- accel/accel.sh@20 -- # val=
00:06:09.261 03:06:43 -- accel/accel.sh@21 -- # case "$var" in
00:06:09.261 03:06:43 -- accel/accel.sh@19 -- # IFS=:
00:06:09.261 03:06:43 -- accel/accel.sh@19 -- # read -r var val
00:06:09.261 03:06:43 -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:09.261 03:06:43 -- accel/accel.sh@27 -- # [[ -n compare ]]
00:06:09.261 03:06:43 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:09.261
00:06:09.261 real 0m1.480s
00:06:09.261 user 0m1.343s
00:06:09.261 sys 0m0.138s
00:06:09.261 03:06:43 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:06:09.261 03:06:43 -- common/autotest_common.sh@10 -- # set +x
00:06:09.261 ************************************
00:06:09.261 END TEST accel_compare
00:06:09.261 ************************************
00:06:09.261 03:06:43 -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y
00:06:09.261 03:06:43 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']'
00:06:09.261 03:06:43 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:09.261 03:06:43 -- common/autotest_common.sh@10 -- # set +x
00:06:09.261 ************************************
00:06:09.261 START TEST accel_xor
00:06:09.261 ************************************
00:06:09.261 03:06:43 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y
00:06:09.261 03:06:43 -- accel/accel.sh@16 -- # local accel_opc
00:06:09.261 03:06:43 -- accel/accel.sh@17 -- # local accel_module
00:06:09.261 03:06:43 -- accel/accel.sh@19 -- # IFS=:
00:06:09.261 03:06:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y
00:06:09.261 03:06:43 -- accel/accel.sh@19 -- # read -r var val
00:06:09.261 03:06:43 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y
00:06:09.261 03:06:43 -- accel/accel.sh@12 -- # build_accel_config
00:06:09.261 03:06:43 -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:09.261 03:06:43 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:09.261 03:06:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:09.261 03:06:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:09.261 03:06:43 -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:09.261 03:06:43 -- accel/accel.sh@40 -- # local IFS=,
00:06:09.261 03:06:43 -- accel/accel.sh@41 -- # jq -r .
00:06:09.261 [2024-04-25 03:06:43.561295] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization...
00:06:09.261 [2024-04-25 03:06:43.561357] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1386461 ]
00:06:09.261 EAL: No free 2048 kB hugepages reported on node 1
00:06:09.261 [2024-04-25 03:06:43.623650] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:09.261 [2024-04-25 03:06:43.744824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:09.520 03:06:43 -- accel/accel.sh@20 -- # val=
00:06:09.520 03:06:43 -- accel/accel.sh@21 -- # case "$var" in
00:06:09.520 03:06:43 -- accel/accel.sh@19 -- # IFS=:
00:06:09.520 03:06:43 -- accel/accel.sh@19 -- # read -r var val
00:06:09.520 03:06:43 -- accel/accel.sh@20 -- # val=
00:06:09.520 03:06:43 -- accel/accel.sh@21 -- # case "$var" in
00:06:09.520 03:06:43 -- accel/accel.sh@19 -- # IFS=:
00:06:09.520 03:06:43 -- accel/accel.sh@19 -- # read -r var val
00:06:09.520 03:06:43 -- accel/accel.sh@20 -- # val=0x1
00:06:09.520 03:06:43 -- accel/accel.sh@21 -- # case "$var" in
00:06:09.520 03:06:43 -- accel/accel.sh@19 -- # IFS=:
00:06:09.520 03:06:43 -- accel/accel.sh@19 -- # read -r var val
00:06:09.520 03:06:43 -- accel/accel.sh@20 -- # val=
00:06:09.520 03:06:43 -- accel/accel.sh@21 -- # case "$var" in
00:06:09.520 03:06:43 -- accel/accel.sh@19 -- # IFS=:
00:06:09.520 03:06:43 -- accel/accel.sh@19 -- # read -r var val
00:06:09.520 03:06:43 -- accel/accel.sh@20 -- # val=
00:06:09.520 03:06:43 -- accel/accel.sh@21 -- # case "$var" in
00:06:09.520 03:06:43 -- accel/accel.sh@19 -- # IFS=:
00:06:09.520 03:06:43 -- accel/accel.sh@19 -- # read -r var val
00:06:09.520 03:06:43 -- accel/accel.sh@20 -- # val=xor
00:06:09.520 03:06:43 -- accel/accel.sh@21 -- # case "$var" in
00:06:09.520 03:06:43 -- accel/accel.sh@23 -- # accel_opc=xor
00:06:09.520 03:06:43 -- accel/accel.sh@19 -- # IFS=:
00:06:09.520 03:06:43 -- accel/accel.sh@19 -- # read -r var val
00:06:09.520 03:06:43 -- accel/accel.sh@20 -- # val=2
00:06:09.520 03:06:43 -- accel/accel.sh@21 -- # case "$var" in
00:06:09.520 03:06:43 -- accel/accel.sh@19 -- # IFS=:
00:06:09.520 03:06:43 -- accel/accel.sh@19 -- # read -r var val
00:06:09.520 03:06:43 -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:09.520 03:06:43 -- accel/accel.sh@21 -- # case "$var" in
00:06:09.520 03:06:43 -- accel/accel.sh@19 -- # IFS=:
00:06:09.520 03:06:43 -- accel/accel.sh@19 -- # read -r var val
00:06:09.520 03:06:43 -- accel/accel.sh@20 -- # val=
00:06:09.520 03:06:43 -- accel/accel.sh@21 -- # case "$var" in
00:06:09.520 03:06:43 -- accel/accel.sh@19 -- # IFS=:
00:06:09.520 03:06:43 -- accel/accel.sh@19 -- # read -r var val
00:06:09.520 03:06:43 -- accel/accel.sh@20 -- # val=software
00:06:09.520 03:06:43 -- accel/accel.sh@21 -- # case "$var" in
00:06:09.520 03:06:43 -- accel/accel.sh@22 -- # accel_module=software
00:06:09.520 03:06:43 -- accel/accel.sh@19 -- # IFS=:
00:06:09.520 03:06:43 -- accel/accel.sh@19 -- # read -r var val
00:06:09.520 03:06:43 -- accel/accel.sh@20 -- # val=32
00:06:09.520 03:06:43 -- accel/accel.sh@21 -- # case "$var" in
00:06:09.520 03:06:43 -- accel/accel.sh@19 -- # IFS=:
00:06:09.520 03:06:43 -- accel/accel.sh@19 -- # read -r var val
00:06:09.520 03:06:43 -- accel/accel.sh@20 -- # val=32
00:06:09.520 03:06:43 -- accel/accel.sh@21 -- # case "$var" in
00:06:09.520 03:06:43 -- accel/accel.sh@19 -- # IFS=:
00:06:09.520 03:06:43 -- accel/accel.sh@19 -- # read -r var val
00:06:09.520 03:06:43 -- accel/accel.sh@20 -- # val=1
00:06:09.520 03:06:43 -- accel/accel.sh@21 -- # case "$var" in
00:06:09.520 03:06:43 -- accel/accel.sh@19 -- # IFS=:
00:06:09.520 03:06:43 -- accel/accel.sh@19 -- # read -r var val
00:06:09.520 03:06:43 -- accel/accel.sh@20 -- # val='1 seconds'
00:06:09.520 03:06:43 -- accel/accel.sh@21 -- # case "$var" in
00:06:09.520 03:06:43 -- accel/accel.sh@19 -- # IFS=:
00:06:09.520 03:06:43 -- accel/accel.sh@19 -- # read -r var val
00:06:09.520 03:06:43 -- accel/accel.sh@20 -- # val=Yes
00:06:09.520 03:06:43 -- accel/accel.sh@21 -- # case "$var" in
00:06:09.520 03:06:43 -- accel/accel.sh@19 -- # IFS=:
00:06:09.520 03:06:43 -- accel/accel.sh@19 -- # read -r var val
00:06:09.520 03:06:43 -- accel/accel.sh@20 -- # val=
00:06:09.520 03:06:43 -- accel/accel.sh@21 -- # case "$var" in
00:06:09.520 03:06:43 -- accel/accel.sh@19 -- # IFS=:
00:06:09.520 03:06:43 -- accel/accel.sh@19 -- # read -r var val
00:06:09.520 03:06:43 -- accel/accel.sh@20 -- # val=
00:06:09.520 03:06:43 -- accel/accel.sh@21 -- # case "$var" in
00:06:09.520 03:06:43 -- accel/accel.sh@19 -- # IFS=:
00:06:09.520 03:06:43 -- accel/accel.sh@19 -- # read -r var val
00:06:10.895 03:06:45 -- accel/accel.sh@20 -- # val=
00:06:10.895 03:06:45 -- accel/accel.sh@21 -- # case "$var" in
00:06:10.895 03:06:45 -- accel/accel.sh@19 -- # IFS=:
00:06:10.895 03:06:45 -- accel/accel.sh@19 -- # read -r var val
00:06:10.895 03:06:45 -- accel/accel.sh@20 -- # val=
00:06:10.895 03:06:45 -- accel/accel.sh@21 -- # case "$var" in
00:06:10.895 03:06:45 -- accel/accel.sh@19 -- # IFS=:
00:06:10.895 03:06:45 -- accel/accel.sh@19 -- # read -r var val
00:06:10.895 03:06:45 -- accel/accel.sh@20 -- # val=
00:06:10.895 03:06:45 -- accel/accel.sh@21 -- # case "$var" in
00:06:10.895 03:06:45 -- accel/accel.sh@19 -- # IFS=:
00:06:10.895 03:06:45 -- accel/accel.sh@19 -- # read -r var val
00:06:10.895 03:06:45 -- accel/accel.sh@20 -- # val=
00:06:10.895 03:06:45 -- accel/accel.sh@21 -- # case "$var" in
00:06:10.895 03:06:45 -- accel/accel.sh@19 -- # IFS=:
00:06:10.895 03:06:45 -- accel/accel.sh@19 -- # read -r var val
00:06:10.895 03:06:45 -- accel/accel.sh@20 -- # val=
00:06:10.895 03:06:45 -- accel/accel.sh@21 -- # case "$var" in
00:06:10.895 03:06:45 -- accel/accel.sh@19 -- # IFS=:
00:06:10.895 03:06:45 -- accel/accel.sh@19 -- # read -r var val
00:06:10.895 03:06:45 -- accel/accel.sh@20 -- # val=
00:06:10.895 03:06:45 -- accel/accel.sh@21 -- # case "$var" in
00:06:10.895 03:06:45 -- accel/accel.sh@19 -- # IFS=:
00:06:10.895 03:06:45 -- accel/accel.sh@19 -- # read -r var val
00:06:10.895 03:06:45 -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:10.895 03:06:45 -- accel/accel.sh@27 -- # [[ -n xor ]]
00:06:10.895 03:06:45 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:10.895
00:06:10.895 real 0m1.487s
00:06:10.895 user 0m1.339s
00:06:10.895 sys 0m0.149s
00:06:10.895 03:06:45 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:06:10.895 03:06:45 -- common/autotest_common.sh@10 -- # set +x
00:06:10.895 ************************************
00:06:10.895 END TEST accel_xor
00:06:10.895 ************************************
00:06:10.895 03:06:45 -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3
00:06:10.895 03:06:45 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']'
00:06:10.895 03:06:45 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:10.895 03:06:45 -- common/autotest_common.sh@10 -- # set +x
00:06:10.895 ************************************
00:06:10.895 START TEST accel_xor
00:06:10.895 ************************************
00:06:10.895 03:06:45 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y -x 3
00:06:10.895 03:06:45 -- accel/accel.sh@16 -- # local accel_opc
00:06:10.895 03:06:45 -- accel/accel.sh@17 -- # local accel_module
00:06:10.895 03:06:45 -- accel/accel.sh@19 -- # IFS=:
00:06:10.895 03:06:45 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3
00:06:10.895 03:06:45 -- accel/accel.sh@19 -- # read -r var val
00:06:10.895 03:06:45 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3
00:06:10.895 03:06:45 -- accel/accel.sh@12 -- # build_accel_config
00:06:10.895 03:06:45 -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:10.895 03:06:45 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:10.895 03:06:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:10.895 03:06:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:10.895 03:06:45 -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:10.895 03:06:45 -- accel/accel.sh@40 -- # local IFS=,
00:06:10.895 03:06:45 -- accel/accel.sh@41 -- # jq -r .
00:06:10.895 [2024-04-25 03:06:45.160425] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization...
00:06:10.895 [2024-04-25 03:06:45.160489] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1386631 ]
00:06:10.895 EAL: No free 2048 kB hugepages reported on node 1
00:06:10.895 [2024-04-25 03:06:45.218201] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:10.895 [2024-04-25 03:06:45.337848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:11.153 03:06:45 -- accel/accel.sh@20 -- # val=
00:06:11.153 03:06:45 -- accel/accel.sh@21 -- # case "$var" in
00:06:11.153 03:06:45 -- accel/accel.sh@19 -- # IFS=:
00:06:11.153 03:06:45 -- accel/accel.sh@19 -- # read -r var val
00:06:11.153 03:06:45 -- accel/accel.sh@20 -- # val=
00:06:11.153 03:06:45 -- accel/accel.sh@21 -- # case "$var" in
00:06:11.153 03:06:45 -- accel/accel.sh@19 -- # IFS=:
00:06:11.153 03:06:45 -- accel/accel.sh@19 -- # read -r var val
00:06:11.153 03:06:45 -- accel/accel.sh@20 -- # val=0x1
00:06:11.153 03:06:45 -- accel/accel.sh@21 -- # case "$var" in
00:06:11.153 03:06:45 -- accel/accel.sh@19 -- # IFS=:
00:06:11.153 03:06:45 -- accel/accel.sh@19 -- # read -r var val
00:06:11.153 03:06:45 -- accel/accel.sh@20 -- # val=
00:06:11.153 03:06:45 -- accel/accel.sh@21 -- # case "$var" in
00:06:11.153 03:06:45 -- accel/accel.sh@19 -- # IFS=:
00:06:11.153 03:06:45 -- accel/accel.sh@19 -- # read -r var val
00:06:11.153 03:06:45 -- accel/accel.sh@20 -- # val=
00:06:11.153 03:06:45 -- accel/accel.sh@21 -- # case "$var" in
00:06:11.153 03:06:45 -- accel/accel.sh@19 -- # IFS=:
00:06:11.153 03:06:45 -- accel/accel.sh@19 -- # read -r var val
00:06:11.153 03:06:45 -- accel/accel.sh@20 -- # val=xor
00:06:11.153 03:06:45 -- accel/accel.sh@21 -- # case "$var" in
00:06:11.153 03:06:45 -- accel/accel.sh@23 -- # accel_opc=xor
00:06:11.153 03:06:45 -- accel/accel.sh@19 -- # IFS=:
00:06:11.153 03:06:45 -- accel/accel.sh@19 -- # read -r var val
00:06:11.153 03:06:45 -- accel/accel.sh@20 -- # val=3
00:06:11.153 03:06:45 -- accel/accel.sh@21 -- # case "$var" in
00:06:11.153 03:06:45 -- accel/accel.sh@19 -- # IFS=:
00:06:11.153 03:06:45 -- accel/accel.sh@19 -- # read -r var val
00:06:11.153 03:06:45 -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:11.153 03:06:45 -- accel/accel.sh@21 -- # case "$var" in
00:06:11.153 03:06:45 -- accel/accel.sh@19 -- # IFS=:
00:06:11.153 03:06:45 -- accel/accel.sh@19 -- # read -r var val
00:06:11.153 03:06:45 -- accel/accel.sh@20 -- # val=
00:06:11.153 03:06:45 -- accel/accel.sh@21 -- # case "$var" in
00:06:11.153 03:06:45 -- accel/accel.sh@19 -- # IFS=:
00:06:11.153 03:06:45 -- accel/accel.sh@19 -- # read -r var val
00:06:11.153 03:06:45 -- accel/accel.sh@20 -- # val=software
00:06:11.153 03:06:45 -- accel/accel.sh@21 -- # case "$var" in
00:06:11.154 03:06:45 -- accel/accel.sh@22 -- # accel_module=software
00:06:11.154 03:06:45 -- accel/accel.sh@19 -- # IFS=:
00:06:11.154 03:06:45 -- accel/accel.sh@19 -- # read -r var val
00:06:11.154 03:06:45 -- accel/accel.sh@20 -- # val=32
00:06:11.154 03:06:45 -- accel/accel.sh@21 -- # case "$var" in
00:06:11.154 03:06:45 -- accel/accel.sh@19 -- # IFS=:
00:06:11.154 03:06:45 -- accel/accel.sh@19 -- # read -r var val
00:06:11.154 03:06:45 -- accel/accel.sh@20 -- # val=32
00:06:11.154 03:06:45 -- accel/accel.sh@21 -- # case "$var" in
00:06:11.154 03:06:45 -- accel/accel.sh@19 -- # IFS=:
00:06:11.154 03:06:45 -- accel/accel.sh@19 -- # read -r var val
00:06:11.154 03:06:45 -- accel/accel.sh@20 -- # val=1
00:06:11.154 03:06:45 -- accel/accel.sh@21 -- # case "$var" in
00:06:11.154 03:06:45 -- accel/accel.sh@19 -- # IFS=:
00:06:11.154 03:06:45 -- accel/accel.sh@19 -- # read -r var val
00:06:11.154 03:06:45 -- accel/accel.sh@20 -- # val='1 seconds'
00:06:11.154 03:06:45 -- accel/accel.sh@21 -- # case "$var" in
00:06:11.154 03:06:45 -- accel/accel.sh@19 -- # IFS=:
00:06:11.154 03:06:45 -- accel/accel.sh@19 --
# read -r var val 00:06:11.154 03:06:45 -- accel/accel.sh@20 -- # val=Yes 00:06:11.154 03:06:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.154 03:06:45 -- accel/accel.sh@19 -- # IFS=: 00:06:11.154 03:06:45 -- accel/accel.sh@19 -- # read -r var val 00:06:11.154 03:06:45 -- accel/accel.sh@20 -- # val= 00:06:11.154 03:06:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.154 03:06:45 -- accel/accel.sh@19 -- # IFS=: 00:06:11.154 03:06:45 -- accel/accel.sh@19 -- # read -r var val 00:06:11.154 03:06:45 -- accel/accel.sh@20 -- # val= 00:06:11.154 03:06:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.154 03:06:45 -- accel/accel.sh@19 -- # IFS=: 00:06:11.154 03:06:45 -- accel/accel.sh@19 -- # read -r var val 00:06:12.531 03:06:46 -- accel/accel.sh@20 -- # val= 00:06:12.531 03:06:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.531 03:06:46 -- accel/accel.sh@19 -- # IFS=: 00:06:12.531 03:06:46 -- accel/accel.sh@19 -- # read -r var val 00:06:12.531 03:06:46 -- accel/accel.sh@20 -- # val= 00:06:12.531 03:06:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.531 03:06:46 -- accel/accel.sh@19 -- # IFS=: 00:06:12.531 03:06:46 -- accel/accel.sh@19 -- # read -r var val 00:06:12.531 03:06:46 -- accel/accel.sh@20 -- # val= 00:06:12.531 03:06:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.531 03:06:46 -- accel/accel.sh@19 -- # IFS=: 00:06:12.531 03:06:46 -- accel/accel.sh@19 -- # read -r var val 00:06:12.531 03:06:46 -- accel/accel.sh@20 -- # val= 00:06:12.531 03:06:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.531 03:06:46 -- accel/accel.sh@19 -- # IFS=: 00:06:12.531 03:06:46 -- accel/accel.sh@19 -- # read -r var val 00:06:12.531 03:06:46 -- accel/accel.sh@20 -- # val= 00:06:12.531 03:06:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.531 03:06:46 -- accel/accel.sh@19 -- # IFS=: 00:06:12.531 03:06:46 -- accel/accel.sh@19 -- # read -r var val 00:06:12.531 03:06:46 -- accel/accel.sh@20 -- # val= 00:06:12.531 03:06:46 -- accel/accel.sh@21 -- # case 
"$var" in 00:06:12.531 03:06:46 -- accel/accel.sh@19 -- # IFS=: 00:06:12.531 03:06:46 -- accel/accel.sh@19 -- # read -r var val 00:06:12.531 03:06:46 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:12.531 03:06:46 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:12.531 03:06:46 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:12.531 00:06:12.531 real 0m1.479s 00:06:12.531 user 0m1.346s 00:06:12.531 sys 0m0.134s 00:06:12.531 03:06:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:12.531 03:06:46 -- common/autotest_common.sh@10 -- # set +x 00:06:12.531 ************************************ 00:06:12.531 END TEST accel_xor 00:06:12.531 ************************************ 00:06:12.531 03:06:46 -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:12.531 03:06:46 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:12.531 03:06:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:12.531 03:06:46 -- common/autotest_common.sh@10 -- # set +x 00:06:12.531 ************************************ 00:06:12.531 START TEST accel_dif_verify 00:06:12.531 ************************************ 00:06:12.531 03:06:46 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_verify 00:06:12.531 03:06:46 -- accel/accel.sh@16 -- # local accel_opc 00:06:12.531 03:06:46 -- accel/accel.sh@17 -- # local accel_module 00:06:12.531 03:06:46 -- accel/accel.sh@19 -- # IFS=: 00:06:12.531 03:06:46 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:12.531 03:06:46 -- accel/accel.sh@19 -- # read -r var val 00:06:12.531 03:06:46 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:12.531 03:06:46 -- accel/accel.sh@12 -- # build_accel_config 00:06:12.531 03:06:46 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:12.531 03:06:46 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:12.531 03:06:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 
]] 00:06:12.531 03:06:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.531 03:06:46 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:12.531 03:06:46 -- accel/accel.sh@40 -- # local IFS=, 00:06:12.531 03:06:46 -- accel/accel.sh@41 -- # jq -r . 00:06:12.531 [2024-04-25 03:06:46.764085] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:06:12.531 [2024-04-25 03:06:46.764146] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1386808 ] 00:06:12.531 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.531 [2024-04-25 03:06:46.830659] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.531 [2024-04-25 03:06:46.950976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.531 03:06:47 -- accel/accel.sh@20 -- # val= 00:06:12.531 03:06:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.531 03:06:47 -- accel/accel.sh@19 -- # IFS=: 00:06:12.531 03:06:47 -- accel/accel.sh@19 -- # read -r var val 00:06:12.531 03:06:47 -- accel/accel.sh@20 -- # val= 00:06:12.531 03:06:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.531 03:06:47 -- accel/accel.sh@19 -- # IFS=: 00:06:12.531 03:06:47 -- accel/accel.sh@19 -- # read -r var val 00:06:12.531 03:06:47 -- accel/accel.sh@20 -- # val=0x1 00:06:12.531 03:06:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.531 03:06:47 -- accel/accel.sh@19 -- # IFS=: 00:06:12.531 03:06:47 -- accel/accel.sh@19 -- # read -r var val 00:06:12.531 03:06:47 -- accel/accel.sh@20 -- # val= 00:06:12.531 03:06:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.531 03:06:47 -- accel/accel.sh@19 -- # IFS=: 00:06:12.531 03:06:47 -- accel/accel.sh@19 -- # read -r var val 00:06:12.531 03:06:47 -- accel/accel.sh@20 -- # val= 00:06:12.531 03:06:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.531 03:06:47 -- 
accel/accel.sh@19 -- # IFS=: 00:06:12.531 03:06:47 -- accel/accel.sh@19 -- # read -r var val 00:06:12.531 03:06:47 -- accel/accel.sh@20 -- # val=dif_verify 00:06:12.531 03:06:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.531 03:06:47 -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:12.531 03:06:47 -- accel/accel.sh@19 -- # IFS=: 00:06:12.531 03:06:47 -- accel/accel.sh@19 -- # read -r var val 00:06:12.531 03:06:47 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:12.531 03:06:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.531 03:06:47 -- accel/accel.sh@19 -- # IFS=: 00:06:12.531 03:06:47 -- accel/accel.sh@19 -- # read -r var val 00:06:12.531 03:06:47 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:12.531 03:06:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.531 03:06:47 -- accel/accel.sh@19 -- # IFS=: 00:06:12.531 03:06:47 -- accel/accel.sh@19 -- # read -r var val 00:06:12.531 03:06:47 -- accel/accel.sh@20 -- # val='512 bytes' 00:06:12.531 03:06:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.531 03:06:47 -- accel/accel.sh@19 -- # IFS=: 00:06:12.531 03:06:47 -- accel/accel.sh@19 -- # read -r var val 00:06:12.531 03:06:47 -- accel/accel.sh@20 -- # val='8 bytes' 00:06:12.531 03:06:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.531 03:06:47 -- accel/accel.sh@19 -- # IFS=: 00:06:12.531 03:06:47 -- accel/accel.sh@19 -- # read -r var val 00:06:12.531 03:06:47 -- accel/accel.sh@20 -- # val= 00:06:12.531 03:06:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.531 03:06:47 -- accel/accel.sh@19 -- # IFS=: 00:06:12.531 03:06:47 -- accel/accel.sh@19 -- # read -r var val 00:06:12.532 03:06:47 -- accel/accel.sh@20 -- # val=software 00:06:12.532 03:06:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.532 03:06:47 -- accel/accel.sh@22 -- # accel_module=software 00:06:12.532 03:06:47 -- accel/accel.sh@19 -- # IFS=: 00:06:12.532 03:06:47 -- accel/accel.sh@19 -- # read -r var val 00:06:12.532 03:06:47 -- accel/accel.sh@20 -- # val=32 00:06:12.532 
03:06:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.532 03:06:47 -- accel/accel.sh@19 -- # IFS=: 00:06:12.532 03:06:47 -- accel/accel.sh@19 -- # read -r var val 00:06:12.532 03:06:47 -- accel/accel.sh@20 -- # val=32 00:06:12.532 03:06:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.532 03:06:47 -- accel/accel.sh@19 -- # IFS=: 00:06:12.532 03:06:47 -- accel/accel.sh@19 -- # read -r var val 00:06:12.532 03:06:47 -- accel/accel.sh@20 -- # val=1 00:06:12.532 03:06:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.532 03:06:47 -- accel/accel.sh@19 -- # IFS=: 00:06:12.532 03:06:47 -- accel/accel.sh@19 -- # read -r var val 00:06:12.532 03:06:47 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:12.532 03:06:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.532 03:06:47 -- accel/accel.sh@19 -- # IFS=: 00:06:12.532 03:06:47 -- accel/accel.sh@19 -- # read -r var val 00:06:12.532 03:06:47 -- accel/accel.sh@20 -- # val=No 00:06:12.532 03:06:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.532 03:06:47 -- accel/accel.sh@19 -- # IFS=: 00:06:12.532 03:06:47 -- accel/accel.sh@19 -- # read -r var val 00:06:12.532 03:06:47 -- accel/accel.sh@20 -- # val= 00:06:12.532 03:06:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.532 03:06:47 -- accel/accel.sh@19 -- # IFS=: 00:06:12.532 03:06:47 -- accel/accel.sh@19 -- # read -r var val 00:06:12.532 03:06:47 -- accel/accel.sh@20 -- # val= 00:06:12.532 03:06:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.532 03:06:47 -- accel/accel.sh@19 -- # IFS=: 00:06:12.532 03:06:47 -- accel/accel.sh@19 -- # read -r var val 00:06:13.915 03:06:48 -- accel/accel.sh@20 -- # val= 00:06:13.915 03:06:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.915 03:06:48 -- accel/accel.sh@19 -- # IFS=: 00:06:13.915 03:06:48 -- accel/accel.sh@19 -- # read -r var val 00:06:13.915 03:06:48 -- accel/accel.sh@20 -- # val= 00:06:13.915 03:06:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.915 03:06:48 -- accel/accel.sh@19 -- # IFS=: 
00:06:13.915 03:06:48 -- accel/accel.sh@19 -- # read -r var val 00:06:13.915 03:06:48 -- accel/accel.sh@20 -- # val= 00:06:13.915 03:06:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.915 03:06:48 -- accel/accel.sh@19 -- # IFS=: 00:06:13.915 03:06:48 -- accel/accel.sh@19 -- # read -r var val 00:06:13.915 03:06:48 -- accel/accel.sh@20 -- # val= 00:06:13.915 03:06:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.915 03:06:48 -- accel/accel.sh@19 -- # IFS=: 00:06:13.915 03:06:48 -- accel/accel.sh@19 -- # read -r var val 00:06:13.915 03:06:48 -- accel/accel.sh@20 -- # val= 00:06:13.915 03:06:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.915 03:06:48 -- accel/accel.sh@19 -- # IFS=: 00:06:13.915 03:06:48 -- accel/accel.sh@19 -- # read -r var val 00:06:13.915 03:06:48 -- accel/accel.sh@20 -- # val= 00:06:13.915 03:06:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.915 03:06:48 -- accel/accel.sh@19 -- # IFS=: 00:06:13.915 03:06:48 -- accel/accel.sh@19 -- # read -r var val 00:06:13.915 03:06:48 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:13.915 03:06:48 -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:13.915 03:06:48 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:13.915 00:06:13.915 real 0m1.484s 00:06:13.915 user 0m1.341s 00:06:13.915 sys 0m0.145s 00:06:13.915 03:06:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:13.915 03:06:48 -- common/autotest_common.sh@10 -- # set +x 00:06:13.915 ************************************ 00:06:13.915 END TEST accel_dif_verify 00:06:13.915 ************************************ 00:06:13.915 03:06:48 -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:13.915 03:06:48 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:13.915 03:06:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:13.915 03:06:48 -- common/autotest_common.sh@10 -- # set +x 00:06:13.915 ************************************ 00:06:13.915 START TEST 
accel_dif_generate 00:06:13.915 ************************************ 00:06:13.915 03:06:48 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate 00:06:13.915 03:06:48 -- accel/accel.sh@16 -- # local accel_opc 00:06:13.915 03:06:48 -- accel/accel.sh@17 -- # local accel_module 00:06:13.915 03:06:48 -- accel/accel.sh@19 -- # IFS=: 00:06:13.915 03:06:48 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:13.915 03:06:48 -- accel/accel.sh@19 -- # read -r var val 00:06:13.915 03:06:48 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:13.915 03:06:48 -- accel/accel.sh@12 -- # build_accel_config 00:06:13.915 03:06:48 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:13.915 03:06:48 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:13.915 03:06:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:13.915 03:06:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:13.915 03:06:48 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:13.915 03:06:48 -- accel/accel.sh@40 -- # local IFS=, 00:06:13.915 03:06:48 -- accel/accel.sh@41 -- # jq -r . 00:06:13.915 [2024-04-25 03:06:48.362997] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 
00:06:13.915 [2024-04-25 03:06:48.363064] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1387078 ] 00:06:13.915 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.173 [2024-04-25 03:06:48.429739] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.173 [2024-04-25 03:06:48.550332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.173 03:06:48 -- accel/accel.sh@20 -- # val= 00:06:14.173 03:06:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.173 03:06:48 -- accel/accel.sh@19 -- # IFS=: 00:06:14.173 03:06:48 -- accel/accel.sh@19 -- # read -r var val 00:06:14.173 03:06:48 -- accel/accel.sh@20 -- # val= 00:06:14.173 03:06:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.173 03:06:48 -- accel/accel.sh@19 -- # IFS=: 00:06:14.173 03:06:48 -- accel/accel.sh@19 -- # read -r var val 00:06:14.173 03:06:48 -- accel/accel.sh@20 -- # val=0x1 00:06:14.173 03:06:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.173 03:06:48 -- accel/accel.sh@19 -- # IFS=: 00:06:14.173 03:06:48 -- accel/accel.sh@19 -- # read -r var val 00:06:14.173 03:06:48 -- accel/accel.sh@20 -- # val= 00:06:14.173 03:06:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.173 03:06:48 -- accel/accel.sh@19 -- # IFS=: 00:06:14.173 03:06:48 -- accel/accel.sh@19 -- # read -r var val 00:06:14.173 03:06:48 -- accel/accel.sh@20 -- # val= 00:06:14.173 03:06:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.173 03:06:48 -- accel/accel.sh@19 -- # IFS=: 00:06:14.173 03:06:48 -- accel/accel.sh@19 -- # read -r var val 00:06:14.173 03:06:48 -- accel/accel.sh@20 -- # val=dif_generate 00:06:14.173 03:06:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.173 03:06:48 -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:14.173 03:06:48 -- accel/accel.sh@19 -- # IFS=: 00:06:14.173 03:06:48 
-- accel/accel.sh@19 -- # read -r var val 00:06:14.173 03:06:48 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:14.173 03:06:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.173 03:06:48 -- accel/accel.sh@19 -- # IFS=: 00:06:14.173 03:06:48 -- accel/accel.sh@19 -- # read -r var val 00:06:14.173 03:06:48 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:14.173 03:06:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.173 03:06:48 -- accel/accel.sh@19 -- # IFS=: 00:06:14.173 03:06:48 -- accel/accel.sh@19 -- # read -r var val 00:06:14.173 03:06:48 -- accel/accel.sh@20 -- # val='512 bytes' 00:06:14.173 03:06:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.173 03:06:48 -- accel/accel.sh@19 -- # IFS=: 00:06:14.173 03:06:48 -- accel/accel.sh@19 -- # read -r var val 00:06:14.173 03:06:48 -- accel/accel.sh@20 -- # val='8 bytes' 00:06:14.173 03:06:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.173 03:06:48 -- accel/accel.sh@19 -- # IFS=: 00:06:14.173 03:06:48 -- accel/accel.sh@19 -- # read -r var val 00:06:14.173 03:06:48 -- accel/accel.sh@20 -- # val= 00:06:14.173 03:06:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.173 03:06:48 -- accel/accel.sh@19 -- # IFS=: 00:06:14.173 03:06:48 -- accel/accel.sh@19 -- # read -r var val 00:06:14.173 03:06:48 -- accel/accel.sh@20 -- # val=software 00:06:14.173 03:06:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.173 03:06:48 -- accel/accel.sh@22 -- # accel_module=software 00:06:14.173 03:06:48 -- accel/accel.sh@19 -- # IFS=: 00:06:14.173 03:06:48 -- accel/accel.sh@19 -- # read -r var val 00:06:14.173 03:06:48 -- accel/accel.sh@20 -- # val=32 00:06:14.173 03:06:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.173 03:06:48 -- accel/accel.sh@19 -- # IFS=: 00:06:14.173 03:06:48 -- accel/accel.sh@19 -- # read -r var val 00:06:14.173 03:06:48 -- accel/accel.sh@20 -- # val=32 00:06:14.173 03:06:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.173 03:06:48 -- accel/accel.sh@19 -- # IFS=: 00:06:14.173 03:06:48 
-- accel/accel.sh@19 -- # read -r var val 00:06:14.173 03:06:48 -- accel/accel.sh@20 -- # val=1 00:06:14.173 03:06:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.173 03:06:48 -- accel/accel.sh@19 -- # IFS=: 00:06:14.173 03:06:48 -- accel/accel.sh@19 -- # read -r var val 00:06:14.173 03:06:48 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:14.173 03:06:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.173 03:06:48 -- accel/accel.sh@19 -- # IFS=: 00:06:14.173 03:06:48 -- accel/accel.sh@19 -- # read -r var val 00:06:14.173 03:06:48 -- accel/accel.sh@20 -- # val=No 00:06:14.173 03:06:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.173 03:06:48 -- accel/accel.sh@19 -- # IFS=: 00:06:14.173 03:06:48 -- accel/accel.sh@19 -- # read -r var val 00:06:14.173 03:06:48 -- accel/accel.sh@20 -- # val= 00:06:14.173 03:06:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.173 03:06:48 -- accel/accel.sh@19 -- # IFS=: 00:06:14.173 03:06:48 -- accel/accel.sh@19 -- # read -r var val 00:06:14.173 03:06:48 -- accel/accel.sh@20 -- # val= 00:06:14.173 03:06:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.173 03:06:48 -- accel/accel.sh@19 -- # IFS=: 00:06:14.173 03:06:48 -- accel/accel.sh@19 -- # read -r var val 00:06:15.546 03:06:49 -- accel/accel.sh@20 -- # val= 00:06:15.546 03:06:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.547 03:06:49 -- accel/accel.sh@19 -- # IFS=: 00:06:15.547 03:06:49 -- accel/accel.sh@19 -- # read -r var val 00:06:15.547 03:06:49 -- accel/accel.sh@20 -- # val= 00:06:15.547 03:06:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.547 03:06:49 -- accel/accel.sh@19 -- # IFS=: 00:06:15.547 03:06:49 -- accel/accel.sh@19 -- # read -r var val 00:06:15.547 03:06:49 -- accel/accel.sh@20 -- # val= 00:06:15.547 03:06:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.547 03:06:49 -- accel/accel.sh@19 -- # IFS=: 00:06:15.547 03:06:49 -- accel/accel.sh@19 -- # read -r var val 00:06:15.547 03:06:49 -- accel/accel.sh@20 -- # val= 00:06:15.547 
03:06:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.547 03:06:49 -- accel/accel.sh@19 -- # IFS=: 00:06:15.547 03:06:49 -- accel/accel.sh@19 -- # read -r var val 00:06:15.547 03:06:49 -- accel/accel.sh@20 -- # val= 00:06:15.547 03:06:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.547 03:06:49 -- accel/accel.sh@19 -- # IFS=: 00:06:15.547 03:06:49 -- accel/accel.sh@19 -- # read -r var val 00:06:15.547 03:06:49 -- accel/accel.sh@20 -- # val= 00:06:15.547 03:06:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.547 03:06:49 -- accel/accel.sh@19 -- # IFS=: 00:06:15.547 03:06:49 -- accel/accel.sh@19 -- # read -r var val 00:06:15.547 03:06:49 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:15.547 03:06:49 -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:15.547 03:06:49 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:15.547 00:06:15.547 real 0m1.486s 00:06:15.547 user 0m1.343s 00:06:15.547 sys 0m0.146s 00:06:15.547 03:06:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:15.547 03:06:49 -- common/autotest_common.sh@10 -- # set +x 00:06:15.547 ************************************ 00:06:15.547 END TEST accel_dif_generate 00:06:15.547 ************************************ 00:06:15.547 03:06:49 -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:15.547 03:06:49 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:15.547 03:06:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:15.547 03:06:49 -- common/autotest_common.sh@10 -- # set +x 00:06:15.547 ************************************ 00:06:15.547 START TEST accel_dif_generate_copy 00:06:15.547 ************************************ 00:06:15.547 03:06:49 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate_copy 00:06:15.547 03:06:49 -- accel/accel.sh@16 -- # local accel_opc 00:06:15.547 03:06:49 -- accel/accel.sh@17 -- # local accel_module 00:06:15.547 03:06:49 -- accel/accel.sh@19 -- # IFS=: 
00:06:15.547 03:06:49 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:15.547 03:06:49 -- accel/accel.sh@19 -- # read -r var val 00:06:15.547 03:06:49 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:15.547 03:06:49 -- accel/accel.sh@12 -- # build_accel_config 00:06:15.547 03:06:49 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:15.547 03:06:49 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:15.547 03:06:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.547 03:06:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.547 03:06:49 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:15.547 03:06:49 -- accel/accel.sh@40 -- # local IFS=, 00:06:15.547 03:06:49 -- accel/accel.sh@41 -- # jq -r . 00:06:15.547 [2024-04-25 03:06:49.977944] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:06:15.547 [2024-04-25 03:06:49.978010] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1387242 ] 00:06:15.547 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.547 [2024-04-25 03:06:50.045852] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.805 [2024-04-25 03:06:50.169805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.805 03:06:50 -- accel/accel.sh@20 -- # val= 00:06:15.805 03:06:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.805 03:06:50 -- accel/accel.sh@19 -- # IFS=: 00:06:15.805 03:06:50 -- accel/accel.sh@19 -- # read -r var val 00:06:15.805 03:06:50 -- accel/accel.sh@20 -- # val= 00:06:15.805 03:06:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.805 03:06:50 -- accel/accel.sh@19 -- # IFS=: 00:06:15.805 03:06:50 -- accel/accel.sh@19 -- # read -r var val 00:06:15.805 03:06:50 -- accel/accel.sh@20 -- # 
val=0x1 00:06:15.805 03:06:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.805 03:06:50 -- accel/accel.sh@19 -- # IFS=: 00:06:15.805 03:06:50 -- accel/accel.sh@19 -- # read -r var val 00:06:15.805 03:06:50 -- accel/accel.sh@20 -- # val= 00:06:15.805 03:06:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.805 03:06:50 -- accel/accel.sh@19 -- # IFS=: 00:06:15.805 03:06:50 -- accel/accel.sh@19 -- # read -r var val 00:06:15.805 03:06:50 -- accel/accel.sh@20 -- # val= 00:06:15.805 03:06:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.805 03:06:50 -- accel/accel.sh@19 -- # IFS=: 00:06:15.805 03:06:50 -- accel/accel.sh@19 -- # read -r var val 00:06:15.805 03:06:50 -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:15.805 03:06:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.805 03:06:50 -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:15.805 03:06:50 -- accel/accel.sh@19 -- # IFS=: 00:06:15.805 03:06:50 -- accel/accel.sh@19 -- # read -r var val 00:06:15.805 03:06:50 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:15.805 03:06:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.805 03:06:50 -- accel/accel.sh@19 -- # IFS=: 00:06:15.805 03:06:50 -- accel/accel.sh@19 -- # read -r var val 00:06:15.805 03:06:50 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:15.805 03:06:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.805 03:06:50 -- accel/accel.sh@19 -- # IFS=: 00:06:15.805 03:06:50 -- accel/accel.sh@19 -- # read -r var val 00:06:15.805 03:06:50 -- accel/accel.sh@20 -- # val= 00:06:15.805 03:06:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.805 03:06:50 -- accel/accel.sh@19 -- # IFS=: 00:06:15.805 03:06:50 -- accel/accel.sh@19 -- # read -r var val 00:06:15.805 03:06:50 -- accel/accel.sh@20 -- # val=software 00:06:15.805 03:06:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.805 03:06:50 -- accel/accel.sh@22 -- # accel_module=software 00:06:15.805 03:06:50 -- accel/accel.sh@19 -- # IFS=: 00:06:15.805 03:06:50 -- 
accel/accel.sh@19 -- # read -r var val 00:06:15.805 03:06:50 -- accel/accel.sh@20 -- # val=32 00:06:15.805 03:06:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.805 03:06:50 -- accel/accel.sh@19 -- # IFS=: 00:06:15.805 03:06:50 -- accel/accel.sh@19 -- # read -r var val 00:06:15.805 03:06:50 -- accel/accel.sh@20 -- # val=32 00:06:15.805 03:06:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.805 03:06:50 -- accel/accel.sh@19 -- # IFS=: 00:06:15.805 03:06:50 -- accel/accel.sh@19 -- # read -r var val 00:06:15.805 03:06:50 -- accel/accel.sh@20 -- # val=1 00:06:15.805 03:06:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.805 03:06:50 -- accel/accel.sh@19 -- # IFS=: 00:06:15.805 03:06:50 -- accel/accel.sh@19 -- # read -r var val 00:06:15.805 03:06:50 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:15.805 03:06:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.805 03:06:50 -- accel/accel.sh@19 -- # IFS=: 00:06:15.805 03:06:50 -- accel/accel.sh@19 -- # read -r var val 00:06:15.805 03:06:50 -- accel/accel.sh@20 -- # val=No 00:06:15.805 03:06:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.805 03:06:50 -- accel/accel.sh@19 -- # IFS=: 00:06:15.805 03:06:50 -- accel/accel.sh@19 -- # read -r var val 00:06:15.805 03:06:50 -- accel/accel.sh@20 -- # val= 00:06:15.805 03:06:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.805 03:06:50 -- accel/accel.sh@19 -- # IFS=: 00:06:15.805 03:06:50 -- accel/accel.sh@19 -- # read -r var val 00:06:15.805 03:06:50 -- accel/accel.sh@20 -- # val= 00:06:15.805 03:06:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.805 03:06:50 -- accel/accel.sh@19 -- # IFS=: 00:06:15.805 03:06:50 -- accel/accel.sh@19 -- # read -r var val 00:06:17.179 03:06:51 -- accel/accel.sh@20 -- # val= 00:06:17.179 03:06:51 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.179 03:06:51 -- accel/accel.sh@19 -- # IFS=: 00:06:17.179 03:06:51 -- accel/accel.sh@19 -- # read -r var val 00:06:17.179 03:06:51 -- accel/accel.sh@20 -- # val= 00:06:17.179 
03:06:51 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.179 03:06:51 -- accel/accel.sh@19 -- # IFS=: 00:06:17.179 03:06:51 -- accel/accel.sh@19 -- # read -r var val 00:06:17.179 03:06:51 -- accel/accel.sh@20 -- # val= 00:06:17.179 03:06:51 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.179 03:06:51 -- accel/accel.sh@19 -- # IFS=: 00:06:17.179 03:06:51 -- accel/accel.sh@19 -- # read -r var val 00:06:17.179 03:06:51 -- accel/accel.sh@20 -- # val= 00:06:17.179 03:06:51 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.179 03:06:51 -- accel/accel.sh@19 -- # IFS=: 00:06:17.179 03:06:51 -- accel/accel.sh@19 -- # read -r var val 00:06:17.179 03:06:51 -- accel/accel.sh@20 -- # val= 00:06:17.179 03:06:51 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.179 03:06:51 -- accel/accel.sh@19 -- # IFS=: 00:06:17.179 03:06:51 -- accel/accel.sh@19 -- # read -r var val 00:06:17.179 03:06:51 -- accel/accel.sh@20 -- # val= 00:06:17.179 03:06:51 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.179 03:06:51 -- accel/accel.sh@19 -- # IFS=: 00:06:17.179 03:06:51 -- accel/accel.sh@19 -- # read -r var val 00:06:17.179 03:06:51 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:17.179 03:06:51 -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:17.179 03:06:51 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:17.179 00:06:17.179 real 0m1.491s 00:06:17.179 user 0m1.340s 00:06:17.179 sys 0m0.153s 00:06:17.179 03:06:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:17.179 03:06:51 -- common/autotest_common.sh@10 -- # set +x 00:06:17.179 ************************************ 00:06:17.179 END TEST accel_dif_generate_copy 00:06:17.179 ************************************ 00:06:17.179 03:06:51 -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:17.179 03:06:51 -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:17.179 03:06:51 -- common/autotest_common.sh@1087 -- # 
'[' 8 -le 1 ']' 00:06:17.179 03:06:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:17.179 03:06:51 -- common/autotest_common.sh@10 -- # set +x 00:06:17.179 ************************************ 00:06:17.179 START TEST accel_comp 00:06:17.179 ************************************ 00:06:17.179 03:06:51 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:17.179 03:06:51 -- accel/accel.sh@16 -- # local accel_opc 00:06:17.179 03:06:51 -- accel/accel.sh@17 -- # local accel_module 00:06:17.179 03:06:51 -- accel/accel.sh@19 -- # IFS=: 00:06:17.180 03:06:51 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:17.180 03:06:51 -- accel/accel.sh@19 -- # read -r var val 00:06:17.180 03:06:51 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:17.180 03:06:51 -- accel/accel.sh@12 -- # build_accel_config 00:06:17.180 03:06:51 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:17.180 03:06:51 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:17.180 03:06:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.180 03:06:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.180 03:06:51 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:17.180 03:06:51 -- accel/accel.sh@40 -- # local IFS=, 00:06:17.180 03:06:51 -- accel/accel.sh@41 -- # jq -r . 00:06:17.180 [2024-04-25 03:06:51.579818] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 
00:06:17.180 [2024-04-25 03:06:51.579880] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1387524 ] 00:06:17.180 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.180 [2024-04-25 03:06:51.642261] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.438 [2024-04-25 03:06:51.762638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.438 03:06:51 -- accel/accel.sh@20 -- # val= 00:06:17.438 03:06:51 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.438 03:06:51 -- accel/accel.sh@19 -- # IFS=: 00:06:17.438 03:06:51 -- accel/accel.sh@19 -- # read -r var val 00:06:17.438 03:06:51 -- accel/accel.sh@20 -- # val= 00:06:17.438 03:06:51 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.438 03:06:51 -- accel/accel.sh@19 -- # IFS=: 00:06:17.438 03:06:51 -- accel/accel.sh@19 -- # read -r var val 00:06:17.438 03:06:51 -- accel/accel.sh@20 -- # val= 00:06:17.438 03:06:51 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.438 03:06:51 -- accel/accel.sh@19 -- # IFS=: 00:06:17.438 03:06:51 -- accel/accel.sh@19 -- # read -r var val 00:06:17.438 03:06:51 -- accel/accel.sh@20 -- # val=0x1 00:06:17.438 03:06:51 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.438 03:06:51 -- accel/accel.sh@19 -- # IFS=: 00:06:17.438 03:06:51 -- accel/accel.sh@19 -- # read -r var val 00:06:17.438 03:06:51 -- accel/accel.sh@20 -- # val= 00:06:17.438 03:06:51 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.438 03:06:51 -- accel/accel.sh@19 -- # IFS=: 00:06:17.438 03:06:51 -- accel/accel.sh@19 -- # read -r var val 00:06:17.438 03:06:51 -- accel/accel.sh@20 -- # val= 00:06:17.438 03:06:51 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.438 03:06:51 -- accel/accel.sh@19 -- # IFS=: 00:06:17.438 03:06:51 -- accel/accel.sh@19 -- # read -r var val 00:06:17.438 03:06:51 -- accel/accel.sh@20 
-- # val=compress 00:06:17.438 03:06:51 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.439 03:06:51 -- accel/accel.sh@23 -- # accel_opc=compress 00:06:17.439 03:06:51 -- accel/accel.sh@19 -- # IFS=: 00:06:17.439 03:06:51 -- accel/accel.sh@19 -- # read -r var val 00:06:17.439 03:06:51 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:17.439 03:06:51 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.439 03:06:51 -- accel/accel.sh@19 -- # IFS=: 00:06:17.439 03:06:51 -- accel/accel.sh@19 -- # read -r var val 00:06:17.439 03:06:51 -- accel/accel.sh@20 -- # val= 00:06:17.439 03:06:51 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.439 03:06:51 -- accel/accel.sh@19 -- # IFS=: 00:06:17.439 03:06:51 -- accel/accel.sh@19 -- # read -r var val 00:06:17.439 03:06:51 -- accel/accel.sh@20 -- # val=software 00:06:17.439 03:06:51 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.439 03:06:51 -- accel/accel.sh@22 -- # accel_module=software 00:06:17.439 03:06:51 -- accel/accel.sh@19 -- # IFS=: 00:06:17.439 03:06:51 -- accel/accel.sh@19 -- # read -r var val 00:06:17.439 03:06:51 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:17.439 03:06:51 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.439 03:06:51 -- accel/accel.sh@19 -- # IFS=: 00:06:17.439 03:06:51 -- accel/accel.sh@19 -- # read -r var val 00:06:17.439 03:06:51 -- accel/accel.sh@20 -- # val=32 00:06:17.439 03:06:51 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.439 03:06:51 -- accel/accel.sh@19 -- # IFS=: 00:06:17.439 03:06:51 -- accel/accel.sh@19 -- # read -r var val 00:06:17.439 03:06:51 -- accel/accel.sh@20 -- # val=32 00:06:17.439 03:06:51 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.439 03:06:51 -- accel/accel.sh@19 -- # IFS=: 00:06:17.439 03:06:51 -- accel/accel.sh@19 -- # read -r var val 00:06:17.439 03:06:51 -- accel/accel.sh@20 -- # val=1 00:06:17.439 03:06:51 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.439 03:06:51 -- accel/accel.sh@19 -- # IFS=: 
00:06:17.439 03:06:51 -- accel/accel.sh@19 -- # read -r var val 00:06:17.439 03:06:51 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:17.439 03:06:51 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.439 03:06:51 -- accel/accel.sh@19 -- # IFS=: 00:06:17.439 03:06:51 -- accel/accel.sh@19 -- # read -r var val 00:06:17.439 03:06:51 -- accel/accel.sh@20 -- # val=No 00:06:17.439 03:06:51 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.439 03:06:51 -- accel/accel.sh@19 -- # IFS=: 00:06:17.439 03:06:51 -- accel/accel.sh@19 -- # read -r var val 00:06:17.439 03:06:51 -- accel/accel.sh@20 -- # val= 00:06:17.439 03:06:51 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.439 03:06:51 -- accel/accel.sh@19 -- # IFS=: 00:06:17.439 03:06:51 -- accel/accel.sh@19 -- # read -r var val 00:06:17.439 03:06:51 -- accel/accel.sh@20 -- # val= 00:06:17.439 03:06:51 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.439 03:06:51 -- accel/accel.sh@19 -- # IFS=: 00:06:17.439 03:06:51 -- accel/accel.sh@19 -- # read -r var val 00:06:18.813 03:06:53 -- accel/accel.sh@20 -- # val= 00:06:18.813 03:06:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.813 03:06:53 -- accel/accel.sh@19 -- # IFS=: 00:06:18.813 03:06:53 -- accel/accel.sh@19 -- # read -r var val 00:06:18.813 03:06:53 -- accel/accel.sh@20 -- # val= 00:06:18.813 03:06:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.813 03:06:53 -- accel/accel.sh@19 -- # IFS=: 00:06:18.813 03:06:53 -- accel/accel.sh@19 -- # read -r var val 00:06:18.813 03:06:53 -- accel/accel.sh@20 -- # val= 00:06:18.813 03:06:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.813 03:06:53 -- accel/accel.sh@19 -- # IFS=: 00:06:18.813 03:06:53 -- accel/accel.sh@19 -- # read -r var val 00:06:18.813 03:06:53 -- accel/accel.sh@20 -- # val= 00:06:18.813 03:06:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.813 03:06:53 -- accel/accel.sh@19 -- # IFS=: 00:06:18.813 03:06:53 -- accel/accel.sh@19 -- # read -r var val 00:06:18.813 03:06:53 -- accel/accel.sh@20 -- # 
val= 00:06:18.813 03:06:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.813 03:06:53 -- accel/accel.sh@19 -- # IFS=: 00:06:18.813 03:06:53 -- accel/accel.sh@19 -- # read -r var val 00:06:18.813 03:06:53 -- accel/accel.sh@20 -- # val= 00:06:18.813 03:06:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.813 03:06:53 -- accel/accel.sh@19 -- # IFS=: 00:06:18.813 03:06:53 -- accel/accel.sh@19 -- # read -r var val 00:06:18.813 03:06:53 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:18.813 03:06:53 -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:18.813 03:06:53 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:18.813 00:06:18.813 real 0m1.490s 00:06:18.813 user 0m1.338s 00:06:18.813 sys 0m0.154s 00:06:18.813 03:06:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:18.813 03:06:53 -- common/autotest_common.sh@10 -- # set +x 00:06:18.813 ************************************ 00:06:18.813 END TEST accel_comp 00:06:18.813 ************************************ 00:06:18.813 03:06:53 -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:18.813 03:06:53 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:18.813 03:06:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:18.813 03:06:53 -- common/autotest_common.sh@10 -- # set +x 00:06:18.813 ************************************ 00:06:18.813 START TEST accel_decomp 00:06:18.813 ************************************ 00:06:18.813 03:06:53 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:18.813 03:06:53 -- accel/accel.sh@16 -- # local accel_opc 00:06:18.813 03:06:53 -- accel/accel.sh@17 -- # local accel_module 00:06:18.813 03:06:53 -- accel/accel.sh@19 -- # IFS=: 00:06:18.813 03:06:53 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:18.813 03:06:53 -- accel/accel.sh@19 -- # read -r var val 00:06:18.813 03:06:53 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:18.813 03:06:53 -- accel/accel.sh@12 -- # build_accel_config 00:06:18.813 03:06:53 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:18.813 03:06:53 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:18.813 03:06:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.813 03:06:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.813 03:06:53 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:18.813 03:06:53 -- accel/accel.sh@40 -- # local IFS=, 00:06:18.813 03:06:53 -- accel/accel.sh@41 -- # jq -r . 00:06:18.813 [2024-04-25 03:06:53.186568] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:06:18.814 [2024-04-25 03:06:53.186644] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1387684 ] 00:06:18.814 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.814 [2024-04-25 03:06:53.249367] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.073 [2024-04-25 03:06:53.374033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.073 03:06:53 -- accel/accel.sh@20 -- # val= 00:06:19.073 03:06:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.073 03:06:53 -- accel/accel.sh@19 -- # IFS=: 00:06:19.073 03:06:53 -- accel/accel.sh@19 -- # read -r var val 00:06:19.073 03:06:53 -- accel/accel.sh@20 -- # val= 00:06:19.073 03:06:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.073 03:06:53 -- accel/accel.sh@19 -- # IFS=: 00:06:19.073 03:06:53 -- accel/accel.sh@19 -- # read -r var val 
00:06:19.073 03:06:53 -- accel/accel.sh@20 -- # val= 00:06:19.073 03:06:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.073 03:06:53 -- accel/accel.sh@19 -- # IFS=: 00:06:19.073 03:06:53 -- accel/accel.sh@19 -- # read -r var val 00:06:19.073 03:06:53 -- accel/accel.sh@20 -- # val=0x1 00:06:19.073 03:06:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.073 03:06:53 -- accel/accel.sh@19 -- # IFS=: 00:06:19.073 03:06:53 -- accel/accel.sh@19 -- # read -r var val 00:06:19.073 03:06:53 -- accel/accel.sh@20 -- # val= 00:06:19.073 03:06:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.073 03:06:53 -- accel/accel.sh@19 -- # IFS=: 00:06:19.073 03:06:53 -- accel/accel.sh@19 -- # read -r var val 00:06:19.073 03:06:53 -- accel/accel.sh@20 -- # val= 00:06:19.073 03:06:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.073 03:06:53 -- accel/accel.sh@19 -- # IFS=: 00:06:19.073 03:06:53 -- accel/accel.sh@19 -- # read -r var val 00:06:19.073 03:06:53 -- accel/accel.sh@20 -- # val=decompress 00:06:19.073 03:06:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.073 03:06:53 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:19.073 03:06:53 -- accel/accel.sh@19 -- # IFS=: 00:06:19.073 03:06:53 -- accel/accel.sh@19 -- # read -r var val 00:06:19.073 03:06:53 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:19.073 03:06:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.073 03:06:53 -- accel/accel.sh@19 -- # IFS=: 00:06:19.073 03:06:53 -- accel/accel.sh@19 -- # read -r var val 00:06:19.073 03:06:53 -- accel/accel.sh@20 -- # val= 00:06:19.073 03:06:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.073 03:06:53 -- accel/accel.sh@19 -- # IFS=: 00:06:19.073 03:06:53 -- accel/accel.sh@19 -- # read -r var val 00:06:19.073 03:06:53 -- accel/accel.sh@20 -- # val=software 00:06:19.073 03:06:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.073 03:06:53 -- accel/accel.sh@22 -- # accel_module=software 00:06:19.073 03:06:53 -- accel/accel.sh@19 -- # IFS=: 00:06:19.073 
03:06:53 -- accel/accel.sh@19 -- # read -r var val 00:06:19.073 03:06:53 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:19.073 03:06:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.073 03:06:53 -- accel/accel.sh@19 -- # IFS=: 00:06:19.073 03:06:53 -- accel/accel.sh@19 -- # read -r var val 00:06:19.073 03:06:53 -- accel/accel.sh@20 -- # val=32 00:06:19.073 03:06:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.073 03:06:53 -- accel/accel.sh@19 -- # IFS=: 00:06:19.073 03:06:53 -- accel/accel.sh@19 -- # read -r var val 00:06:19.073 03:06:53 -- accel/accel.sh@20 -- # val=32 00:06:19.073 03:06:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.073 03:06:53 -- accel/accel.sh@19 -- # IFS=: 00:06:19.073 03:06:53 -- accel/accel.sh@19 -- # read -r var val 00:06:19.073 03:06:53 -- accel/accel.sh@20 -- # val=1 00:06:19.073 03:06:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.073 03:06:53 -- accel/accel.sh@19 -- # IFS=: 00:06:19.073 03:06:53 -- accel/accel.sh@19 -- # read -r var val 00:06:19.073 03:06:53 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:19.073 03:06:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.073 03:06:53 -- accel/accel.sh@19 -- # IFS=: 00:06:19.073 03:06:53 -- accel/accel.sh@19 -- # read -r var val 00:06:19.073 03:06:53 -- accel/accel.sh@20 -- # val=Yes 00:06:19.073 03:06:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.073 03:06:53 -- accel/accel.sh@19 -- # IFS=: 00:06:19.073 03:06:53 -- accel/accel.sh@19 -- # read -r var val 00:06:19.073 03:06:53 -- accel/accel.sh@20 -- # val= 00:06:19.073 03:06:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.073 03:06:53 -- accel/accel.sh@19 -- # IFS=: 00:06:19.073 03:06:53 -- accel/accel.sh@19 -- # read -r var val 00:06:19.073 03:06:53 -- accel/accel.sh@20 -- # val= 00:06:19.073 03:06:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.073 03:06:53 -- accel/accel.sh@19 -- # IFS=: 00:06:19.073 03:06:53 -- accel/accel.sh@19 -- # read -r 
var val 00:06:20.448 03:06:54 -- accel/accel.sh@20 -- # val= 00:06:20.448 03:06:54 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.448 03:06:54 -- accel/accel.sh@19 -- # IFS=: 00:06:20.448 03:06:54 -- accel/accel.sh@19 -- # read -r var val 00:06:20.448 03:06:54 -- accel/accel.sh@20 -- # val= 00:06:20.448 03:06:54 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.448 03:06:54 -- accel/accel.sh@19 -- # IFS=: 00:06:20.448 03:06:54 -- accel/accel.sh@19 -- # read -r var val 00:06:20.448 03:06:54 -- accel/accel.sh@20 -- # val= 00:06:20.448 03:06:54 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.448 03:06:54 -- accel/accel.sh@19 -- # IFS=: 00:06:20.448 03:06:54 -- accel/accel.sh@19 -- # read -r var val 00:06:20.448 03:06:54 -- accel/accel.sh@20 -- # val= 00:06:20.448 03:06:54 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.448 03:06:54 -- accel/accel.sh@19 -- # IFS=: 00:06:20.448 03:06:54 -- accel/accel.sh@19 -- # read -r var val 00:06:20.448 03:06:54 -- accel/accel.sh@20 -- # val= 00:06:20.448 03:06:54 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.448 03:06:54 -- accel/accel.sh@19 -- # IFS=: 00:06:20.448 03:06:54 -- accel/accel.sh@19 -- # read -r var val 00:06:20.448 03:06:54 -- accel/accel.sh@20 -- # val= 00:06:20.448 03:06:54 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.448 03:06:54 -- accel/accel.sh@19 -- # IFS=: 00:06:20.448 03:06:54 -- accel/accel.sh@19 -- # read -r var val 00:06:20.448 03:06:54 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:20.448 03:06:54 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:20.448 03:06:54 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:20.448 00:06:20.448 real 0m1.484s 00:06:20.448 user 0m1.340s 00:06:20.448 sys 0m0.146s 00:06:20.448 03:06:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:20.448 03:06:54 -- common/autotest_common.sh@10 -- # set +x 00:06:20.448 ************************************ 00:06:20.448 END TEST accel_decomp 00:06:20.448 ************************************ 
00:06:20.448 03:06:54 -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:20.448 03:06:54 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:20.448 03:06:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:20.448 03:06:54 -- common/autotest_common.sh@10 -- # set +x 00:06:20.448 ************************************ 00:06:20.448 START TEST accel_decmop_full 00:06:20.448 ************************************ 00:06:20.448 03:06:54 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:20.448 03:06:54 -- accel/accel.sh@16 -- # local accel_opc 00:06:20.448 03:06:54 -- accel/accel.sh@17 -- # local accel_module 00:06:20.448 03:06:54 -- accel/accel.sh@19 -- # IFS=: 00:06:20.448 03:06:54 -- accel/accel.sh@19 -- # read -r var val 00:06:20.449 03:06:54 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:20.449 03:06:54 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:20.449 03:06:54 -- accel/accel.sh@12 -- # build_accel_config 00:06:20.449 03:06:54 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:20.449 03:06:54 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:20.449 03:06:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.449 03:06:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.449 03:06:54 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:20.449 03:06:54 -- accel/accel.sh@40 -- # local IFS=, 00:06:20.449 03:06:54 -- accel/accel.sh@41 -- # jq -r . 00:06:20.449 [2024-04-25 03:06:54.796578] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 
00:06:20.449 [2024-04-25 03:06:54.796649] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1387921 ] 00:06:20.449 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.449 [2024-04-25 03:06:54.859297] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.707 [2024-04-25 03:06:54.982477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.707 03:06:55 -- accel/accel.sh@20 -- # val= 00:06:20.707 03:06:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.707 03:06:55 -- accel/accel.sh@19 -- # IFS=: 00:06:20.707 03:06:55 -- accel/accel.sh@19 -- # read -r var val 00:06:20.707 03:06:55 -- accel/accel.sh@20 -- # val= 00:06:20.707 03:06:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.707 03:06:55 -- accel/accel.sh@19 -- # IFS=: 00:06:20.707 03:06:55 -- accel/accel.sh@19 -- # read -r var val 00:06:20.707 03:06:55 -- accel/accel.sh@20 -- # val= 00:06:20.707 03:06:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.707 03:06:55 -- accel/accel.sh@19 -- # IFS=: 00:06:20.707 03:06:55 -- accel/accel.sh@19 -- # read -r var val 00:06:20.707 03:06:55 -- accel/accel.sh@20 -- # val=0x1 00:06:20.708 03:06:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.708 03:06:55 -- accel/accel.sh@19 -- # IFS=: 00:06:20.708 03:06:55 -- accel/accel.sh@19 -- # read -r var val 00:06:20.708 03:06:55 -- accel/accel.sh@20 -- # val= 00:06:20.708 03:06:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.708 03:06:55 -- accel/accel.sh@19 -- # IFS=: 00:06:20.708 03:06:55 -- accel/accel.sh@19 -- # read -r var val 00:06:20.708 03:06:55 -- accel/accel.sh@20 -- # val= 00:06:20.708 03:06:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.708 03:06:55 -- accel/accel.sh@19 -- # IFS=: 00:06:20.708 03:06:55 -- accel/accel.sh@19 -- # read -r var val 00:06:20.708 03:06:55 -- accel/accel.sh@20 
-- # val=decompress 00:06:20.708 03:06:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.708 03:06:55 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:20.708 03:06:55 -- accel/accel.sh@19 -- # IFS=: 00:06:20.708 03:06:55 -- accel/accel.sh@19 -- # read -r var val 00:06:20.708 03:06:55 -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:20.708 03:06:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.708 03:06:55 -- accel/accel.sh@19 -- # IFS=: 00:06:20.708 03:06:55 -- accel/accel.sh@19 -- # read -r var val 00:06:20.708 03:06:55 -- accel/accel.sh@20 -- # val= 00:06:20.708 03:06:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.708 03:06:55 -- accel/accel.sh@19 -- # IFS=: 00:06:20.708 03:06:55 -- accel/accel.sh@19 -- # read -r var val 00:06:20.708 03:06:55 -- accel/accel.sh@20 -- # val=software 00:06:20.708 03:06:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.708 03:06:55 -- accel/accel.sh@22 -- # accel_module=software 00:06:20.708 03:06:55 -- accel/accel.sh@19 -- # IFS=: 00:06:20.708 03:06:55 -- accel/accel.sh@19 -- # read -r var val 00:06:20.708 03:06:55 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:20.708 03:06:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.708 03:06:55 -- accel/accel.sh@19 -- # IFS=: 00:06:20.708 03:06:55 -- accel/accel.sh@19 -- # read -r var val 00:06:20.708 03:06:55 -- accel/accel.sh@20 -- # val=32 00:06:20.708 03:06:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.708 03:06:55 -- accel/accel.sh@19 -- # IFS=: 00:06:20.708 03:06:55 -- accel/accel.sh@19 -- # read -r var val 00:06:20.708 03:06:55 -- accel/accel.sh@20 -- # val=32 00:06:20.708 03:06:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.708 03:06:55 -- accel/accel.sh@19 -- # IFS=: 00:06:20.708 03:06:55 -- accel/accel.sh@19 -- # read -r var val 00:06:20.708 03:06:55 -- accel/accel.sh@20 -- # val=1 00:06:20.708 03:06:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.708 03:06:55 -- accel/accel.sh@19 -- # 
IFS=: 00:06:20.708 03:06:55 -- accel/accel.sh@19 -- # read -r var val 00:06:20.708 03:06:55 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:20.708 03:06:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.708 03:06:55 -- accel/accel.sh@19 -- # IFS=: 00:06:20.708 03:06:55 -- accel/accel.sh@19 -- # read -r var val 00:06:20.708 03:06:55 -- accel/accel.sh@20 -- # val=Yes 00:06:20.708 03:06:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.708 03:06:55 -- accel/accel.sh@19 -- # IFS=: 00:06:20.708 03:06:55 -- accel/accel.sh@19 -- # read -r var val 00:06:20.708 03:06:55 -- accel/accel.sh@20 -- # val= 00:06:20.708 03:06:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.708 03:06:55 -- accel/accel.sh@19 -- # IFS=: 00:06:20.708 03:06:55 -- accel/accel.sh@19 -- # read -r var val 00:06:20.708 03:06:55 -- accel/accel.sh@20 -- # val= 00:06:20.708 03:06:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.708 03:06:55 -- accel/accel.sh@19 -- # IFS=: 00:06:20.708 03:06:55 -- accel/accel.sh@19 -- # read -r var val 00:06:22.086 03:06:56 -- accel/accel.sh@20 -- # val= 00:06:22.086 03:06:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.086 03:06:56 -- accel/accel.sh@19 -- # IFS=: 00:06:22.086 03:06:56 -- accel/accel.sh@19 -- # read -r var val 00:06:22.086 03:06:56 -- accel/accel.sh@20 -- # val= 00:06:22.086 03:06:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.086 03:06:56 -- accel/accel.sh@19 -- # IFS=: 00:06:22.086 03:06:56 -- accel/accel.sh@19 -- # read -r var val 00:06:22.086 03:06:56 -- accel/accel.sh@20 -- # val= 00:06:22.086 03:06:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.086 03:06:56 -- accel/accel.sh@19 -- # IFS=: 00:06:22.086 03:06:56 -- accel/accel.sh@19 -- # read -r var val 00:06:22.086 03:06:56 -- accel/accel.sh@20 -- # val= 00:06:22.086 03:06:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.086 03:06:56 -- accel/accel.sh@19 -- # IFS=: 00:06:22.086 03:06:56 -- accel/accel.sh@19 -- # read -r var val 00:06:22.086 03:06:56 -- accel/accel.sh@20 
-- # val= 00:06:22.086 03:06:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.086 03:06:56 -- accel/accel.sh@19 -- # IFS=: 00:06:22.086 03:06:56 -- accel/accel.sh@19 -- # read -r var val 00:06:22.086 03:06:56 -- accel/accel.sh@20 -- # val= 00:06:22.086 03:06:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.086 03:06:56 -- accel/accel.sh@19 -- # IFS=: 00:06:22.086 03:06:56 -- accel/accel.sh@19 -- # read -r var val 00:06:22.086 03:06:56 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:22.086 03:06:56 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:22.086 03:06:56 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:22.086 00:06:22.086 real 0m1.499s 00:06:22.086 user 0m1.356s 00:06:22.086 sys 0m0.145s 00:06:22.086 03:06:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:22.086 03:06:56 -- common/autotest_common.sh@10 -- # set +x 00:06:22.086 ************************************ 00:06:22.086 END TEST accel_decmop_full 00:06:22.086 ************************************ 00:06:22.086 03:06:56 -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:22.086 03:06:56 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:22.086 03:06:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:22.086 03:06:56 -- common/autotest_common.sh@10 -- # set +x 00:06:22.086 ************************************ 00:06:22.086 START TEST accel_decomp_mcore 00:06:22.086 ************************************ 00:06:22.086 03:06:56 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:22.086 03:06:56 -- accel/accel.sh@16 -- # local accel_opc 00:06:22.086 03:06:56 -- accel/accel.sh@17 -- # local accel_module 00:06:22.086 03:06:56 -- accel/accel.sh@19 -- # IFS=: 00:06:22.086 03:06:56 -- accel/accel.sh@19 -- # read -r var val 00:06:22.086 
03:06:56 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:22.086 03:06:56 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:22.086 03:06:56 -- accel/accel.sh@12 -- # build_accel_config 00:06:22.086 03:06:56 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:22.086 03:06:56 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:22.086 03:06:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.086 03:06:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.086 03:06:56 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:22.086 03:06:56 -- accel/accel.sh@40 -- # local IFS=, 00:06:22.086 03:06:56 -- accel/accel.sh@41 -- # jq -r . 00:06:22.086 [2024-04-25 03:06:56.409230] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:06:22.086 [2024-04-25 03:06:56.409293] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1388140 ] 00:06:22.086 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.086 [2024-04-25 03:06:56.476364] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:22.345 [2024-04-25 03:06:56.601330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:22.345 [2024-04-25 03:06:56.601384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:22.345 [2024-04-25 03:06:56.601434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:22.345 [2024-04-25 03:06:56.601437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.345 03:06:56 -- accel/accel.sh@20 -- # val= 00:06:22.345 03:06:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.345 03:06:56 
-- accel/accel.sh@19 -- # IFS=: 00:06:22.345 03:06:56 -- accel/accel.sh@19 -- # read -r var val 00:06:22.345 03:06:56 -- accel/accel.sh@20 -- # val= 00:06:22.345 03:06:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.345 03:06:56 -- accel/accel.sh@19 -- # IFS=: 00:06:22.345 03:06:56 -- accel/accel.sh@19 -- # read -r var val 00:06:22.345 03:06:56 -- accel/accel.sh@20 -- # val= 00:06:22.345 03:06:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.345 03:06:56 -- accel/accel.sh@19 -- # IFS=: 00:06:22.345 03:06:56 -- accel/accel.sh@19 -- # read -r var val 00:06:22.345 03:06:56 -- accel/accel.sh@20 -- # val=0xf 00:06:22.345 03:06:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.345 03:06:56 -- accel/accel.sh@19 -- # IFS=: 00:06:22.345 03:06:56 -- accel/accel.sh@19 -- # read -r var val 00:06:22.345 03:06:56 -- accel/accel.sh@20 -- # val= 00:06:22.345 03:06:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.345 03:06:56 -- accel/accel.sh@19 -- # IFS=: 00:06:22.345 03:06:56 -- accel/accel.sh@19 -- # read -r var val 00:06:22.345 03:06:56 -- accel/accel.sh@20 -- # val= 00:06:22.345 03:06:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.345 03:06:56 -- accel/accel.sh@19 -- # IFS=: 00:06:22.345 03:06:56 -- accel/accel.sh@19 -- # read -r var val 00:06:22.345 03:06:56 -- accel/accel.sh@20 -- # val=decompress 00:06:22.345 03:06:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.345 03:06:56 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:22.345 03:06:56 -- accel/accel.sh@19 -- # IFS=: 00:06:22.345 03:06:56 -- accel/accel.sh@19 -- # read -r var val 00:06:22.345 03:06:56 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:22.345 03:06:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.345 03:06:56 -- accel/accel.sh@19 -- # IFS=: 00:06:22.345 03:06:56 -- accel/accel.sh@19 -- # read -r var val 00:06:22.345 03:06:56 -- accel/accel.sh@20 -- # val= 00:06:22.345 03:06:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.345 03:06:56 -- accel/accel.sh@19 -- # IFS=: 
00:06:22.345 03:06:56 -- accel/accel.sh@19 -- # read -r var val 00:06:22.345 03:06:56 -- accel/accel.sh@20 -- # val=software 00:06:22.345 03:06:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.345 03:06:56 -- accel/accel.sh@22 -- # accel_module=software 00:06:22.345 03:06:56 -- accel/accel.sh@19 -- # IFS=: 00:06:22.345 03:06:56 -- accel/accel.sh@19 -- # read -r var val 00:06:22.345 03:06:56 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:22.345 03:06:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.345 03:06:56 -- accel/accel.sh@19 -- # IFS=: 00:06:22.345 03:06:56 -- accel/accel.sh@19 -- # read -r var val 00:06:22.345 03:06:56 -- accel/accel.sh@20 -- # val=32 00:06:22.345 03:06:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.345 03:06:56 -- accel/accel.sh@19 -- # IFS=: 00:06:22.345 03:06:56 -- accel/accel.sh@19 -- # read -r var val 00:06:22.345 03:06:56 -- accel/accel.sh@20 -- # val=32 00:06:22.345 03:06:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.345 03:06:56 -- accel/accel.sh@19 -- # IFS=: 00:06:22.345 03:06:56 -- accel/accel.sh@19 -- # read -r var val 00:06:22.345 03:06:56 -- accel/accel.sh@20 -- # val=1 00:06:22.345 03:06:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.345 03:06:56 -- accel/accel.sh@19 -- # IFS=: 00:06:22.345 03:06:56 -- accel/accel.sh@19 -- # read -r var val 00:06:22.345 03:06:56 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:22.345 03:06:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.345 03:06:56 -- accel/accel.sh@19 -- # IFS=: 00:06:22.345 03:06:56 -- accel/accel.sh@19 -- # read -r var val 00:06:22.345 03:06:56 -- accel/accel.sh@20 -- # val=Yes 00:06:22.345 03:06:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.345 03:06:56 -- accel/accel.sh@19 -- # IFS=: 00:06:22.345 03:06:56 -- accel/accel.sh@19 -- # read -r var val 00:06:22.345 03:06:56 -- accel/accel.sh@20 -- # val= 00:06:22.345 03:06:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.345 
03:06:56 -- accel/accel.sh@19 -- # IFS=: 00:06:22.345 03:06:56 -- accel/accel.sh@19 -- # read -r var val 00:06:22.345 03:06:56 -- accel/accel.sh@20 -- # val= 00:06:22.345 03:06:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.345 03:06:56 -- accel/accel.sh@19 -- # IFS=: 00:06:22.345 03:06:56 -- accel/accel.sh@19 -- # read -r var val 00:06:23.719 03:06:57 -- accel/accel.sh@20 -- # val= 00:06:23.719 03:06:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.719 03:06:57 -- accel/accel.sh@19 -- # IFS=: 00:06:23.719 03:06:57 -- accel/accel.sh@19 -- # read -r var val 00:06:23.719 03:06:57 -- accel/accel.sh@20 -- # val= 00:06:23.719 03:06:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.719 03:06:57 -- accel/accel.sh@19 -- # IFS=: 00:06:23.719 03:06:57 -- accel/accel.sh@19 -- # read -r var val 00:06:23.719 03:06:57 -- accel/accel.sh@20 -- # val= 00:06:23.719 03:06:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.719 03:06:57 -- accel/accel.sh@19 -- # IFS=: 00:06:23.719 03:06:57 -- accel/accel.sh@19 -- # read -r var val 00:06:23.719 03:06:57 -- accel/accel.sh@20 -- # val= 00:06:23.719 03:06:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.719 03:06:57 -- accel/accel.sh@19 -- # IFS=: 00:06:23.719 03:06:57 -- accel/accel.sh@19 -- # read -r var val 00:06:23.719 03:06:57 -- accel/accel.sh@20 -- # val= 00:06:23.719 03:06:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.719 03:06:57 -- accel/accel.sh@19 -- # IFS=: 00:06:23.719 03:06:57 -- accel/accel.sh@19 -- # read -r var val 00:06:23.719 03:06:57 -- accel/accel.sh@20 -- # val= 00:06:23.719 03:06:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.719 03:06:57 -- accel/accel.sh@19 -- # IFS=: 00:06:23.719 03:06:57 -- accel/accel.sh@19 -- # read -r var val 00:06:23.719 03:06:57 -- accel/accel.sh@20 -- # val= 00:06:23.719 03:06:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.719 03:06:57 -- accel/accel.sh@19 -- # IFS=: 00:06:23.719 03:06:57 -- accel/accel.sh@19 -- # read -r var val 00:06:23.719 03:06:57 
-- accel/accel.sh@20 -- # val= 00:06:23.719 03:06:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.719 03:06:57 -- accel/accel.sh@19 -- # IFS=: 00:06:23.719 03:06:57 -- accel/accel.sh@19 -- # read -r var val 00:06:23.719 03:06:57 -- accel/accel.sh@20 -- # val= 00:06:23.719 03:06:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.719 03:06:57 -- accel/accel.sh@19 -- # IFS=: 00:06:23.719 03:06:57 -- accel/accel.sh@19 -- # read -r var val 00:06:23.719 03:06:57 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:23.719 03:06:57 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:23.719 03:06:57 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:23.719 00:06:23.719 real 0m1.507s 00:06:23.719 user 0m4.829s 00:06:23.719 sys 0m0.157s 00:06:23.719 03:06:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:23.719 03:06:57 -- common/autotest_common.sh@10 -- # set +x 00:06:23.719 ************************************ 00:06:23.719 END TEST accel_decomp_mcore 00:06:23.719 ************************************ 00:06:23.719 03:06:57 -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:23.719 03:06:57 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:23.719 03:06:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:23.719 03:06:57 -- common/autotest_common.sh@10 -- # set +x 00:06:23.719 ************************************ 00:06:23.719 START TEST accel_decomp_full_mcore 00:06:23.719 ************************************ 00:06:23.719 03:06:58 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:23.719 03:06:58 -- accel/accel.sh@16 -- # local accel_opc 00:06:23.719 03:06:58 -- accel/accel.sh@17 -- # local accel_module 00:06:23.719 03:06:58 -- accel/accel.sh@19 -- # IFS=: 00:06:23.719 03:06:58 -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:23.719 03:06:58 -- accel/accel.sh@19 -- # read -r var val 00:06:23.719 03:06:58 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:23.719 03:06:58 -- accel/accel.sh@12 -- # build_accel_config 00:06:23.719 03:06:58 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:23.719 03:06:58 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:23.719 03:06:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:23.719 03:06:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:23.719 03:06:58 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:23.719 03:06:58 -- accel/accel.sh@40 -- # local IFS=, 00:06:23.719 03:06:58 -- accel/accel.sh@41 -- # jq -r . 00:06:23.719 [2024-04-25 03:06:58.031811] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 
00:06:23.719 [2024-04-25 03:06:58.031875] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1388302 ] 00:06:23.719 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.719 [2024-04-25 03:06:58.093199] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:23.719 [2024-04-25 03:06:58.213910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.719 [2024-04-25 03:06:58.213966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:23.719 [2024-04-25 03:06:58.214022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:23.719 [2024-04-25 03:06:58.214026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.977 03:06:58 -- accel/accel.sh@20 -- # val= 00:06:23.977 03:06:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.977 03:06:58 -- accel/accel.sh@19 -- # IFS=: 00:06:23.977 03:06:58 -- accel/accel.sh@19 -- # read -r var val 00:06:23.977 03:06:58 -- accel/accel.sh@20 -- # val= 00:06:23.977 03:06:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.977 03:06:58 -- accel/accel.sh@19 -- # IFS=: 00:06:23.977 03:06:58 -- accel/accel.sh@19 -- # read -r var val 00:06:23.977 03:06:58 -- accel/accel.sh@20 -- # val= 00:06:23.977 03:06:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.977 03:06:58 -- accel/accel.sh@19 -- # IFS=: 00:06:23.977 03:06:58 -- accel/accel.sh@19 -- # read -r var val 00:06:23.977 03:06:58 -- accel/accel.sh@20 -- # val=0xf 00:06:23.977 03:06:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.977 03:06:58 -- accel/accel.sh@19 -- # IFS=: 00:06:23.977 03:06:58 -- accel/accel.sh@19 -- # read -r var val 00:06:23.977 03:06:58 -- accel/accel.sh@20 -- # val= 00:06:23.977 03:06:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.977 03:06:58 -- accel/accel.sh@19 -- # IFS=: 00:06:23.977 03:06:58 
-- accel/accel.sh@19 -- # read -r var val 00:06:23.977 03:06:58 -- accel/accel.sh@20 -- # val= 00:06:23.977 03:06:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.977 03:06:58 -- accel/accel.sh@19 -- # IFS=: 00:06:23.977 03:06:58 -- accel/accel.sh@19 -- # read -r var val 00:06:23.977 03:06:58 -- accel/accel.sh@20 -- # val=decompress 00:06:23.977 03:06:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.977 03:06:58 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:23.977 03:06:58 -- accel/accel.sh@19 -- # IFS=: 00:06:23.977 03:06:58 -- accel/accel.sh@19 -- # read -r var val 00:06:23.977 03:06:58 -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:23.977 03:06:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.977 03:06:58 -- accel/accel.sh@19 -- # IFS=: 00:06:23.977 03:06:58 -- accel/accel.sh@19 -- # read -r var val 00:06:23.977 03:06:58 -- accel/accel.sh@20 -- # val= 00:06:23.977 03:06:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.977 03:06:58 -- accel/accel.sh@19 -- # IFS=: 00:06:23.977 03:06:58 -- accel/accel.sh@19 -- # read -r var val 00:06:23.977 03:06:58 -- accel/accel.sh@20 -- # val=software 00:06:23.977 03:06:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.977 03:06:58 -- accel/accel.sh@22 -- # accel_module=software 00:06:23.977 03:06:58 -- accel/accel.sh@19 -- # IFS=: 00:06:23.977 03:06:58 -- accel/accel.sh@19 -- # read -r var val 00:06:23.977 03:06:58 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:23.977 03:06:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.977 03:06:58 -- accel/accel.sh@19 -- # IFS=: 00:06:23.977 03:06:58 -- accel/accel.sh@19 -- # read -r var val 00:06:23.977 03:06:58 -- accel/accel.sh@20 -- # val=32 00:06:23.977 03:06:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.977 03:06:58 -- accel/accel.sh@19 -- # IFS=: 00:06:23.977 03:06:58 -- accel/accel.sh@19 -- # read -r var val 00:06:23.977 03:06:58 -- accel/accel.sh@20 -- # val=32 00:06:23.977 03:06:58 -- 
accel/accel.sh@21 -- # case "$var" in 00:06:23.977 03:06:58 -- accel/accel.sh@19 -- # IFS=: 00:06:23.977 03:06:58 -- accel/accel.sh@19 -- # read -r var val 00:06:23.977 03:06:58 -- accel/accel.sh@20 -- # val=1 00:06:23.977 03:06:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.977 03:06:58 -- accel/accel.sh@19 -- # IFS=: 00:06:23.977 03:06:58 -- accel/accel.sh@19 -- # read -r var val 00:06:23.977 03:06:58 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:23.977 03:06:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.977 03:06:58 -- accel/accel.sh@19 -- # IFS=: 00:06:23.977 03:06:58 -- accel/accel.sh@19 -- # read -r var val 00:06:23.978 03:06:58 -- accel/accel.sh@20 -- # val=Yes 00:06:23.978 03:06:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.978 03:06:58 -- accel/accel.sh@19 -- # IFS=: 00:06:23.978 03:06:58 -- accel/accel.sh@19 -- # read -r var val 00:06:23.978 03:06:58 -- accel/accel.sh@20 -- # val= 00:06:23.978 03:06:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.978 03:06:58 -- accel/accel.sh@19 -- # IFS=: 00:06:23.978 03:06:58 -- accel/accel.sh@19 -- # read -r var val 00:06:23.978 03:06:58 -- accel/accel.sh@20 -- # val= 00:06:23.978 03:06:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.978 03:06:58 -- accel/accel.sh@19 -- # IFS=: 00:06:23.978 03:06:58 -- accel/accel.sh@19 -- # read -r var val 00:06:25.352 03:06:59 -- accel/accel.sh@20 -- # val= 00:06:25.352 03:06:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.352 03:06:59 -- accel/accel.sh@19 -- # IFS=: 00:06:25.352 03:06:59 -- accel/accel.sh@19 -- # read -r var val 00:06:25.352 03:06:59 -- accel/accel.sh@20 -- # val= 00:06:25.352 03:06:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.352 03:06:59 -- accel/accel.sh@19 -- # IFS=: 00:06:25.352 03:06:59 -- accel/accel.sh@19 -- # read -r var val 00:06:25.352 03:06:59 -- accel/accel.sh@20 -- # val= 00:06:25.352 03:06:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.352 03:06:59 -- accel/accel.sh@19 -- # IFS=: 00:06:25.352 
03:06:59 -- accel/accel.sh@19 -- # read -r var val 00:06:25.352 03:06:59 -- accel/accel.sh@20 -- # val= 00:06:25.352 03:06:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.352 03:06:59 -- accel/accel.sh@19 -- # IFS=: 00:06:25.352 03:06:59 -- accel/accel.sh@19 -- # read -r var val 00:06:25.352 03:06:59 -- accel/accel.sh@20 -- # val= 00:06:25.352 03:06:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.352 03:06:59 -- accel/accel.sh@19 -- # IFS=: 00:06:25.352 03:06:59 -- accel/accel.sh@19 -- # read -r var val 00:06:25.352 03:06:59 -- accel/accel.sh@20 -- # val= 00:06:25.352 03:06:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.352 03:06:59 -- accel/accel.sh@19 -- # IFS=: 00:06:25.352 03:06:59 -- accel/accel.sh@19 -- # read -r var val 00:06:25.352 03:06:59 -- accel/accel.sh@20 -- # val= 00:06:25.352 03:06:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.352 03:06:59 -- accel/accel.sh@19 -- # IFS=: 00:06:25.352 03:06:59 -- accel/accel.sh@19 -- # read -r var val 00:06:25.352 03:06:59 -- accel/accel.sh@20 -- # val= 00:06:25.352 03:06:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.352 03:06:59 -- accel/accel.sh@19 -- # IFS=: 00:06:25.352 03:06:59 -- accel/accel.sh@19 -- # read -r var val 00:06:25.352 03:06:59 -- accel/accel.sh@20 -- # val= 00:06:25.352 03:06:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.352 03:06:59 -- accel/accel.sh@19 -- # IFS=: 00:06:25.352 03:06:59 -- accel/accel.sh@19 -- # read -r var val 00:06:25.352 03:06:59 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:25.352 03:06:59 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:25.352 03:06:59 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:25.352 00:06:25.352 real 0m1.500s 00:06:25.352 user 0m4.837s 00:06:25.352 sys 0m0.157s 00:06:25.352 03:06:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:25.352 03:06:59 -- common/autotest_common.sh@10 -- # set +x 00:06:25.352 ************************************ 00:06:25.352 END TEST 
accel_decomp_full_mcore 00:06:25.352 ************************************ 00:06:25.352 03:06:59 -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:25.352 03:06:59 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:25.352 03:06:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:25.352 03:06:59 -- common/autotest_common.sh@10 -- # set +x 00:06:25.352 ************************************ 00:06:25.352 START TEST accel_decomp_mthread 00:06:25.352 ************************************ 00:06:25.352 03:06:59 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:25.352 03:06:59 -- accel/accel.sh@16 -- # local accel_opc 00:06:25.352 03:06:59 -- accel/accel.sh@17 -- # local accel_module 00:06:25.352 03:06:59 -- accel/accel.sh@19 -- # IFS=: 00:06:25.352 03:06:59 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:25.352 03:06:59 -- accel/accel.sh@19 -- # read -r var val 00:06:25.352 03:06:59 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:25.352 03:06:59 -- accel/accel.sh@12 -- # build_accel_config 00:06:25.353 03:06:59 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:25.353 03:06:59 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:25.353 03:06:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.353 03:06:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.353 03:06:59 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:25.353 03:06:59 -- accel/accel.sh@40 -- # local IFS=, 00:06:25.353 03:06:59 -- accel/accel.sh@41 -- # jq -r . 
00:06:25.353 [2024-04-25 03:06:59.648039] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:06:25.353 [2024-04-25 03:06:59.648104] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1388590 ] 00:06:25.353 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.353 [2024-04-25 03:06:59.709671] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.353 [2024-04-25 03:06:59.830854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.614 03:06:59 -- accel/accel.sh@20 -- # val= 00:06:25.615 03:06:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.615 03:06:59 -- accel/accel.sh@19 -- # IFS=: 00:06:25.615 03:06:59 -- accel/accel.sh@19 -- # read -r var val 00:06:25.615 03:06:59 -- accel/accel.sh@20 -- # val= 00:06:25.615 03:06:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.615 03:06:59 -- accel/accel.sh@19 -- # IFS=: 00:06:25.615 03:06:59 -- accel/accel.sh@19 -- # read -r var val 00:06:25.615 03:06:59 -- accel/accel.sh@20 -- # val= 00:06:25.615 03:06:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.615 03:06:59 -- accel/accel.sh@19 -- # IFS=: 00:06:25.615 03:06:59 -- accel/accel.sh@19 -- # read -r var val 00:06:25.615 03:06:59 -- accel/accel.sh@20 -- # val=0x1 00:06:25.615 03:06:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.615 03:06:59 -- accel/accel.sh@19 -- # IFS=: 00:06:25.615 03:06:59 -- accel/accel.sh@19 -- # read -r var val 00:06:25.615 03:06:59 -- accel/accel.sh@20 -- # val= 00:06:25.615 03:06:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.615 03:06:59 -- accel/accel.sh@19 -- # IFS=: 00:06:25.615 03:06:59 -- accel/accel.sh@19 -- # read -r var val 00:06:25.615 03:06:59 -- accel/accel.sh@20 -- # val= 00:06:25.615 03:06:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.615 03:06:59 -- 
accel/accel.sh@19 -- # IFS=: 00:06:25.615 03:06:59 -- accel/accel.sh@19 -- # read -r var val 00:06:25.615 03:06:59 -- accel/accel.sh@20 -- # val=decompress 00:06:25.615 03:06:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.615 03:06:59 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:25.615 03:06:59 -- accel/accel.sh@19 -- # IFS=: 00:06:25.615 03:06:59 -- accel/accel.sh@19 -- # read -r var val 00:06:25.615 03:06:59 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:25.615 03:06:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.615 03:06:59 -- accel/accel.sh@19 -- # IFS=: 00:06:25.615 03:06:59 -- accel/accel.sh@19 -- # read -r var val 00:06:25.615 03:06:59 -- accel/accel.sh@20 -- # val= 00:06:25.615 03:06:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.615 03:06:59 -- accel/accel.sh@19 -- # IFS=: 00:06:25.615 03:06:59 -- accel/accel.sh@19 -- # read -r var val 00:06:25.615 03:06:59 -- accel/accel.sh@20 -- # val=software 00:06:25.615 03:06:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.615 03:06:59 -- accel/accel.sh@22 -- # accel_module=software 00:06:25.615 03:06:59 -- accel/accel.sh@19 -- # IFS=: 00:06:25.615 03:06:59 -- accel/accel.sh@19 -- # read -r var val 00:06:25.615 03:06:59 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:25.615 03:06:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.615 03:06:59 -- accel/accel.sh@19 -- # IFS=: 00:06:25.615 03:06:59 -- accel/accel.sh@19 -- # read -r var val 00:06:25.615 03:06:59 -- accel/accel.sh@20 -- # val=32 00:06:25.615 03:06:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.615 03:06:59 -- accel/accel.sh@19 -- # IFS=: 00:06:25.615 03:06:59 -- accel/accel.sh@19 -- # read -r var val 00:06:25.615 03:06:59 -- accel/accel.sh@20 -- # val=32 00:06:25.615 03:06:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.615 03:06:59 -- accel/accel.sh@19 -- # IFS=: 00:06:25.615 03:06:59 -- accel/accel.sh@19 -- # read -r var val 00:06:25.615 03:06:59 -- 
accel/accel.sh@20 -- # val=2 00:06:25.615 03:06:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.615 03:06:59 -- accel/accel.sh@19 -- # IFS=: 00:06:25.615 03:06:59 -- accel/accel.sh@19 -- # read -r var val 00:06:25.615 03:06:59 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:25.615 03:06:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.615 03:06:59 -- accel/accel.sh@19 -- # IFS=: 00:06:25.615 03:06:59 -- accel/accel.sh@19 -- # read -r var val 00:06:25.615 03:06:59 -- accel/accel.sh@20 -- # val=Yes 00:06:25.615 03:06:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.615 03:06:59 -- accel/accel.sh@19 -- # IFS=: 00:06:25.615 03:06:59 -- accel/accel.sh@19 -- # read -r var val 00:06:25.615 03:06:59 -- accel/accel.sh@20 -- # val= 00:06:25.615 03:06:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.615 03:06:59 -- accel/accel.sh@19 -- # IFS=: 00:06:25.615 03:06:59 -- accel/accel.sh@19 -- # read -r var val 00:06:25.615 03:06:59 -- accel/accel.sh@20 -- # val= 00:06:25.615 03:06:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.615 03:06:59 -- accel/accel.sh@19 -- # IFS=: 00:06:25.615 03:06:59 -- accel/accel.sh@19 -- # read -r var val 00:06:26.990 03:07:01 -- accel/accel.sh@20 -- # val= 00:06:26.990 03:07:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.990 03:07:01 -- accel/accel.sh@19 -- # IFS=: 00:06:26.990 03:07:01 -- accel/accel.sh@19 -- # read -r var val 00:06:26.990 03:07:01 -- accel/accel.sh@20 -- # val= 00:06:26.990 03:07:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.990 03:07:01 -- accel/accel.sh@19 -- # IFS=: 00:06:26.990 03:07:01 -- accel/accel.sh@19 -- # read -r var val 00:06:26.990 03:07:01 -- accel/accel.sh@20 -- # val= 00:06:26.990 03:07:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.990 03:07:01 -- accel/accel.sh@19 -- # IFS=: 00:06:26.990 03:07:01 -- accel/accel.sh@19 -- # read -r var val 00:06:26.990 03:07:01 -- accel/accel.sh@20 -- # val= 00:06:26.990 03:07:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.990 03:07:01 
-- accel/accel.sh@19 -- # IFS=: 00:06:26.990 03:07:01 -- accel/accel.sh@19 -- # read -r var val 00:06:26.990 03:07:01 -- accel/accel.sh@20 -- # val= 00:06:26.990 03:07:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.990 03:07:01 -- accel/accel.sh@19 -- # IFS=: 00:06:26.990 03:07:01 -- accel/accel.sh@19 -- # read -r var val 00:06:26.990 03:07:01 -- accel/accel.sh@20 -- # val= 00:06:26.990 03:07:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.990 03:07:01 -- accel/accel.sh@19 -- # IFS=: 00:06:26.990 03:07:01 -- accel/accel.sh@19 -- # read -r var val 00:06:26.990 03:07:01 -- accel/accel.sh@20 -- # val= 00:06:26.990 03:07:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.990 03:07:01 -- accel/accel.sh@19 -- # IFS=: 00:06:26.990 03:07:01 -- accel/accel.sh@19 -- # read -r var val 00:06:26.990 03:07:01 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:26.990 03:07:01 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:26.990 03:07:01 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:26.990 00:06:26.990 real 0m1.496s 00:06:26.990 user 0m1.356s 00:06:26.990 sys 0m0.142s 00:06:26.990 03:07:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:26.990 03:07:01 -- common/autotest_common.sh@10 -- # set +x 00:06:26.990 ************************************ 00:06:26.990 END TEST accel_decomp_mthread 00:06:26.990 ************************************ 00:06:26.990 03:07:01 -- accel/accel.sh@122 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:26.990 03:07:01 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:26.990 03:07:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:26.990 03:07:01 -- common/autotest_common.sh@10 -- # set +x 00:06:26.990 ************************************ 00:06:26.990 START TEST accel_deomp_full_mthread 00:06:26.990 ************************************ 00:06:26.990 03:07:01 -- 
common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:26.990 03:07:01 -- accel/accel.sh@16 -- # local accel_opc 00:06:26.990 03:07:01 -- accel/accel.sh@17 -- # local accel_module 00:06:26.990 03:07:01 -- accel/accel.sh@19 -- # IFS=: 00:06:26.990 03:07:01 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:26.990 03:07:01 -- accel/accel.sh@19 -- # read -r var val 00:06:26.990 03:07:01 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:26.990 03:07:01 -- accel/accel.sh@12 -- # build_accel_config 00:06:26.990 03:07:01 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:26.990 03:07:01 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:26.990 03:07:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.990 03:07:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.990 03:07:01 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:26.990 03:07:01 -- accel/accel.sh@40 -- # local IFS=, 00:06:26.990 03:07:01 -- accel/accel.sh@41 -- # jq -r . 00:06:26.990 [2024-04-25 03:07:01.260284] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 
00:06:26.990 [2024-04-25 03:07:01.260346] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1388753 ] 00:06:26.990 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.990 [2024-04-25 03:07:01.323256] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.990 [2024-04-25 03:07:01.444720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.249 03:07:01 -- accel/accel.sh@20 -- # val= 00:06:27.249 03:07:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.249 03:07:01 -- accel/accel.sh@19 -- # IFS=: 00:06:27.249 03:07:01 -- accel/accel.sh@19 -- # read -r var val 00:06:27.249 03:07:01 -- accel/accel.sh@20 -- # val= 00:06:27.249 03:07:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.249 03:07:01 -- accel/accel.sh@19 -- # IFS=: 00:06:27.249 03:07:01 -- accel/accel.sh@19 -- # read -r var val 00:06:27.249 03:07:01 -- accel/accel.sh@20 -- # val= 00:06:27.249 03:07:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.249 03:07:01 -- accel/accel.sh@19 -- # IFS=: 00:06:27.249 03:07:01 -- accel/accel.sh@19 -- # read -r var val 00:06:27.249 03:07:01 -- accel/accel.sh@20 -- # val=0x1 00:06:27.249 03:07:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.249 03:07:01 -- accel/accel.sh@19 -- # IFS=: 00:06:27.249 03:07:01 -- accel/accel.sh@19 -- # read -r var val 00:06:27.249 03:07:01 -- accel/accel.sh@20 -- # val= 00:06:27.249 03:07:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.249 03:07:01 -- accel/accel.sh@19 -- # IFS=: 00:06:27.249 03:07:01 -- accel/accel.sh@19 -- # read -r var val 00:06:27.249 03:07:01 -- accel/accel.sh@20 -- # val= 00:06:27.249 03:07:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.249 03:07:01 -- accel/accel.sh@19 -- # IFS=: 00:06:27.249 03:07:01 -- accel/accel.sh@19 -- # read -r var val 00:06:27.249 03:07:01 -- accel/accel.sh@20 
-- # val=decompress 00:06:27.249 03:07:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.249 03:07:01 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:27.249 03:07:01 -- accel/accel.sh@19 -- # IFS=: 00:06:27.249 03:07:01 -- accel/accel.sh@19 -- # read -r var val 00:06:27.249 03:07:01 -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:27.249 03:07:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.249 03:07:01 -- accel/accel.sh@19 -- # IFS=: 00:06:27.249 03:07:01 -- accel/accel.sh@19 -- # read -r var val 00:06:27.249 03:07:01 -- accel/accel.sh@20 -- # val= 00:06:27.249 03:07:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.249 03:07:01 -- accel/accel.sh@19 -- # IFS=: 00:06:27.249 03:07:01 -- accel/accel.sh@19 -- # read -r var val 00:06:27.249 03:07:01 -- accel/accel.sh@20 -- # val=software 00:06:27.249 03:07:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.249 03:07:01 -- accel/accel.sh@22 -- # accel_module=software 00:06:27.249 03:07:01 -- accel/accel.sh@19 -- # IFS=: 00:06:27.249 03:07:01 -- accel/accel.sh@19 -- # read -r var val 00:06:27.249 03:07:01 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:27.249 03:07:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.249 03:07:01 -- accel/accel.sh@19 -- # IFS=: 00:06:27.249 03:07:01 -- accel/accel.sh@19 -- # read -r var val 00:06:27.249 03:07:01 -- accel/accel.sh@20 -- # val=32 00:06:27.249 03:07:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.249 03:07:01 -- accel/accel.sh@19 -- # IFS=: 00:06:27.249 03:07:01 -- accel/accel.sh@19 -- # read -r var val 00:06:27.249 03:07:01 -- accel/accel.sh@20 -- # val=32 00:06:27.249 03:07:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.249 03:07:01 -- accel/accel.sh@19 -- # IFS=: 00:06:27.249 03:07:01 -- accel/accel.sh@19 -- # read -r var val 00:06:27.249 03:07:01 -- accel/accel.sh@20 -- # val=2 00:06:27.249 03:07:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.249 03:07:01 -- accel/accel.sh@19 -- # 
IFS=: 00:06:27.249 03:07:01 -- accel/accel.sh@19 -- # read -r var val 00:06:27.249 03:07:01 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:27.249 03:07:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.249 03:07:01 -- accel/accel.sh@19 -- # IFS=: 00:06:27.249 03:07:01 -- accel/accel.sh@19 -- # read -r var val 00:06:27.249 03:07:01 -- accel/accel.sh@20 -- # val=Yes 00:06:27.249 03:07:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.249 03:07:01 -- accel/accel.sh@19 -- # IFS=: 00:06:27.249 03:07:01 -- accel/accel.sh@19 -- # read -r var val 00:06:27.249 03:07:01 -- accel/accel.sh@20 -- # val= 00:06:27.249 03:07:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.249 03:07:01 -- accel/accel.sh@19 -- # IFS=: 00:06:27.249 03:07:01 -- accel/accel.sh@19 -- # read -r var val 00:06:27.249 03:07:01 -- accel/accel.sh@20 -- # val= 00:06:27.249 03:07:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.249 03:07:01 -- accel/accel.sh@19 -- # IFS=: 00:06:27.249 03:07:01 -- accel/accel.sh@19 -- # read -r var val 00:06:28.623 03:07:02 -- accel/accel.sh@20 -- # val= 00:06:28.623 03:07:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.623 03:07:02 -- accel/accel.sh@19 -- # IFS=: 00:06:28.623 03:07:02 -- accel/accel.sh@19 -- # read -r var val 00:06:28.623 03:07:02 -- accel/accel.sh@20 -- # val= 00:06:28.623 03:07:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.623 03:07:02 -- accel/accel.sh@19 -- # IFS=: 00:06:28.623 03:07:02 -- accel/accel.sh@19 -- # read -r var val 00:06:28.623 03:07:02 -- accel/accel.sh@20 -- # val= 00:06:28.623 03:07:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.623 03:07:02 -- accel/accel.sh@19 -- # IFS=: 00:06:28.623 03:07:02 -- accel/accel.sh@19 -- # read -r var val 00:06:28.623 03:07:02 -- accel/accel.sh@20 -- # val= 00:06:28.623 03:07:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.623 03:07:02 -- accel/accel.sh@19 -- # IFS=: 00:06:28.623 03:07:02 -- accel/accel.sh@19 -- # read -r var val 00:06:28.623 03:07:02 -- accel/accel.sh@20 
-- # val= 00:06:28.623 03:07:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.623 03:07:02 -- accel/accel.sh@19 -- # IFS=: 00:06:28.623 03:07:02 -- accel/accel.sh@19 -- # read -r var val 00:06:28.623 03:07:02 -- accel/accel.sh@20 -- # val= 00:06:28.623 03:07:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.623 03:07:02 -- accel/accel.sh@19 -- # IFS=: 00:06:28.623 03:07:02 -- accel/accel.sh@19 -- # read -r var val 00:06:28.623 03:07:02 -- accel/accel.sh@20 -- # val= 00:06:28.623 03:07:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.623 03:07:02 -- accel/accel.sh@19 -- # IFS=: 00:06:28.623 03:07:02 -- accel/accel.sh@19 -- # read -r var val 00:06:28.623 03:07:02 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:28.623 03:07:02 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:28.623 03:07:02 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:28.623 00:06:28.623 real 0m1.521s 00:06:28.623 user 0m1.375s 00:06:28.623 sys 0m0.148s 00:06:28.623 03:07:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:28.623 03:07:02 -- common/autotest_common.sh@10 -- # set +x 00:06:28.623 ************************************ 00:06:28.623 END TEST accel_deomp_full_mthread 00:06:28.623 ************************************ 00:06:28.623 03:07:02 -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:28.623 03:07:02 -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:28.623 03:07:02 -- accel/accel.sh@137 -- # build_accel_config 00:06:28.623 03:07:02 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:28.623 03:07:02 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:28.623 03:07:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:28.623 03:07:02 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:28.623 03:07:02 -- common/autotest_common.sh@10 -- # set +x 00:06:28.623 03:07:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.623 03:07:02 -- 
accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.623 03:07:02 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:28.623 03:07:02 -- accel/accel.sh@40 -- # local IFS=, 00:06:28.623 03:07:02 -- accel/accel.sh@41 -- # jq -r . 00:06:28.623 ************************************ 00:06:28.623 START TEST accel_dif_functional_tests 00:06:28.623 ************************************ 00:06:28.623 03:07:02 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:28.623 [2024-04-25 03:07:02.926559] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:06:28.623 [2024-04-25 03:07:02.926642] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1389036 ] 00:06:28.623 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.623 [2024-04-25 03:07:02.996062] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:28.623 [2024-04-25 03:07:03.117841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.623 [2024-04-25 03:07:03.117904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:28.623 [2024-04-25 03:07:03.117908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.882 00:06:28.882 00:06:28.882 CUnit - A unit testing framework for C - Version 2.1-3 00:06:28.882 http://cunit.sourceforge.net/ 00:06:28.882 00:06:28.882 00:06:28.882 Suite: accel_dif 00:06:28.882 Test: verify: DIF generated, GUARD check ...passed 00:06:28.882 Test: verify: DIF generated, APPTAG check ...passed 00:06:28.882 Test: verify: DIF generated, REFTAG check ...passed 00:06:28.882 Test: verify: DIF not generated, GUARD check ...[2024-04-25 03:07:03.217078] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:28.882 [2024-04-25 03:07:03.217150] 
dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:28.882 passed 00:06:28.882 Test: verify: DIF not generated, APPTAG check ...[2024-04-25 03:07:03.217191] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:28.882 [2024-04-25 03:07:03.217221] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:28.882 passed 00:06:28.882 Test: verify: DIF not generated, REFTAG check ...[2024-04-25 03:07:03.217257] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:28.882 [2024-04-25 03:07:03.217287] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:28.882 passed 00:06:28.882 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:28.882 Test: verify: APPTAG incorrect, APPTAG check ...[2024-04-25 03:07:03.217359] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:28.882 passed 00:06:28.882 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:28.882 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:28.882 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:28.882 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-04-25 03:07:03.217515] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:28.882 passed 00:06:28.882 Test: generate copy: DIF generated, GUARD check ...passed 00:06:28.882 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:28.882 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:28.882 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:28.882 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:28.882 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:28.882 Test: generate copy: 
iovecs-len validate ...[2024-04-25 03:07:03.217798] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:06:28.882 passed 00:06:28.882 Test: generate copy: buffer alignment validate ...passed 00:06:28.882 00:06:28.882 Run Summary: Type Total Ran Passed Failed Inactive 00:06:28.882 suites 1 1 n/a 0 0 00:06:28.882 tests 20 20 20 0 0 00:06:28.882 asserts 204 204 204 0 n/a 00:06:28.882 00:06:28.882 Elapsed time = 0.003 seconds 00:06:29.142 00:06:29.142 real 0m0.605s 00:06:29.142 user 0m0.877s 00:06:29.142 sys 0m0.183s 00:06:29.142 03:07:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:29.142 03:07:03 -- common/autotest_common.sh@10 -- # set +x 00:06:29.142 ************************************ 00:06:29.142 END TEST accel_dif_functional_tests 00:06:29.142 ************************************ 00:06:29.142 00:06:29.142 real 0m35.411s 00:06:29.142 user 0m37.698s 00:06:29.142 sys 0m5.455s 00:06:29.142 03:07:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:29.142 03:07:03 -- common/autotest_common.sh@10 -- # set +x 00:06:29.142 ************************************ 00:06:29.142 END TEST accel 00:06:29.142 ************************************ 00:06:29.142 03:07:03 -- spdk/autotest.sh@180 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:29.142 03:07:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:29.142 03:07:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:29.142 03:07:03 -- common/autotest_common.sh@10 -- # set +x 00:06:29.142 ************************************ 00:06:29.142 START TEST accel_rpc 00:06:29.143 ************************************ 00:06:29.143 03:07:03 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:29.401 * Looking for test storage... 
00:06:29.401 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:29.401 03:07:03 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:29.401 03:07:03 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1389121 00:06:29.401 03:07:03 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:29.401 03:07:03 -- accel/accel_rpc.sh@15 -- # waitforlisten 1389121 00:06:29.401 03:07:03 -- common/autotest_common.sh@817 -- # '[' -z 1389121 ']' 00:06:29.401 03:07:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.401 03:07:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:29.401 03:07:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.401 03:07:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:29.401 03:07:03 -- common/autotest_common.sh@10 -- # set +x 00:06:29.401 [2024-04-25 03:07:03.744127] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 
00:06:29.401 [2024-04-25 03:07:03.744219] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1389121 ] 00:06:29.401 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.401 [2024-04-25 03:07:03.803059] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.661 [2024-04-25 03:07:03.909599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.661 03:07:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:29.661 03:07:03 -- common/autotest_common.sh@850 -- # return 0 00:06:29.661 03:07:03 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:29.661 03:07:03 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:29.661 03:07:03 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:29.661 03:07:03 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:29.661 03:07:03 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:29.661 03:07:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:29.661 03:07:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:29.661 03:07:03 -- common/autotest_common.sh@10 -- # set +x 00:06:29.661 ************************************ 00:06:29.661 START TEST accel_assign_opcode 00:06:29.661 ************************************ 00:06:29.661 03:07:04 -- common/autotest_common.sh@1111 -- # accel_assign_opcode_test_suite 00:06:29.661 03:07:04 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:29.661 03:07:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:29.661 03:07:04 -- common/autotest_common.sh@10 -- # set +x 00:06:29.661 [2024-04-25 03:07:04.050420] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:29.661 03:07:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:29.661 03:07:04 -- 
accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:29.661 03:07:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:29.661 03:07:04 -- common/autotest_common.sh@10 -- # set +x 00:06:29.661 [2024-04-25 03:07:04.058418] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:29.661 03:07:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:29.661 03:07:04 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:29.661 03:07:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:29.661 03:07:04 -- common/autotest_common.sh@10 -- # set +x 00:06:29.921 03:07:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:29.921 03:07:04 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:29.921 03:07:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:29.921 03:07:04 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:29.921 03:07:04 -- common/autotest_common.sh@10 -- # set +x 00:06:29.921 03:07:04 -- accel/accel_rpc.sh@42 -- # grep software 00:06:29.921 03:07:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:29.921 software 00:06:29.921 00:06:29.921 real 0m0.301s 00:06:29.921 user 0m0.041s 00:06:29.921 sys 0m0.006s 00:06:29.921 03:07:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:29.921 03:07:04 -- common/autotest_common.sh@10 -- # set +x 00:06:29.921 ************************************ 00:06:29.921 END TEST accel_assign_opcode 00:06:29.921 ************************************ 00:06:29.921 03:07:04 -- accel/accel_rpc.sh@55 -- # killprocess 1389121 00:06:29.921 03:07:04 -- common/autotest_common.sh@936 -- # '[' -z 1389121 ']' 00:06:29.921 03:07:04 -- common/autotest_common.sh@940 -- # kill -0 1389121 00:06:29.921 03:07:04 -- common/autotest_common.sh@941 -- # uname 00:06:29.921 03:07:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:29.921 03:07:04 -- common/autotest_common.sh@942 -- # ps 
--no-headers -o comm= 1389121 00:06:29.921 03:07:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:29.921 03:07:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:29.921 03:07:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1389121' 00:06:29.921 killing process with pid 1389121 00:06:29.921 03:07:04 -- common/autotest_common.sh@955 -- # kill 1389121 00:06:29.921 03:07:04 -- common/autotest_common.sh@960 -- # wait 1389121 00:06:30.490 00:06:30.490 real 0m1.239s 00:06:30.490 user 0m1.202s 00:06:30.490 sys 0m0.454s 00:06:30.490 03:07:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:30.490 03:07:04 -- common/autotest_common.sh@10 -- # set +x 00:06:30.490 ************************************ 00:06:30.490 END TEST accel_rpc 00:06:30.490 ************************************ 00:06:30.490 03:07:04 -- spdk/autotest.sh@181 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:30.490 03:07:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:30.490 03:07:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:30.490 03:07:04 -- common/autotest_common.sh@10 -- # set +x 00:06:30.749 ************************************ 00:06:30.749 START TEST app_cmdline 00:06:30.749 ************************************ 00:06:30.749 03:07:05 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:30.749 * Looking for test storage... 
00:06:30.749 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:30.749 03:07:05 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:30.749 03:07:05 -- app/cmdline.sh@17 -- # spdk_tgt_pid=1389453 00:06:30.749 03:07:05 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:30.749 03:07:05 -- app/cmdline.sh@18 -- # waitforlisten 1389453 00:06:30.749 03:07:05 -- common/autotest_common.sh@817 -- # '[' -z 1389453 ']' 00:06:30.749 03:07:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.749 03:07:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:30.749 03:07:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.749 03:07:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:30.749 03:07:05 -- common/autotest_common.sh@10 -- # set +x 00:06:30.749 [2024-04-25 03:07:05.109951] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 
00:06:30.749 [2024-04-25 03:07:05.110042] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1389453 ] 00:06:30.749 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.749 [2024-04-25 03:07:05.166930] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.008 [2024-04-25 03:07:05.271447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.268 03:07:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:31.268 03:07:05 -- common/autotest_common.sh@850 -- # return 0 00:06:31.268 03:07:05 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:31.268 { 00:06:31.268 "version": "SPDK v24.05-pre git sha1 abd932d6f", 00:06:31.268 "fields": { 00:06:31.268 "major": 24, 00:06:31.268 "minor": 5, 00:06:31.268 "patch": 0, 00:06:31.268 "suffix": "-pre", 00:06:31.268 "commit": "abd932d6f" 00:06:31.268 } 00:06:31.268 } 00:06:31.527 03:07:05 -- app/cmdline.sh@22 -- # expected_methods=() 00:06:31.527 03:07:05 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:31.527 03:07:05 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:31.527 03:07:05 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:31.527 03:07:05 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:31.527 03:07:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:31.527 03:07:05 -- common/autotest_common.sh@10 -- # set +x 00:06:31.527 03:07:05 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:31.527 03:07:05 -- app/cmdline.sh@26 -- # sort 00:06:31.527 03:07:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:31.527 03:07:05 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:31.527 03:07:05 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:31.528 03:07:05 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:31.528 03:07:05 -- common/autotest_common.sh@638 -- # local es=0 00:06:31.528 03:07:05 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:31.528 03:07:05 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:31.528 03:07:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:31.528 03:07:05 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:31.528 03:07:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:31.528 03:07:05 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:31.528 03:07:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:31.528 03:07:05 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:31.528 03:07:05 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:31.528 03:07:05 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:31.787 request: 00:06:31.787 { 00:06:31.787 "method": "env_dpdk_get_mem_stats", 00:06:31.787 "req_id": 1 00:06:31.787 } 00:06:31.787 Got JSON-RPC error response 00:06:31.787 response: 00:06:31.787 { 00:06:31.787 "code": -32601, 00:06:31.787 "message": "Method not found" 00:06:31.787 } 00:06:31.787 03:07:06 -- common/autotest_common.sh@641 -- # es=1 00:06:31.787 03:07:06 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:31.787 03:07:06 -- 
common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:31.787 03:07:06 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:31.787 03:07:06 -- app/cmdline.sh@1 -- # killprocess 1389453 00:06:31.787 03:07:06 -- common/autotest_common.sh@936 -- # '[' -z 1389453 ']' 00:06:31.787 03:07:06 -- common/autotest_common.sh@940 -- # kill -0 1389453 00:06:31.787 03:07:06 -- common/autotest_common.sh@941 -- # uname 00:06:31.787 03:07:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:31.787 03:07:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1389453 00:06:31.787 03:07:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:31.787 03:07:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:31.787 03:07:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1389453' 00:06:31.787 killing process with pid 1389453 00:06:31.787 03:07:06 -- common/autotest_common.sh@955 -- # kill 1389453 00:06:31.787 03:07:06 -- common/autotest_common.sh@960 -- # wait 1389453 00:06:32.357 00:06:32.357 real 0m1.611s 00:06:32.357 user 0m1.989s 00:06:32.357 sys 0m0.446s 00:06:32.357 03:07:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:32.357 03:07:06 -- common/autotest_common.sh@10 -- # set +x 00:06:32.357 ************************************ 00:06:32.357 END TEST app_cmdline 00:06:32.357 ************************************ 00:06:32.357 03:07:06 -- spdk/autotest.sh@182 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:32.357 03:07:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:32.357 03:07:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:32.357 03:07:06 -- common/autotest_common.sh@10 -- # set +x 00:06:32.357 ************************************ 00:06:32.357 START TEST version 00:06:32.357 ************************************ 00:06:32.357 03:07:06 -- common/autotest_common.sh@1111 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:32.357 * Looking for test storage... 00:06:32.357 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:32.357 03:07:06 -- app/version.sh@17 -- # get_header_version major 00:06:32.357 03:07:06 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:32.357 03:07:06 -- app/version.sh@14 -- # cut -f2 00:06:32.357 03:07:06 -- app/version.sh@14 -- # tr -d '"' 00:06:32.357 03:07:06 -- app/version.sh@17 -- # major=24 00:06:32.357 03:07:06 -- app/version.sh@18 -- # get_header_version minor 00:06:32.357 03:07:06 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:32.357 03:07:06 -- app/version.sh@14 -- # cut -f2 00:06:32.357 03:07:06 -- app/version.sh@14 -- # tr -d '"' 00:06:32.357 03:07:06 -- app/version.sh@18 -- # minor=5 00:06:32.357 03:07:06 -- app/version.sh@19 -- # get_header_version patch 00:06:32.357 03:07:06 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:32.357 03:07:06 -- app/version.sh@14 -- # cut -f2 00:06:32.357 03:07:06 -- app/version.sh@14 -- # tr -d '"' 00:06:32.357 03:07:06 -- app/version.sh@19 -- # patch=0 00:06:32.357 03:07:06 -- app/version.sh@20 -- # get_header_version suffix 00:06:32.357 03:07:06 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:32.357 03:07:06 -- app/version.sh@14 -- # cut -f2 00:06:32.357 03:07:06 -- app/version.sh@14 -- # tr -d '"' 00:06:32.357 03:07:06 -- app/version.sh@20 -- # suffix=-pre 00:06:32.357 03:07:06 -- app/version.sh@22 -- # version=24.5 00:06:32.357 03:07:06 -- app/version.sh@25 -- # (( patch != 0 )) 
00:06:32.357 03:07:06 -- app/version.sh@28 -- # version=24.5rc0 00:06:32.357 03:07:06 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:32.357 03:07:06 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:32.616 03:07:06 -- app/version.sh@30 -- # py_version=24.5rc0 00:06:32.616 03:07:06 -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:06:32.616 00:06:32.616 real 0m0.112s 00:06:32.616 user 0m0.063s 00:06:32.616 sys 0m0.072s 00:06:32.616 03:07:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:32.616 03:07:06 -- common/autotest_common.sh@10 -- # set +x 00:06:32.617 ************************************ 00:06:32.617 END TEST version 00:06:32.617 ************************************ 00:06:32.617 03:07:06 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:06:32.617 03:07:06 -- spdk/autotest.sh@194 -- # uname -s 00:06:32.617 03:07:06 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:32.617 03:07:06 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:32.617 03:07:06 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:32.617 03:07:06 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:32.617 03:07:06 -- spdk/autotest.sh@254 -- # '[' 0 -eq 1 ']' 00:06:32.617 03:07:06 -- spdk/autotest.sh@258 -- # timing_exit lib 00:06:32.617 03:07:06 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:32.617 03:07:06 -- common/autotest_common.sh@10 -- # set +x 00:06:32.617 03:07:06 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:06:32.617 03:07:06 -- spdk/autotest.sh@268 -- # '[' 0 -eq 1 ']' 00:06:32.617 03:07:06 -- spdk/autotest.sh@277 -- # '[' 1 -eq 1 ']' 00:06:32.617 03:07:06 -- spdk/autotest.sh@278 -- # export NET_TYPE 00:06:32.617 03:07:06 
-- spdk/autotest.sh@281 -- # '[' tcp = rdma ']' 00:06:32.617 03:07:06 -- spdk/autotest.sh@284 -- # '[' tcp = tcp ']' 00:06:32.617 03:07:06 -- spdk/autotest.sh@285 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:32.617 03:07:06 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:32.617 03:07:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:32.617 03:07:06 -- common/autotest_common.sh@10 -- # set +x 00:06:32.617 ************************************ 00:06:32.617 START TEST nvmf_tcp 00:06:32.617 ************************************ 00:06:32.617 03:07:07 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:32.617 * Looking for test storage... 00:06:32.617 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:32.617 03:07:07 -- nvmf/nvmf.sh@10 -- # uname -s 00:06:32.617 03:07:07 -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:32.617 03:07:07 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:32.617 03:07:07 -- nvmf/common.sh@7 -- # uname -s 00:06:32.617 03:07:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:32.617 03:07:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:32.617 03:07:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:32.617 03:07:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:32.617 03:07:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:32.617 03:07:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:32.617 03:07:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:32.617 03:07:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:32.617 03:07:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:32.617 03:07:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:32.617 03:07:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:32.617 03:07:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:32.617 03:07:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:32.617 03:07:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:32.617 03:07:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:32.617 03:07:07 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:32.617 03:07:07 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:32.617 03:07:07 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:32.617 03:07:07 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:32.617 03:07:07 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:32.617 03:07:07 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.617 03:07:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.617 03:07:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.617 03:07:07 -- paths/export.sh@5 -- # export PATH 00:06:32.617 03:07:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.617 03:07:07 -- nvmf/common.sh@47 -- # : 0 00:06:32.617 03:07:07 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:32.617 03:07:07 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:32.617 
03:07:07 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:32.617 03:07:07 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:32.617 03:07:07 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:32.617 03:07:07 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:32.617 03:07:07 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:32.617 03:07:07 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:32.617 03:07:07 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:32.617 03:07:07 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:32.617 03:07:07 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:32.617 03:07:07 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:32.617 03:07:07 -- common/autotest_common.sh@10 -- # set +x 00:06:32.617 03:07:07 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:32.617 03:07:07 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:32.617 03:07:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:32.617 03:07:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:32.617 03:07:07 -- common/autotest_common.sh@10 -- # set +x 00:06:32.877 ************************************ 00:06:32.877 START TEST nvmf_example 00:06:32.877 ************************************ 00:06:32.877 03:07:07 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:32.877 * Looking for test storage... 
00:06:32.877 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:32.877 03:07:07 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:32.877 03:07:07 -- nvmf/common.sh@7 -- # uname -s 00:06:32.877 03:07:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:32.877 03:07:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:32.877 03:07:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:32.877 03:07:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:32.877 03:07:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:32.877 03:07:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:32.877 03:07:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:32.877 03:07:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:32.877 03:07:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:32.877 03:07:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:32.877 03:07:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:32.877 03:07:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:32.877 03:07:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:32.877 03:07:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:32.877 03:07:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:32.877 03:07:07 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:32.877 03:07:07 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:32.877 03:07:07 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:32.877 03:07:07 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:32.877 03:07:07 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:32.877 03:07:07 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.877 03:07:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.877 03:07:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.877 03:07:07 -- paths/export.sh@5 -- # export PATH 00:06:32.877 03:07:07 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.877 03:07:07 -- nvmf/common.sh@47 -- # : 0 00:06:32.877 03:07:07 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:32.877 03:07:07 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:32.877 03:07:07 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:32.877 03:07:07 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:32.877 03:07:07 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:32.877 03:07:07 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:32.877 03:07:07 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:32.877 03:07:07 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:32.877 03:07:07 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:32.877 03:07:07 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:32.877 03:07:07 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:32.877 03:07:07 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:32.877 03:07:07 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:32.877 03:07:07 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:32.877 03:07:07 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:32.877 03:07:07 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:32.877 03:07:07 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:32.877 03:07:07 -- common/autotest_common.sh@10 -- # set +x 00:06:32.877 03:07:07 -- 
target/nvmf_example.sh@41 -- # nvmftestinit 00:06:32.877 03:07:07 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:06:32.877 03:07:07 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:32.877 03:07:07 -- nvmf/common.sh@437 -- # prepare_net_devs 00:06:32.877 03:07:07 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:06:32.877 03:07:07 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:06:32.877 03:07:07 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:32.877 03:07:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:32.877 03:07:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:32.877 03:07:07 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:06:32.877 03:07:07 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:06:32.877 03:07:07 -- nvmf/common.sh@285 -- # xtrace_disable 00:06:32.877 03:07:07 -- common/autotest_common.sh@10 -- # set +x 00:06:35.409 03:07:09 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:06:35.409 03:07:09 -- nvmf/common.sh@291 -- # pci_devs=() 00:06:35.409 03:07:09 -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:35.409 03:07:09 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:35.409 03:07:09 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:35.409 03:07:09 -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:35.409 03:07:09 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:35.409 03:07:09 -- nvmf/common.sh@295 -- # net_devs=() 00:06:35.409 03:07:09 -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:35.409 03:07:09 -- nvmf/common.sh@296 -- # e810=() 00:06:35.409 03:07:09 -- nvmf/common.sh@296 -- # local -ga e810 00:06:35.409 03:07:09 -- nvmf/common.sh@297 -- # x722=() 00:06:35.409 03:07:09 -- nvmf/common.sh@297 -- # local -ga x722 00:06:35.409 03:07:09 -- nvmf/common.sh@298 -- # mlx=() 00:06:35.409 03:07:09 -- nvmf/common.sh@298 -- # local -ga mlx 00:06:35.410 03:07:09 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:06:35.410 03:07:09 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:35.410 03:07:09 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:35.410 03:07:09 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:35.410 03:07:09 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:35.410 03:07:09 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:35.410 03:07:09 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:35.410 03:07:09 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:35.410 03:07:09 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:35.410 03:07:09 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:35.410 03:07:09 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:35.410 03:07:09 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:35.410 03:07:09 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:35.410 03:07:09 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:35.410 03:07:09 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:35.410 03:07:09 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:35.410 03:07:09 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:35.410 03:07:09 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:35.410 03:07:09 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:35.410 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:35.410 03:07:09 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:35.410 03:07:09 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:35.410 03:07:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:35.410 03:07:09 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:35.410 03:07:09 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:35.410 03:07:09 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 
00:06:35.410 03:07:09 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:35.410 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:35.410 03:07:09 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:35.410 03:07:09 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:35.410 03:07:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:35.410 03:07:09 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:35.410 03:07:09 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:35.410 03:07:09 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:35.410 03:07:09 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:35.410 03:07:09 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:35.410 03:07:09 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:35.410 03:07:09 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:35.410 03:07:09 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:35.410 03:07:09 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:35.410 03:07:09 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:35.410 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:35.410 03:07:09 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:35.410 03:07:09 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:35.410 03:07:09 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:35.410 03:07:09 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:35.410 03:07:09 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:35.410 03:07:09 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:35.410 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:35.410 03:07:09 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:35.410 03:07:09 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:06:35.410 03:07:09 -- nvmf/common.sh@403 -- # is_hw=yes 00:06:35.410 03:07:09 -- 
nvmf/common.sh@405 -- # [[ yes == yes ]] 00:06:35.410 03:07:09 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:06:35.410 03:07:09 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:06:35.410 03:07:09 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:35.410 03:07:09 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:35.410 03:07:09 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:35.410 03:07:09 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:35.410 03:07:09 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:35.410 03:07:09 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:35.410 03:07:09 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:35.410 03:07:09 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:35.410 03:07:09 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:35.410 03:07:09 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:35.410 03:07:09 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:35.410 03:07:09 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:35.410 03:07:09 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:35.410 03:07:09 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:35.410 03:07:09 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:35.410 03:07:09 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:35.410 03:07:09 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:35.410 03:07:09 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:35.410 03:07:09 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:35.410 03:07:09 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:35.410 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:35.410 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.147 ms 00:06:35.410 00:06:35.410 --- 10.0.0.2 ping statistics --- 00:06:35.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:35.410 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:06:35.410 03:07:09 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:35.410 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:35.410 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:06:35.410 00:06:35.410 --- 10.0.0.1 ping statistics --- 00:06:35.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:35.410 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:06:35.410 03:07:09 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:35.410 03:07:09 -- nvmf/common.sh@411 -- # return 0 00:06:35.410 03:07:09 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:06:35.410 03:07:09 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:35.410 03:07:09 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:06:35.410 03:07:09 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:06:35.410 03:07:09 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:35.410 03:07:09 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:06:35.410 03:07:09 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:06:35.410 03:07:09 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:35.410 03:07:09 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:35.410 03:07:09 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:35.410 03:07:09 -- common/autotest_common.sh@10 -- # set +x 00:06:35.410 03:07:09 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:06:35.410 03:07:09 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:06:35.410 03:07:09 -- target/nvmf_example.sh@34 -- # nvmfpid=1391496 00:06:35.410 03:07:09 -- target/nvmf_example.sh@33 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:35.410 03:07:09 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:35.410 03:07:09 -- target/nvmf_example.sh@36 -- # waitforlisten 1391496 00:06:35.410 03:07:09 -- common/autotest_common.sh@817 -- # '[' -z 1391496 ']' 00:06:35.410 03:07:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.410 03:07:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:35.410 03:07:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.410 03:07:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:35.410 03:07:09 -- common/autotest_common.sh@10 -- # set +x 00:06:35.410 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.350 03:07:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:36.350 03:07:10 -- common/autotest_common.sh@850 -- # return 0 00:06:36.350 03:07:10 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:36.350 03:07:10 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:36.350 03:07:10 -- common/autotest_common.sh@10 -- # set +x 00:06:36.350 03:07:10 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:36.350 03:07:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:36.350 03:07:10 -- common/autotest_common.sh@10 -- # set +x 00:06:36.350 03:07:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:36.350 03:07:10 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:36.350 03:07:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:36.350 03:07:10 -- common/autotest_common.sh@10 -- # set +x 00:06:36.350 03:07:10 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:36.350 03:07:10 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:36.350 03:07:10 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:36.350 03:07:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:36.350 03:07:10 -- common/autotest_common.sh@10 -- # set +x 00:06:36.350 03:07:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:36.350 03:07:10 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:36.350 03:07:10 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:36.350 03:07:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:36.350 03:07:10 -- common/autotest_common.sh@10 -- # set +x 00:06:36.350 03:07:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:36.350 03:07:10 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:36.350 03:07:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:36.350 03:07:10 -- common/autotest_common.sh@10 -- # set +x 00:06:36.350 03:07:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:36.350 03:07:10 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:06:36.350 03:07:10 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:06:36.350 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.598 Initializing NVMe Controllers 00:06:48.598 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:48.598 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:48.598 Initialization complete. 
Launching workers. 00:06:48.598 ======================================================== 00:06:48.598 Latency(us) 00:06:48.598 Device Information : IOPS MiB/s Average min max 00:06:48.598 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14829.00 57.93 4317.62 883.09 19983.85 00:06:48.598 ======================================================== 00:06:48.598 Total : 14829.00 57.93 4317.62 883.09 19983.85 00:06:48.598 00:06:48.598 03:07:20 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:06:48.598 03:07:20 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:06:48.598 03:07:20 -- nvmf/common.sh@477 -- # nvmfcleanup 00:06:48.598 03:07:20 -- nvmf/common.sh@117 -- # sync 00:06:48.598 03:07:20 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:48.598 03:07:20 -- nvmf/common.sh@120 -- # set +e 00:06:48.598 03:07:20 -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:48.598 03:07:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:48.598 rmmod nvme_tcp 00:06:48.598 rmmod nvme_fabrics 00:06:48.598 rmmod nvme_keyring 00:06:48.598 03:07:20 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:48.598 03:07:20 -- nvmf/common.sh@124 -- # set -e 00:06:48.598 03:07:20 -- nvmf/common.sh@125 -- # return 0 00:06:48.598 03:07:20 -- nvmf/common.sh@478 -- # '[' -n 1391496 ']' 00:06:48.598 03:07:20 -- nvmf/common.sh@479 -- # killprocess 1391496 00:06:48.598 03:07:20 -- common/autotest_common.sh@936 -- # '[' -z 1391496 ']' 00:06:48.598 03:07:20 -- common/autotest_common.sh@940 -- # kill -0 1391496 00:06:48.598 03:07:20 -- common/autotest_common.sh@941 -- # uname 00:06:48.598 03:07:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:48.598 03:07:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1391496 00:06:48.598 03:07:20 -- common/autotest_common.sh@942 -- # process_name=nvmf 00:06:48.598 03:07:20 -- common/autotest_common.sh@946 -- # '[' nvmf = sudo ']' 00:06:48.598 03:07:20 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 1391496' 00:06:48.599 killing process with pid 1391496 00:06:48.599 03:07:20 -- common/autotest_common.sh@955 -- # kill 1391496 00:06:48.599 03:07:20 -- common/autotest_common.sh@960 -- # wait 1391496 00:06:48.599 nvmf threads initialize successfully 00:06:48.599 bdev subsystem init successfully 00:06:48.599 created a nvmf target service 00:06:48.599 create targets's poll groups done 00:06:48.599 all subsystems of target started 00:06:48.599 nvmf target is running 00:06:48.599 all subsystems of target stopped 00:06:48.599 destroy targets's poll groups done 00:06:48.599 destroyed the nvmf target service 00:06:48.599 bdev subsystem finish successfully 00:06:48.599 nvmf threads destroy successfully 00:06:48.599 03:07:21 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:06:48.599 03:07:21 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:06:48.599 03:07:21 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:06:48.599 03:07:21 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:48.599 03:07:21 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:48.599 03:07:21 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:48.599 03:07:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:48.599 03:07:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:48.864 03:07:23 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:48.864 03:07:23 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:06:48.864 03:07:23 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:48.864 03:07:23 -- common/autotest_common.sh@10 -- # set +x 00:06:48.864 00:06:48.864 real 0m16.060s 00:06:48.864 user 0m45.485s 00:06:48.864 sys 0m3.298s 00:06:48.864 03:07:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:48.864 03:07:23 -- common/autotest_common.sh@10 -- # set +x 00:06:48.864 ************************************ 00:06:48.864 END TEST 
nvmf_example 00:06:48.864 ************************************ 00:06:48.864 03:07:23 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:48.864 03:07:23 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:48.864 03:07:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:48.864 03:07:23 -- common/autotest_common.sh@10 -- # set +x 00:06:49.125 ************************************ 00:06:49.125 START TEST nvmf_filesystem 00:06:49.125 ************************************ 00:06:49.125 03:07:23 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:49.125 * Looking for test storage... 00:06:49.125 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:49.125 03:07:23 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:06:49.125 03:07:23 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:49.125 03:07:23 -- common/autotest_common.sh@34 -- # set -e 00:06:49.125 03:07:23 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:49.125 03:07:23 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:49.125 03:07:23 -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:06:49.125 03:07:23 -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:06:49.125 03:07:23 -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:06:49.125 03:07:23 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:49.125 03:07:23 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:49.125 03:07:23 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:49.125 03:07:23 -- 
common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:49.125 03:07:23 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:06:49.125 03:07:23 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:49.125 03:07:23 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:49.125 03:07:23 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:49.125 03:07:23 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:49.125 03:07:23 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:49.125 03:07:23 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:49.125 03:07:23 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:49.125 03:07:23 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:49.125 03:07:23 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:49.125 03:07:23 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:49.125 03:07:23 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:49.125 03:07:23 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:49.125 03:07:23 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:49.125 03:07:23 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:49.125 03:07:23 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:49.125 03:07:23 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:49.125 03:07:23 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:49.125 03:07:23 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:49.125 03:07:23 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:49.125 03:07:23 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:49.125 03:07:23 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:49.125 03:07:23 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:49.125 03:07:23 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:49.125 03:07:23 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 
00:06:49.125 03:07:23 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:49.125 03:07:23 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:49.125 03:07:23 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:49.125 03:07:23 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:49.125 03:07:23 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:49.125 03:07:23 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:49.125 03:07:23 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:49.125 03:07:23 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:49.125 03:07:23 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:49.125 03:07:23 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:49.125 03:07:23 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:49.125 03:07:23 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:49.125 03:07:23 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:49.125 03:07:23 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:49.126 03:07:23 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:49.126 03:07:23 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:49.126 03:07:23 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:06:49.126 03:07:23 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:06:49.126 03:07:23 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:49.126 03:07:23 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:06:49.126 03:07:23 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:06:49.126 03:07:23 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:06:49.126 03:07:23 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:06:49.126 03:07:23 -- common/build_config.sh@53 -- # CONFIG_HAVE_EVP_MAC=y 00:06:49.126 03:07:23 -- common/build_config.sh@54 -- # CONFIG_URING_ZNS=n 00:06:49.126 03:07:23 -- common/build_config.sh@55 -- # CONFIG_WERROR=y 
00:06:49.126 03:07:23 -- common/build_config.sh@56 -- # CONFIG_HAVE_LIBBSD=n 00:06:49.126 03:07:23 -- common/build_config.sh@57 -- # CONFIG_UBSAN=y 00:06:49.126 03:07:23 -- common/build_config.sh@58 -- # CONFIG_IPSEC_MB_DIR= 00:06:49.126 03:07:23 -- common/build_config.sh@59 -- # CONFIG_GOLANG=n 00:06:49.126 03:07:23 -- common/build_config.sh@60 -- # CONFIG_ISAL=y 00:06:49.126 03:07:23 -- common/build_config.sh@61 -- # CONFIG_IDXD_KERNEL=n 00:06:49.126 03:07:23 -- common/build_config.sh@62 -- # CONFIG_DPDK_LIB_DIR= 00:06:49.126 03:07:23 -- common/build_config.sh@63 -- # CONFIG_RDMA_PROV=verbs 00:06:49.126 03:07:23 -- common/build_config.sh@64 -- # CONFIG_APPS=y 00:06:49.126 03:07:23 -- common/build_config.sh@65 -- # CONFIG_SHARED=y 00:06:49.126 03:07:23 -- common/build_config.sh@66 -- # CONFIG_HAVE_KEYUTILS=n 00:06:49.126 03:07:23 -- common/build_config.sh@67 -- # CONFIG_FC_PATH= 00:06:49.126 03:07:23 -- common/build_config.sh@68 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:49.126 03:07:23 -- common/build_config.sh@69 -- # CONFIG_FC=n 00:06:49.126 03:07:23 -- common/build_config.sh@70 -- # CONFIG_AVAHI=n 00:06:49.126 03:07:23 -- common/build_config.sh@71 -- # CONFIG_FIO_PLUGIN=y 00:06:49.126 03:07:23 -- common/build_config.sh@72 -- # CONFIG_RAID5F=n 00:06:49.126 03:07:23 -- common/build_config.sh@73 -- # CONFIG_EXAMPLES=y 00:06:49.126 03:07:23 -- common/build_config.sh@74 -- # CONFIG_TESTS=y 00:06:49.126 03:07:23 -- common/build_config.sh@75 -- # CONFIG_CRYPTO_MLX5=n 00:06:49.126 03:07:23 -- common/build_config.sh@76 -- # CONFIG_MAX_LCORES= 00:06:49.126 03:07:23 -- common/build_config.sh@77 -- # CONFIG_IPSEC_MB=n 00:06:49.126 03:07:23 -- common/build_config.sh@78 -- # CONFIG_PGO_DIR= 00:06:49.126 03:07:23 -- common/build_config.sh@79 -- # CONFIG_DEBUG=y 00:06:49.126 03:07:23 -- common/build_config.sh@80 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:49.126 03:07:23 -- common/build_config.sh@81 -- # CONFIG_CROSS_PREFIX= 00:06:49.126 03:07:23 -- common/build_config.sh@82 -- # 
CONFIG_URING=n 00:06:49.126 03:07:23 -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:49.126 03:07:23 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:49.126 03:07:23 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:49.126 03:07:23 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:49.126 03:07:23 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:49.126 03:07:23 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:49.126 03:07:23 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:49.126 03:07:23 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:49.126 03:07:23 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:49.126 03:07:23 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:49.126 03:07:23 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:49.126 03:07:23 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:49.126 03:07:23 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:49.126 03:07:23 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:49.126 03:07:23 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:06:49.126 03:07:23 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:49.126 #define SPDK_CONFIG_H 00:06:49.126 #define SPDK_CONFIG_APPS 1 00:06:49.126 #define SPDK_CONFIG_ARCH native 00:06:49.126 #undef SPDK_CONFIG_ASAN 00:06:49.126 
#undef SPDK_CONFIG_AVAHI 00:06:49.126 #undef SPDK_CONFIG_CET 00:06:49.126 #define SPDK_CONFIG_COVERAGE 1 00:06:49.126 #define SPDK_CONFIG_CROSS_PREFIX 00:06:49.126 #undef SPDK_CONFIG_CRYPTO 00:06:49.126 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:49.126 #undef SPDK_CONFIG_CUSTOMOCF 00:06:49.126 #undef SPDK_CONFIG_DAOS 00:06:49.126 #define SPDK_CONFIG_DAOS_DIR 00:06:49.126 #define SPDK_CONFIG_DEBUG 1 00:06:49.126 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:49.126 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:49.126 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:49.126 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:49.126 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:49.126 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:49.126 #define SPDK_CONFIG_EXAMPLES 1 00:06:49.126 #undef SPDK_CONFIG_FC 00:06:49.126 #define SPDK_CONFIG_FC_PATH 00:06:49.126 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:49.126 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:49.126 #undef SPDK_CONFIG_FUSE 00:06:49.126 #undef SPDK_CONFIG_FUZZER 00:06:49.126 #define SPDK_CONFIG_FUZZER_LIB 00:06:49.126 #undef SPDK_CONFIG_GOLANG 00:06:49.126 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:49.126 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:49.126 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:49.126 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:06:49.126 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:49.126 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:49.126 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:49.126 #define SPDK_CONFIG_IDXD 1 00:06:49.126 #undef SPDK_CONFIG_IDXD_KERNEL 00:06:49.126 #undef SPDK_CONFIG_IPSEC_MB 00:06:49.126 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:49.126 #define SPDK_CONFIG_ISAL 1 00:06:49.126 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:49.126 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:49.126 #define SPDK_CONFIG_LIBDIR 00:06:49.126 #undef SPDK_CONFIG_LTO 00:06:49.126 #define SPDK_CONFIG_MAX_LCORES 00:06:49.126 #define SPDK_CONFIG_NVME_CUSE 1 
00:06:49.126 #undef SPDK_CONFIG_OCF
00:06:49.126 #define SPDK_CONFIG_OCF_PATH
00:06:49.126 #define SPDK_CONFIG_OPENSSL_PATH
00:06:49.126 #undef SPDK_CONFIG_PGO_CAPTURE
00:06:49.126 #define SPDK_CONFIG_PGO_DIR
00:06:49.126 #undef SPDK_CONFIG_PGO_USE
00:06:49.126 #define SPDK_CONFIG_PREFIX /usr/local
00:06:49.126 #undef SPDK_CONFIG_RAID5F
00:06:49.126 #undef SPDK_CONFIG_RBD
00:06:49.126 #define SPDK_CONFIG_RDMA 1
00:06:49.126 #define SPDK_CONFIG_RDMA_PROV verbs
00:06:49.126 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1
00:06:49.126 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1
00:06:49.126 #define SPDK_CONFIG_RDMA_SET_TOS 1
00:06:49.126 #define SPDK_CONFIG_SHARED 1
00:06:49.126 #undef SPDK_CONFIG_SMA
00:06:49.126 #define SPDK_CONFIG_TESTS 1
00:06:49.126 #undef SPDK_CONFIG_TSAN
00:06:49.126 #define SPDK_CONFIG_UBLK 1
00:06:49.126 #define SPDK_CONFIG_UBSAN 1
00:06:49.126 #undef SPDK_CONFIG_UNIT_TESTS
00:06:49.126 #undef SPDK_CONFIG_URING
00:06:49.126 #define SPDK_CONFIG_URING_PATH
00:06:49.126 #undef SPDK_CONFIG_URING_ZNS
00:06:49.126 #undef SPDK_CONFIG_USDT
00:06:49.126 #undef SPDK_CONFIG_VBDEV_COMPRESS
00:06:49.126 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5
00:06:49.126 #undef SPDK_CONFIG_VFIO_USER
00:06:49.126 #define SPDK_CONFIG_VFIO_USER_DIR
00:06:49.126 #define SPDK_CONFIG_VHOST 1
00:06:49.126 #define SPDK_CONFIG_VIRTIO 1
00:06:49.126 #undef SPDK_CONFIG_VTUNE
00:06:49.126 #define SPDK_CONFIG_VTUNE_DIR
00:06:49.126 #define SPDK_CONFIG_WERROR 1
00:06:49.126 #define SPDK_CONFIG_WPDK_DIR
00:06:49.126 #undef SPDK_CONFIG_XNVME
00:06:49.126 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]]
00:06:49.126 03:07:23 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS ))
00:06:49.126 03:07:23 -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:06:49.126 03:07:23 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:06:49.126 03:07:23 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:06:49.126 03:07:23 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:06:49.126 03:07:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:49.126 03:07:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:49.126 03:07:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:49.126 03:07:23 -- paths/export.sh@5 -- # export PATH
00:06:49.126 03:07:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:49.126 03:07:23 -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common
00:06:49.126 03:07:23 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common
00:06:49.126 03:07:23 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm
00:06:49.127 03:07:23 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm
00:06:49.127 03:07:23 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../
00:06:49.127 03:07:23 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:06:49.127 03:07:23 -- pm/common@67 -- # TEST_TAG=N/A
00:06:49.127 03:07:23 -- pm/common@68 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name
00:06:49.127 03:07:23 -- pm/common@70 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power
00:06:49.127 03:07:23 -- pm/common@71 -- # uname -s
00:06:49.127 03:07:23 -- pm/common@71 -- # PM_OS=Linux
00:06:49.127 03:07:23 -- pm/common@73 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)
00:06:49.127 03:07:23 -- pm/common@74 -- # [[ Linux == FreeBSD ]]
00:06:49.127 03:07:23 -- pm/common@76 -- # [[ Linux == Linux ]]
00:06:49.127 03:07:23 -- pm/common@76 -- # [[ ............................... != QEMU ]]
00:06:49.127 03:07:23 -- pm/common@76 -- # [[ ! -e /.dockerenv ]]
00:06:49.127 03:07:23 -- pm/common@79 -- # MONITOR_RESOURCES+=(collect-cpu-temp)
00:06:49.127 03:07:23 -- pm/common@80 -- # MONITOR_RESOURCES+=(collect-bmc-pm)
00:06:49.127 03:07:23 -- pm/common@83 -- # MONITOR_RESOURCES_PIDS=()
00:06:49.127 03:07:23 -- pm/common@83 -- # declare -A MONITOR_RESOURCES_PIDS
00:06:49.127 03:07:23 -- pm/common@85 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power
00:06:49.127 03:07:23 -- common/autotest_common.sh@57 -- # : 1
00:06:49.127 03:07:23 -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY
00:06:49.127 03:07:23 -- common/autotest_common.sh@61 -- # : 0
00:06:49.127 03:07:23 -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS
00:06:49.127 03:07:23 -- common/autotest_common.sh@63 -- # : 0
00:06:49.127 03:07:23 -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND
00:06:49.127 03:07:23 -- common/autotest_common.sh@65 -- # : 1
00:06:49.127 03:07:23 -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST
00:06:49.127 03:07:23 -- common/autotest_common.sh@67 -- # : 0
00:06:49.127 03:07:23 -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST
00:06:49.127 03:07:23 -- common/autotest_common.sh@69 -- # :
00:06:49.127 03:07:23 -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD
00:06:49.127 03:07:23 -- common/autotest_common.sh@71 -- # : 0
00:06:49.127 03:07:23 -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD
00:06:49.127 03:07:23 -- common/autotest_common.sh@73 -- # : 0
00:06:49.127 03:07:23 -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL
00:06:49.127 03:07:23 -- common/autotest_common.sh@75 -- # : 0
00:06:49.127 03:07:23 -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI
00:06:49.127 03:07:23 -- common/autotest_common.sh@77 -- # : 0
00:06:49.127 03:07:23 -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR
00:06:49.127 03:07:23 -- common/autotest_common.sh@79 -- # : 0
00:06:49.127 03:07:23 -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME
00:06:49.127 03:07:23 -- common/autotest_common.sh@81 -- # : 0
00:06:49.127 03:07:23 -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR
00:06:49.127 03:07:23 -- common/autotest_common.sh@83 -- # : 0
00:06:49.127 03:07:23 -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP
00:06:49.127 03:07:23 -- common/autotest_common.sh@85 -- # : 1
00:06:49.127 03:07:23 -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI
00:06:49.127 03:07:23 -- common/autotest_common.sh@87 -- # : 0
00:06:49.127 03:07:23 -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE
00:06:49.127 03:07:23 -- common/autotest_common.sh@89 -- # : 0
00:06:49.127 03:07:23 -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP
00:06:49.127 03:07:23 -- common/autotest_common.sh@91 -- # : 1
00:06:49.127 03:07:23 -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF
00:06:49.127 03:07:23 -- common/autotest_common.sh@93 -- # : 0
00:06:49.127 03:07:23 -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER
00:06:49.127 03:07:23 -- common/autotest_common.sh@95 -- # : 0
00:06:49.127 03:07:23 -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU
00:06:49.127 03:07:23 -- common/autotest_common.sh@97 -- # : 0
00:06:49.127 03:07:23 -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER
00:06:49.127 03:07:23 -- common/autotest_common.sh@99 -- # : 0
00:06:49.127 03:07:23 -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT
00:06:49.127 03:07:23 -- common/autotest_common.sh@101 -- # : tcp
00:06:49.127 03:07:23 -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT
00:06:49.127 03:07:23 -- common/autotest_common.sh@103 -- # : 0
00:06:49.127 03:07:23 -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD
00:06:49.127 03:07:23 -- common/autotest_common.sh@105 -- # : 0
00:06:49.127 03:07:23 -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST
00:06:49.127 03:07:23 -- common/autotest_common.sh@107 -- # : 0
00:06:49.127 03:07:23 -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV
00:06:49.127 03:07:23 -- common/autotest_common.sh@109 -- # : 0
00:06:49.127 03:07:23 -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT
00:06:49.127 03:07:23 -- common/autotest_common.sh@111 -- # : 0
00:06:49.127 03:07:23 -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS
00:06:49.127 03:07:23 -- common/autotest_common.sh@113 -- # : 0
00:06:49.127 03:07:23 -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT
00:06:49.127 03:07:23 -- common/autotest_common.sh@115 -- # : 0
00:06:49.127 03:07:23 -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL
00:06:49.127 03:07:23 -- common/autotest_common.sh@117 -- # : 0
00:06:49.127 03:07:23 -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS
00:06:49.127 03:07:23 -- common/autotest_common.sh@119 -- # : 0
00:06:49.127 03:07:23 -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN
00:06:49.127 03:07:23 -- common/autotest_common.sh@121 -- # : 1
00:06:49.127 03:07:23 -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN
00:06:49.127 03:07:23 -- common/autotest_common.sh@123 -- # :
00:06:49.127 03:07:23 -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK
00:06:49.127 03:07:23 -- common/autotest_common.sh@125 -- # : 0
00:06:49.127 03:07:23 -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT
00:06:49.127 03:07:23 -- common/autotest_common.sh@127 -- # : 0
00:06:49.127 03:07:23 -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO
00:06:49.127 03:07:23 -- common/autotest_common.sh@129 -- # : 0
00:06:49.127 03:07:23 -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL
00:06:49.127 03:07:23 -- common/autotest_common.sh@131 -- # : 0
00:06:49.127 03:07:23 -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF
00:06:49.127 03:07:23 -- common/autotest_common.sh@133 -- # : 0
00:06:49.127 03:07:23 -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD
00:06:49.127 03:07:23 -- common/autotest_common.sh@135 -- # : 0
00:06:49.127 03:07:23 -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL
00:06:49.127 03:07:23 -- common/autotest_common.sh@137 -- # :
00:06:49.127 03:07:23 -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK
00:06:49.127 03:07:23 -- common/autotest_common.sh@139 -- # : true
00:06:49.127 03:07:23 -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X
00:06:49.127 03:07:23 -- common/autotest_common.sh@141 -- # : 0
00:06:49.127 03:07:23 -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5
00:06:49.127 03:07:23 -- common/autotest_common.sh@143 -- # : 0
00:06:49.127 03:07:23 -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING
00:06:49.127 03:07:23 -- common/autotest_common.sh@145 -- # : 0
00:06:49.127 03:07:23 -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT
00:06:49.127 03:07:23 -- common/autotest_common.sh@147 -- # : 0
00:06:49.127 03:07:23 -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO
00:06:49.127 03:07:23 -- common/autotest_common.sh@149 -- # : 0
00:06:49.127 03:07:23 -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER
00:06:49.127 03:07:23 -- common/autotest_common.sh@151 -- # : 0
00:06:49.127 03:07:23 -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD
00:06:49.127 03:07:23 -- common/autotest_common.sh@153 -- # : e810
00:06:49.127 03:07:23 -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS
00:06:49.127 03:07:23 -- common/autotest_common.sh@155 -- # : 0
00:06:49.127 03:07:23 -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA
00:06:49.127 03:07:23 -- common/autotest_common.sh@157 -- # : 0
00:06:49.127 03:07:23 -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS
00:06:49.127 03:07:23 -- common/autotest_common.sh@159 -- # : 0
00:06:49.127 03:07:23 -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME
00:06:49.127 03:07:23 -- common/autotest_common.sh@161 -- # : 0
00:06:49.127 03:07:23 -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA
00:06:49.127 03:07:23 -- common/autotest_common.sh@163 -- # : 0
00:06:49.127 03:07:23 -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA
00:06:49.127 03:07:23 -- common/autotest_common.sh@166 -- # :
00:06:49.127 03:07:23 -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET
00:06:49.127 03:07:23 -- common/autotest_common.sh@168 -- # : 0
00:06:49.127 03:07:23 -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS
00:06:49.127 03:07:23 -- common/autotest_common.sh@170 -- # : 0
00:06:49.127 03:07:23 -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT
00:06:49.127 03:07:23 -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib
00:06:49.127 03:07:23 -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib
00:06:49.127 03:07:23 -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib
00:06:49.127 03:07:23 -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib
00:06:49.127 03:07:23 -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib
00:06:49.127 03:07:23 -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib
00:06:49.127 03:07:23 -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib
00:06:49.128 03:07:23 -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib
00:06:49.128 03:07:23 -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes
00:06:49.128 03:07:23 -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes
00:06:49.128 03:07:23 -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python
00:06:49.128 03:07:23 -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python
00:06:49.128 03:07:23 -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1
00:06:49.128 03:07:23 -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1
00:06:49.128 03:07:23 -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
00:06:49.128 03:07:23 -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
00:06:49.128 03:07:23 -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
00:06:49.128 03:07:23 -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
00:06:49.128 03:07:23 -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file
00:06:49.128 03:07:23 -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file
00:06:49.128 03:07:23 -- common/autotest_common.sh@199 -- # cat
00:06:49.128 03:07:23 -- common/autotest_common.sh@225 -- # echo leak:libfuse3.so
00:06:49.128 03:07:23 -- common/autotest_common.sh@227 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
00:06:49.128 03:07:23 -- common/autotest_common.sh@227 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
00:06:49.128 03:07:23 -- common/autotest_common.sh@229 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
00:06:49.128 03:07:23 -- common/autotest_common.sh@229 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
00:06:49.128 03:07:23 -- common/autotest_common.sh@231 -- # '[' -z /var/spdk/dependencies ']'
00:06:49.128 03:07:23 -- common/autotest_common.sh@234 -- # export DEPENDENCY_DIR
00:06:49.128 03:07:23 -- common/autotest_common.sh@238 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin
00:06:49.128 03:07:23 -- common/autotest_common.sh@238 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin
00:06:49.128 03:07:23 -- common/autotest_common.sh@239 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples
00:06:49.128 03:07:23 -- common/autotest_common.sh@239 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples
00:06:49.128 03:07:23 -- common/autotest_common.sh@242 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:06:49.128 03:07:23 -- common/autotest_common.sh@242 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:06:49.128 03:07:23 -- common/autotest_common.sh@243 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:06:49.128 03:07:23 -- common/autotest_common.sh@243 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:06:49.128 03:07:23 -- common/autotest_common.sh@245 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer
00:06:49.128 03:07:23 -- common/autotest_common.sh@245 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer
00:06:49.128 03:07:23 -- common/autotest_common.sh@248 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:06:49.128 03:07:23 -- common/autotest_common.sh@248 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes
00:06:49.128 03:07:23 -- common/autotest_common.sh@251 -- # '[' 0 -eq 0 ']'
00:06:49.128 03:07:23 -- common/autotest_common.sh@252 -- # export valgrind=
00:06:49.128 03:07:23 -- common/autotest_common.sh@252 -- # valgrind=
00:06:49.128 03:07:23 -- common/autotest_common.sh@258 -- # uname -s
00:06:49.128 03:07:23 -- common/autotest_common.sh@258 -- # '[' Linux = Linux ']'
00:06:49.128 03:07:23 -- common/autotest_common.sh@259 -- # HUGEMEM=4096
00:06:49.128 03:07:23 -- common/autotest_common.sh@260 -- # export CLEAR_HUGE=yes
00:06:49.128 03:07:23 -- common/autotest_common.sh@260 -- # CLEAR_HUGE=yes
00:06:49.128 03:07:23 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]]
00:06:49.128 03:07:23 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]]
00:06:49.128 03:07:23 -- common/autotest_common.sh@268 -- # MAKE=make
00:06:49.128 03:07:23 -- common/autotest_common.sh@269 -- # MAKEFLAGS=-j48
00:06:49.128 03:07:23 -- common/autotest_common.sh@285 -- # export HUGEMEM=4096
00:06:49.128 03:07:23 -- common/autotest_common.sh@285 -- # HUGEMEM=4096
00:06:49.128 03:07:23 -- common/autotest_common.sh@287 -- # NO_HUGE=()
00:06:49.128 03:07:23 -- common/autotest_common.sh@288 -- # TEST_MODE=
00:06:49.128 03:07:23 -- common/autotest_common.sh@289 -- # for i in "$@"
00:06:49.128 03:07:23 -- common/autotest_common.sh@290 -- # case "$i" in
00:06:49.128 03:07:23 -- common/autotest_common.sh@295 -- # TEST_TRANSPORT=tcp
00:06:49.128 03:07:23 -- common/autotest_common.sh@307 -- # [[ -z 1393222 ]]
00:06:49.128 03:07:23 -- common/autotest_common.sh@307 -- # kill -0 1393222
00:06:49.128 03:07:23 -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648
00:06:49.128 03:07:23 -- common/autotest_common.sh@317 -- # [[ -v testdir ]]
00:06:49.128 03:07:23 -- common/autotest_common.sh@319 -- # local requested_size=2147483648
00:06:49.128 03:07:23 -- common/autotest_common.sh@320 -- # local mount target_dir
00:06:49.128 03:07:23 -- common/autotest_common.sh@322 -- # local -A mounts fss sizes avails uses
00:06:49.128 03:07:23 -- common/autotest_common.sh@323 -- # local source fs size avail mount use
00:06:49.128 03:07:23 -- common/autotest_common.sh@325 -- # local storage_fallback storage_candidates
00:06:49.128 03:07:23 -- common/autotest_common.sh@327 -- # mktemp -udt spdk.XXXXXX
00:06:49.128 03:07:23 -- common/autotest_common.sh@327 -- # storage_fallback=/tmp/spdk.x1j4u8
00:06:49.128 03:07:23 -- common/autotest_common.sh@332 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
00:06:49.128 03:07:23 -- common/autotest_common.sh@334 -- # [[ -n '' ]]
00:06:49.128 03:07:23 -- common/autotest_common.sh@339 -- # [[ -n '' ]]
00:06:49.128 03:07:23 -- common/autotest_common.sh@344 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.x1j4u8/tests/target /tmp/spdk.x1j4u8
00:06:49.128 03:07:23 -- common/autotest_common.sh@347 -- # requested_size=2214592512
00:06:49.128 03:07:23 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount
00:06:49.128 03:07:23 -- common/autotest_common.sh@316 -- # df -T
00:06:49.128 03:07:23 -- common/autotest_common.sh@316 -- # grep -v Filesystem
00:06:49.128 03:07:23 -- common/autotest_common.sh@350 -- # mounts["$mount"]=spdk_devtmpfs
00:06:49.128 03:07:23 -- common/autotest_common.sh@350 -- # fss["$mount"]=devtmpfs
00:06:49.128 03:07:23 -- common/autotest_common.sh@351 -- # avails["$mount"]=67108864
00:06:49.128 03:07:23 -- common/autotest_common.sh@351 -- # sizes["$mount"]=67108864
00:06:49.128 03:07:23 -- common/autotest_common.sh@352 -- # uses["$mount"]=0
00:06:49.128 03:07:23 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount
00:06:49.128 03:07:23 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/pmem0
00:06:49.128 03:07:23 -- common/autotest_common.sh@350 -- # fss["$mount"]=ext2
00:06:49.128 03:07:23 -- common/autotest_common.sh@351 -- # avails["$mount"]=1052192768
00:06:49.128 03:07:23 -- common/autotest_common.sh@351 -- # sizes["$mount"]=5284429824
00:06:49.128 03:07:23 -- common/autotest_common.sh@352 -- # uses["$mount"]=4232237056
00:06:49.128 03:07:23 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount
00:06:49.128 03:07:23 -- common/autotest_common.sh@350 -- # mounts["$mount"]=spdk_root
00:06:49.128 03:07:23 -- common/autotest_common.sh@350 -- # fss["$mount"]=overlay
00:06:49.128 03:07:23 -- common/autotest_common.sh@351 -- # avails["$mount"]=48165822464
00:06:49.128 03:07:23 -- common/autotest_common.sh@351 -- # sizes["$mount"]=61994708992
00:06:49.128 03:07:23 -- common/autotest_common.sh@352 -- # uses["$mount"]=13828886528
00:06:49.128 03:07:23 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount
00:06:49.128 03:07:23 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs
00:06:49.128 03:07:23 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs
00:06:49.128 03:07:23 -- common/autotest_common.sh@351 -- # avails["$mount"]=30994739200
00:06:49.128 03:07:23 -- common/autotest_common.sh@351 -- # sizes["$mount"]=30997352448
00:06:49.128 03:07:23 -- common/autotest_common.sh@352 -- # uses["$mount"]=2613248
00:06:49.128 03:07:23 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount
00:06:49.128 03:07:23 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs
00:06:49.128 03:07:23 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs
00:06:49.128 03:07:23 -- common/autotest_common.sh@351 -- # avails["$mount"]=12390178816
00:06:49.128 03:07:23 -- common/autotest_common.sh@351 -- # sizes["$mount"]=12398944256
00:06:49.128 03:07:23 -- common/autotest_common.sh@352 -- # uses["$mount"]=8765440
00:06:49.128 03:07:23 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount
00:06:49.128 03:07:23 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs
00:06:49.128 03:07:23 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs
00:06:49.128 03:07:23 -- common/autotest_common.sh@351 -- # avails["$mount"]=30996516864
00:06:49.128 03:07:23 -- common/autotest_common.sh@351 -- # sizes["$mount"]=30997356544
00:06:49.128 03:07:23 -- common/autotest_common.sh@352 -- # uses["$mount"]=839680
00:06:49.128 03:07:23 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount
00:06:49.128 03:07:23 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs
00:06:49.128 03:07:23 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs
00:06:49.128 03:07:23 -- common/autotest_common.sh@351 -- # avails["$mount"]=6199463936
00:06:49.128 03:07:23 -- common/autotest_common.sh@351 -- # sizes["$mount"]=6199468032
00:06:49.128 03:07:23 -- common/autotest_common.sh@352 -- # uses["$mount"]=4096
00:06:49.128 03:07:23 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount
00:06:49.128 03:07:23 -- common/autotest_common.sh@355 -- # printf '* Looking for test storage...\n'
00:06:49.128 * Looking for test storage...
00:06:49.128 03:07:23 -- common/autotest_common.sh@357 -- # local target_space new_size
00:06:49.128 03:07:23 -- common/autotest_common.sh@358 -- # for target_dir in "${storage_candidates[@]}"
00:06:49.128 03:07:23 -- common/autotest_common.sh@361 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:06:49.128 03:07:23 -- common/autotest_common.sh@361 -- # awk '$1 !~ /Filesystem/{print $6}'
00:06:49.128 03:07:23 -- common/autotest_common.sh@361 -- # mount=/
00:06:49.128 03:07:23 -- common/autotest_common.sh@363 -- # target_space=48165822464
00:06:49.129 03:07:23 -- common/autotest_common.sh@364 -- # (( target_space == 0 || target_space < requested_size ))
00:06:49.129 03:07:23 -- common/autotest_common.sh@367 -- # (( target_space >= requested_size ))
00:06:49.129 03:07:23 -- common/autotest_common.sh@369 -- # [[ overlay == tmpfs ]]
00:06:49.129 03:07:23 -- common/autotest_common.sh@369 -- # [[ overlay == ramfs ]]
00:06:49.129 03:07:23 -- common/autotest_common.sh@369 -- # [[ / == / ]]
00:06:49.129 03:07:23 -- common/autotest_common.sh@370 -- # new_size=16043479040
00:06:49.129 03:07:23 -- common/autotest_common.sh@371 -- # (( new_size * 100 / sizes[/] > 95 ))
00:06:49.129 03:07:23 -- common/autotest_common.sh@376 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:06:49.129 03:07:23 -- common/autotest_common.sh@376 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:06:49.129 03:07:23 -- common/autotest_common.sh@377 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:06:49.129 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:06:49.129 03:07:23 -- common/autotest_common.sh@378 -- # return 0
00:06:49.129 03:07:23 -- common/autotest_common.sh@1668 -- # set -o errtrace
00:06:49.129 03:07:23 -- common/autotest_common.sh@1669 -- # shopt -s extdebug
00:06:49.129 03:07:23 -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR
00:06:49.129 03:07:23 -- common/autotest_common.sh@1672 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ '
00:06:49.129 03:07:23 -- common/autotest_common.sh@1673 -- # true
00:06:49.129 03:07:23 -- common/autotest_common.sh@1675 -- # xtrace_fd
00:06:49.129 03:07:23 -- common/autotest_common.sh@25 -- # [[ -n 14 ]]
00:06:49.129 03:07:23 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]]
00:06:49.129 03:07:23 -- common/autotest_common.sh@27 -- # exec
00:06:49.129 03:07:23 -- common/autotest_common.sh@29 -- # exec
00:06:49.129 03:07:23 -- common/autotest_common.sh@31 -- # xtrace_restore
00:06:49.129 03:07:23 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]'
00:06:49.129 03:07:23 -- common/autotest_common.sh@17 -- # (( 0 == 0 ))
00:06:49.129 03:07:23 -- common/autotest_common.sh@18 -- # set -x
00:06:49.129 03:07:23 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:06:49.129 03:07:23 -- nvmf/common.sh@7 -- # uname -s
00:06:49.129 03:07:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:06:49.129 03:07:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:06:49.129 03:07:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:06:49.129 03:07:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:06:49.129 03:07:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:06:49.129 03:07:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:06:49.129 03:07:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:06:49.129 03:07:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:06:49.129 03:07:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:06:49.129 03:07:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:06:49.129 03:07:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:06:49.129 03:07:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:06:49.129 03:07:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:06:49.129 03:07:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:06:49.129 03:07:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:06:49.129 03:07:23 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:06:49.129 03:07:23 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:06:49.129 03:07:23 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:06:49.129 03:07:23 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:06:49.129 03:07:23 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:06:49.129 03:07:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:49.129 03:07:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:49.129 03:07:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:49.129 03:07:23 -- paths/export.sh@5 -- # export PATH
00:06:49.129 03:07:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:49.129 03:07:23 -- nvmf/common.sh@47
-- # : 0 00:06:49.129 03:07:23 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:49.129 03:07:23 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:49.129 03:07:23 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:49.129 03:07:23 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:49.129 03:07:23 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:49.129 03:07:23 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:49.129 03:07:23 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:49.129 03:07:23 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:49.129 03:07:23 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:06:49.129 03:07:23 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:06:49.129 03:07:23 -- target/filesystem.sh@15 -- # nvmftestinit 00:06:49.129 03:07:23 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:06:49.129 03:07:23 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:49.129 03:07:23 -- nvmf/common.sh@437 -- # prepare_net_devs 00:06:49.129 03:07:23 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:06:49.129 03:07:23 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:06:49.129 03:07:23 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:49.129 03:07:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:49.129 03:07:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:49.129 03:07:23 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:06:49.129 03:07:23 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:06:49.129 03:07:23 -- nvmf/common.sh@285 -- # xtrace_disable 00:06:49.129 03:07:23 -- common/autotest_common.sh@10 -- # set +x 00:06:51.668 03:07:25 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:06:51.668 03:07:25 -- nvmf/common.sh@291 -- # pci_devs=() 00:06:51.668 03:07:25 -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:51.668 03:07:25 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:51.668 03:07:25 -- nvmf/common.sh@292 -- 
# local -a pci_net_devs 00:06:51.668 03:07:25 -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:51.668 03:07:25 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:51.668 03:07:25 -- nvmf/common.sh@295 -- # net_devs=() 00:06:51.668 03:07:25 -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:51.668 03:07:25 -- nvmf/common.sh@296 -- # e810=() 00:06:51.668 03:07:25 -- nvmf/common.sh@296 -- # local -ga e810 00:06:51.668 03:07:25 -- nvmf/common.sh@297 -- # x722=() 00:06:51.668 03:07:25 -- nvmf/common.sh@297 -- # local -ga x722 00:06:51.668 03:07:25 -- nvmf/common.sh@298 -- # mlx=() 00:06:51.668 03:07:25 -- nvmf/common.sh@298 -- # local -ga mlx 00:06:51.668 03:07:25 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:51.668 03:07:25 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:51.668 03:07:25 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:51.668 03:07:25 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:51.668 03:07:25 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:51.668 03:07:25 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:51.668 03:07:25 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:51.668 03:07:25 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:51.668 03:07:25 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:51.668 03:07:25 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:51.668 03:07:25 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:51.668 03:07:25 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:51.668 03:07:25 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:51.668 03:07:25 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:51.668 03:07:25 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:51.668 03:07:25 -- nvmf/common.sh@330 -- # 
pci_devs=("${e810[@]}") 00:06:51.668 03:07:25 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:51.668 03:07:25 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:51.668 03:07:25 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:51.668 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:51.668 03:07:25 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:51.668 03:07:25 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:51.668 03:07:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:51.668 03:07:25 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:51.668 03:07:25 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:51.668 03:07:25 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:51.668 03:07:25 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:51.668 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:51.668 03:07:25 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:51.668 03:07:25 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:51.668 03:07:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:51.668 03:07:25 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:51.668 03:07:25 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:51.668 03:07:25 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:51.668 03:07:25 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:51.668 03:07:25 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:51.668 03:07:25 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:51.668 03:07:25 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:51.668 03:07:25 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:51.668 03:07:25 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:51.668 03:07:25 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:51.668 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:51.668 03:07:25 -- nvmf/common.sh@390 -- # 
net_devs+=("${pci_net_devs[@]}") 00:06:51.668 03:07:25 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:51.668 03:07:25 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:51.668 03:07:25 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:51.668 03:07:25 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:51.669 03:07:25 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:51.669 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:51.669 03:07:25 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:51.669 03:07:25 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:06:51.669 03:07:25 -- nvmf/common.sh@403 -- # is_hw=yes 00:06:51.669 03:07:25 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:06:51.669 03:07:25 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:06:51.669 03:07:25 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:06:51.669 03:07:25 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:51.669 03:07:25 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:51.669 03:07:25 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:51.669 03:07:25 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:51.669 03:07:25 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:51.669 03:07:25 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:51.669 03:07:25 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:51.669 03:07:25 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:51.669 03:07:25 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:51.669 03:07:25 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:51.669 03:07:25 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:51.669 03:07:25 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:51.669 03:07:25 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:51.669 03:07:25 -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:51.669 03:07:25 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:51.669 03:07:25 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:51.669 03:07:25 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:51.669 03:07:25 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:51.669 03:07:25 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:51.669 03:07:25 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:51.669 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:51.669 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:06:51.669 00:06:51.669 --- 10.0.0.2 ping statistics --- 00:06:51.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:51.669 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:06:51.669 03:07:25 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:51.669 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:51.669 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:06:51.669 00:06:51.669 --- 10.0.0.1 ping statistics --- 00:06:51.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:51.669 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:06:51.669 03:07:25 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:51.669 03:07:25 -- nvmf/common.sh@411 -- # return 0 00:06:51.669 03:07:25 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:06:51.669 03:07:25 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:51.669 03:07:25 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:06:51.669 03:07:25 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:06:51.669 03:07:25 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:51.669 03:07:25 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:06:51.669 03:07:25 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:06:51.669 03:07:25 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:06:51.669 03:07:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:51.669 03:07:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:51.669 03:07:25 -- common/autotest_common.sh@10 -- # set +x 00:06:51.669 ************************************ 00:06:51.669 START TEST nvmf_filesystem_no_in_capsule 00:06:51.669 ************************************ 00:06:51.669 03:07:25 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 0 00:06:51.669 03:07:25 -- target/filesystem.sh@47 -- # in_capsule=0 00:06:51.669 03:07:25 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:51.669 03:07:25 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:06:51.669 03:07:25 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:51.669 03:07:25 -- common/autotest_common.sh@10 -- # set +x 00:06:51.669 03:07:25 -- nvmf/common.sh@470 -- # nvmfpid=1394860 00:06:51.669 03:07:25 -- nvmf/common.sh@469 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:51.669 03:07:25 -- nvmf/common.sh@471 -- # waitforlisten 1394860 00:06:51.669 03:07:25 -- common/autotest_common.sh@817 -- # '[' -z 1394860 ']' 00:06:51.669 03:07:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.669 03:07:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:51.669 03:07:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.669 03:07:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:51.669 03:07:25 -- common/autotest_common.sh@10 -- # set +x 00:06:51.669 [2024-04-25 03:07:25.904700] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:06:51.669 [2024-04-25 03:07:25.904785] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:51.669 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.669 [2024-04-25 03:07:25.979809] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:51.669 [2024-04-25 03:07:26.104896] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:51.669 [2024-04-25 03:07:26.104958] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:51.669 [2024-04-25 03:07:26.104974] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:51.669 [2024-04-25 03:07:26.104988] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
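The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above comes from the `waitforlisten` helper. A minimal sketch of that polling step, assuming the real helper's details differ (the retry count and sleep interval here are illustrative, not SPDK's actual values):

```shell
# Hypothetical sketch of waitforlisten: poll until the target process has
# created its UNIX-domain RPC socket, bailing out if the process dies first.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i=0
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    while (( i++ < 100 )); do
        kill -0 "$pid" 2>/dev/null || return 1   # target exited during startup
        [ -S "$rpc_addr" ] && return 0           # RPC socket exists: target is up
        sleep 0.5
    done
    return 1                                     # timed out
}
```

Once this returns 0, the test can safely issue `rpc_cmd` calls such as `nvmf_create_transport` and `bdev_malloc_create`, as the following trace lines do.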
00:06:51.669 [2024-04-25 03:07:26.104999] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:51.669 [2024-04-25 03:07:26.105079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.669 [2024-04-25 03:07:26.105133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:51.669 [2024-04-25 03:07:26.105166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:51.669 [2024-04-25 03:07:26.105168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.928 03:07:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:51.928 03:07:26 -- common/autotest_common.sh@850 -- # return 0 00:06:51.928 03:07:26 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:06:51.928 03:07:26 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:51.928 03:07:26 -- common/autotest_common.sh@10 -- # set +x 00:06:51.928 03:07:26 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:51.928 03:07:26 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:51.928 03:07:26 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:51.928 03:07:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:51.928 03:07:26 -- common/autotest_common.sh@10 -- # set +x 00:06:51.928 [2024-04-25 03:07:26.258451] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:51.928 03:07:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:51.928 03:07:26 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:51.928 03:07:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:51.928 03:07:26 -- common/autotest_common.sh@10 -- # set +x 00:06:51.928 Malloc1 00:06:51.928 03:07:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:51.928 03:07:26 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 
-a -s SPDKISFASTANDAWESOME 00:06:51.928 03:07:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:51.928 03:07:26 -- common/autotest_common.sh@10 -- # set +x 00:06:51.928 03:07:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:51.928 03:07:26 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:51.928 03:07:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:51.928 03:07:26 -- common/autotest_common.sh@10 -- # set +x 00:06:52.188 03:07:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:52.188 03:07:26 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:52.188 03:07:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:52.188 03:07:26 -- common/autotest_common.sh@10 -- # set +x 00:06:52.188 [2024-04-25 03:07:26.436254] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:52.188 03:07:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:52.188 03:07:26 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:52.188 03:07:26 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:06:52.188 03:07:26 -- common/autotest_common.sh@1365 -- # local bdev_info 00:06:52.188 03:07:26 -- common/autotest_common.sh@1366 -- # local bs 00:06:52.188 03:07:26 -- common/autotest_common.sh@1367 -- # local nb 00:06:52.188 03:07:26 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:52.188 03:07:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:52.188 03:07:26 -- common/autotest_common.sh@10 -- # set +x 00:06:52.188 03:07:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:52.188 03:07:26 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:06:52.188 { 00:06:52.188 "name": "Malloc1", 00:06:52.188 "aliases": [ 00:06:52.188 "d2c95a61-968b-4806-9f4b-4c1ec3e80301" 00:06:52.188 ], 00:06:52.188 "product_name": 
"Malloc disk", 00:06:52.188 "block_size": 512, 00:06:52.188 "num_blocks": 1048576, 00:06:52.188 "uuid": "d2c95a61-968b-4806-9f4b-4c1ec3e80301", 00:06:52.188 "assigned_rate_limits": { 00:06:52.188 "rw_ios_per_sec": 0, 00:06:52.188 "rw_mbytes_per_sec": 0, 00:06:52.188 "r_mbytes_per_sec": 0, 00:06:52.188 "w_mbytes_per_sec": 0 00:06:52.188 }, 00:06:52.188 "claimed": true, 00:06:52.188 "claim_type": "exclusive_write", 00:06:52.188 "zoned": false, 00:06:52.188 "supported_io_types": { 00:06:52.188 "read": true, 00:06:52.188 "write": true, 00:06:52.188 "unmap": true, 00:06:52.188 "write_zeroes": true, 00:06:52.188 "flush": true, 00:06:52.188 "reset": true, 00:06:52.188 "compare": false, 00:06:52.188 "compare_and_write": false, 00:06:52.188 "abort": true, 00:06:52.188 "nvme_admin": false, 00:06:52.188 "nvme_io": false 00:06:52.188 }, 00:06:52.188 "memory_domains": [ 00:06:52.188 { 00:06:52.188 "dma_device_id": "system", 00:06:52.188 "dma_device_type": 1 00:06:52.188 }, 00:06:52.188 { 00:06:52.188 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:52.188 "dma_device_type": 2 00:06:52.188 } 00:06:52.188 ], 00:06:52.188 "driver_specific": {} 00:06:52.188 } 00:06:52.188 ]' 00:06:52.188 03:07:26 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:06:52.188 03:07:26 -- common/autotest_common.sh@1369 -- # bs=512 00:06:52.188 03:07:26 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:06:52.188 03:07:26 -- common/autotest_common.sh@1370 -- # nb=1048576 00:06:52.188 03:07:26 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:06:52.188 03:07:26 -- common/autotest_common.sh@1374 -- # echo 512 00:06:52.188 03:07:26 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:52.188 03:07:26 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:52.753 03:07:27 -- target/filesystem.sh@62 
-- # waitforserial SPDKISFASTANDAWESOME 00:06:52.753 03:07:27 -- common/autotest_common.sh@1184 -- # local i=0 00:06:52.753 03:07:27 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:06:52.753 03:07:27 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:06:52.753 03:07:27 -- common/autotest_common.sh@1191 -- # sleep 2 00:06:55.280 03:07:29 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:06:55.280 03:07:29 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:06:55.280 03:07:29 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:06:55.280 03:07:29 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:06:55.280 03:07:29 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:06:55.280 03:07:29 -- common/autotest_common.sh@1194 -- # return 0 00:06:55.280 03:07:29 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:55.280 03:07:29 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:55.280 03:07:29 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:55.280 03:07:29 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:55.280 03:07:29 -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:55.280 03:07:29 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:55.280 03:07:29 -- setup/common.sh@80 -- # echo 536870912 00:06:55.280 03:07:29 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:55.280 03:07:29 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:55.280 03:07:29 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:55.280 03:07:29 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:55.280 03:07:29 -- target/filesystem.sh@69 -- # partprobe 00:06:55.538 03:07:29 -- target/filesystem.sh@70 -- # sleep 1 00:06:56.472 03:07:30 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:06:56.472 03:07:30 -- target/filesystem.sh@77 -- # 
run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:56.472 03:07:30 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:56.472 03:07:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:56.472 03:07:30 -- common/autotest_common.sh@10 -- # set +x 00:06:56.730 ************************************ 00:06:56.730 START TEST filesystem_ext4 00:06:56.730 ************************************ 00:06:56.730 03:07:31 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:56.730 03:07:31 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:56.730 03:07:31 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:56.730 03:07:31 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:56.731 03:07:31 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:06:56.731 03:07:31 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:06:56.731 03:07:31 -- common/autotest_common.sh@914 -- # local i=0 00:06:56.731 03:07:31 -- common/autotest_common.sh@915 -- # local force 00:06:56.731 03:07:31 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:06:56.731 03:07:31 -- common/autotest_common.sh@918 -- # force=-F 00:06:56.731 03:07:31 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:56.731 mke2fs 1.46.5 (30-Dec-2021) 00:06:56.731 Discarding device blocks: 0/522240 done 00:06:56.731 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:56.731 Filesystem UUID: f39cc8e3-75be-4141-b4be-e4d484b60d93 00:06:56.731 Superblock backups stored on blocks: 00:06:56.731 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:56.731 00:06:56.731 Allocating group tables: 0/64 done 00:06:56.731 Writing inode tables: 0/64 done 00:06:56.988 Creating journal (8192 blocks): done 00:06:56.988 Writing superblocks and filesystem accounting information: 0/64 done 00:06:56.988 00:06:56.988 03:07:31 -- common/autotest_common.sh@931 -- # return 0 00:06:56.988 03:07:31 -- 
target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:56.988 03:07:31 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:57.246 03:07:31 -- target/filesystem.sh@25 -- # sync 00:06:57.246 03:07:31 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:57.246 03:07:31 -- target/filesystem.sh@27 -- # sync 00:06:57.246 03:07:31 -- target/filesystem.sh@29 -- # i=0 00:06:57.246 03:07:31 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:57.246 03:07:31 -- target/filesystem.sh@37 -- # kill -0 1394860 00:06:57.246 03:07:31 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:57.246 03:07:31 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:57.246 03:07:31 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:57.246 03:07:31 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:57.246 00:06:57.246 real 0m0.536s 00:06:57.246 user 0m0.024s 00:06:57.246 sys 0m0.052s 00:06:57.246 03:07:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:57.246 03:07:31 -- common/autotest_common.sh@10 -- # set +x 00:06:57.246 ************************************ 00:06:57.246 END TEST filesystem_ext4 00:06:57.246 ************************************ 00:06:57.246 03:07:31 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:57.246 03:07:31 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:57.246 03:07:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:57.246 03:07:31 -- common/autotest_common.sh@10 -- # set +x 00:06:57.246 ************************************ 00:06:57.246 START TEST filesystem_btrfs 00:06:57.246 ************************************ 00:06:57.246 03:07:31 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:57.246 03:07:31 -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:57.246 03:07:31 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:57.246 03:07:31 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 
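The `make_filesystem` calls traced above show the helper choosing `-F` for ext4 but `-f` for btrfs and xfs. A rough sketch of that flag-selection logic; the retry loop and its count are assumptions (the trace's `local i=0` suggests one exists, e.g. because the partition can briefly stay busy after `partprobe`), while the flag choice matches the log:

```shell
# Sketch of the make_filesystem helper's force-flag logic, as inferred from
# the trace: ext4's mkfs takes -F, btrfs/xfs take -f. Retry count is assumed.
make_filesystem() {
    local fstype=$1 dev_name=$2 i=0 force
    if [ "$fstype" = ext4 ]; then
        force=-F
    else
        force=-f
    fi
    while (( i++ < 3 )); do
        mkfs."$fstype" $force "$dev_name" && return 0
        sleep 1   # device may still be settling after partprobe
    done
    return 1
}
```

Each per-filesystem subtest then follows the same mount / touch / sync / rm / umount cycle visible in the trace, with `kill -0 $nvmfpid` confirming the target survived the I/O.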
00:06:57.246 03:07:31 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:06:57.246 03:07:31 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:06:57.246 03:07:31 -- common/autotest_common.sh@914 -- # local i=0 00:06:57.246 03:07:31 -- common/autotest_common.sh@915 -- # local force 00:06:57.246 03:07:31 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:06:57.246 03:07:31 -- common/autotest_common.sh@920 -- # force=-f 00:06:57.246 03:07:31 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:57.811 btrfs-progs v6.6.2 00:06:57.811 See https://btrfs.readthedocs.io for more information. 00:06:57.811 00:06:57.811 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:06:57.811 NOTE: several default settings have changed in version 5.15, please make sure 00:06:57.811 this does not affect your deployments: 00:06:57.811 - DUP for metadata (-m dup) 00:06:57.811 - enabled no-holes (-O no-holes) 00:06:57.811 - enabled free-space-tree (-R free-space-tree) 00:06:57.811 00:06:57.811 Label: (null) 00:06:57.811 UUID: 79ddbd9c-ea30-4838-8547-61216320963d 00:06:57.811 Node size: 16384 00:06:57.811 Sector size: 4096 00:06:57.811 Filesystem size: 510.00MiB 00:06:57.811 Block group profiles: 00:06:57.811 Data: single 8.00MiB 00:06:57.811 Metadata: DUP 32.00MiB 00:06:57.811 System: DUP 8.00MiB 00:06:57.811 SSD detected: yes 00:06:57.811 Zoned device: no 00:06:57.811 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:57.811 Runtime features: free-space-tree 00:06:57.811 Checksum: crc32c 00:06:57.811 Number of devices: 1 00:06:57.811 Devices: 00:06:57.811 ID SIZE PATH 00:06:57.811 1 510.00MiB /dev/nvme0n1p1 00:06:57.811 00:06:57.811 03:07:32 -- common/autotest_common.sh@931 -- # return 0 00:06:57.811 03:07:32 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:58.743 03:07:32 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:58.744 03:07:32 -- target/filesystem.sh@25 -- 
# sync 00:06:58.744 03:07:32 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:58.744 03:07:32 -- target/filesystem.sh@27 -- # sync 00:06:58.744 03:07:32 -- target/filesystem.sh@29 -- # i=0 00:06:58.744 03:07:32 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:58.744 03:07:33 -- target/filesystem.sh@37 -- # kill -0 1394860 00:06:58.744 03:07:33 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:58.744 03:07:33 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:58.744 03:07:33 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:58.744 03:07:33 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:58.744 00:06:58.744 real 0m1.310s 00:06:58.744 user 0m0.027s 00:06:58.744 sys 0m0.110s 00:06:58.744 03:07:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:58.744 03:07:33 -- common/autotest_common.sh@10 -- # set +x 00:06:58.744 ************************************ 00:06:58.744 END TEST filesystem_btrfs 00:06:58.744 ************************************ 00:06:58.744 03:07:33 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:06:58.744 03:07:33 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:58.744 03:07:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:58.744 03:07:33 -- common/autotest_common.sh@10 -- # set +x 00:06:58.744 ************************************ 00:06:58.744 START TEST filesystem_xfs 00:06:58.744 ************************************ 00:06:58.744 03:07:33 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:06:58.744 03:07:33 -- target/filesystem.sh@18 -- # fstype=xfs 00:06:58.744 03:07:33 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:58.744 03:07:33 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:58.744 03:07:33 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:06:58.744 03:07:33 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:06:58.744 03:07:33 -- 
common/autotest_common.sh@914 -- # local i=0 00:06:58.744 03:07:33 -- common/autotest_common.sh@915 -- # local force 00:06:58.744 03:07:33 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:06:58.744 03:07:33 -- common/autotest_common.sh@920 -- # force=-f 00:06:58.744 03:07:33 -- common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:59.004 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:59.004 = sectsz=512 attr=2, projid32bit=1 00:06:59.004 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:59.004 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:59.004 data = bsize=4096 blocks=130560, imaxpct=25 00:06:59.004 = sunit=0 swidth=0 blks 00:06:59.004 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:59.004 log =internal log bsize=4096 blocks=16384, version=2 00:06:59.004 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:59.004 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:59.937 Discarding blocks...Done. 00:06:59.937 03:07:34 -- common/autotest_common.sh@931 -- # return 0 00:06:59.937 03:07:34 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:01.835 03:07:36 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:01.835 03:07:36 -- target/filesystem.sh@25 -- # sync 00:07:01.835 03:07:36 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:01.835 03:07:36 -- target/filesystem.sh@27 -- # sync 00:07:01.835 03:07:36 -- target/filesystem.sh@29 -- # i=0 00:07:01.835 03:07:36 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:01.835 03:07:36 -- target/filesystem.sh@37 -- # kill -0 1394860 00:07:01.835 03:07:36 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:01.835 03:07:36 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:01.835 03:07:36 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:01.835 03:07:36 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:01.835 00:07:01.835 real 0m3.004s 00:07:01.835 user 0m0.025s 00:07:01.835 sys 0m0.055s 00:07:01.835 03:07:36 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:07:01.835 03:07:36 -- common/autotest_common.sh@10 -- # set +x 00:07:01.835 ************************************ 00:07:01.835 END TEST filesystem_xfs 00:07:01.835 ************************************ 00:07:01.835 03:07:36 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:01.835 03:07:36 -- target/filesystem.sh@93 -- # sync 00:07:01.835 03:07:36 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:01.835 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:01.835 03:07:36 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:01.835 03:07:36 -- common/autotest_common.sh@1205 -- # local i=0 00:07:01.835 03:07:36 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:01.835 03:07:36 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:01.835 03:07:36 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:01.835 03:07:36 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:01.835 03:07:36 -- common/autotest_common.sh@1217 -- # return 0 00:07:02.094 03:07:36 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:02.094 03:07:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:02.094 03:07:36 -- common/autotest_common.sh@10 -- # set +x 00:07:02.094 03:07:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:02.094 03:07:36 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:02.094 03:07:36 -- target/filesystem.sh@101 -- # killprocess 1394860 00:07:02.094 03:07:36 -- common/autotest_common.sh@936 -- # '[' -z 1394860 ']' 00:07:02.094 03:07:36 -- common/autotest_common.sh@940 -- # kill -0 1394860 00:07:02.094 03:07:36 -- common/autotest_common.sh@941 -- # uname 00:07:02.094 03:07:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:02.094 03:07:36 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1394860 00:07:02.094 03:07:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:02.094 03:07:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:02.094 03:07:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1394860' 00:07:02.094 killing process with pid 1394860 00:07:02.094 03:07:36 -- common/autotest_common.sh@955 -- # kill 1394860 00:07:02.094 03:07:36 -- common/autotest_common.sh@960 -- # wait 1394860 00:07:02.669 03:07:36 -- target/filesystem.sh@102 -- # nvmfpid= 00:07:02.669 00:07:02.669 real 0m11.007s 00:07:02.669 user 0m41.984s 00:07:02.669 sys 0m1.868s 00:07:02.669 03:07:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:02.669 03:07:36 -- common/autotest_common.sh@10 -- # set +x 00:07:02.669 ************************************ 00:07:02.669 END TEST nvmf_filesystem_no_in_capsule 00:07:02.669 ************************************ 00:07:02.669 03:07:36 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:02.669 03:07:36 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:02.669 03:07:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:02.669 03:07:36 -- common/autotest_common.sh@10 -- # set +x 00:07:02.669 ************************************ 00:07:02.669 START TEST nvmf_filesystem_in_capsule 00:07:02.669 ************************************ 00:07:02.669 03:07:36 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 4096 00:07:02.669 03:07:36 -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:02.669 03:07:36 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:02.669 03:07:36 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:02.669 03:07:36 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:02.669 03:07:36 -- common/autotest_common.sh@10 -- # set +x 00:07:02.669 03:07:36 -- nvmf/common.sh@470 -- # nvmfpid=1396437 00:07:02.669 
03:07:36 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:02.669 03:07:36 -- nvmf/common.sh@471 -- # waitforlisten 1396437 00:07:02.669 03:07:36 -- common/autotest_common.sh@817 -- # '[' -z 1396437 ']' 00:07:02.669 03:07:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.669 03:07:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:02.669 03:07:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.669 03:07:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:02.669 03:07:36 -- common/autotest_common.sh@10 -- # set +x 00:07:02.669 [2024-04-25 03:07:37.026376] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:07:02.669 [2024-04-25 03:07:37.026463] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:02.669 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.669 [2024-04-25 03:07:37.099476] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:02.928 [2024-04-25 03:07:37.212820] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:02.928 [2024-04-25 03:07:37.212876] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:02.928 [2024-04-25 03:07:37.212908] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:02.928 [2024-04-25 03:07:37.212920] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:07:02.928 [2024-04-25 03:07:37.212931] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:02.928 [2024-04-25 03:07:37.213000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:02.928 [2024-04-25 03:07:37.213055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:02.928 [2024-04-25 03:07:37.213057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.928 [2024-04-25 03:07:37.213026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:02.928 03:07:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:02.928 03:07:37 -- common/autotest_common.sh@850 -- # return 0 00:07:02.928 03:07:37 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:02.928 03:07:37 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:02.928 03:07:37 -- common/autotest_common.sh@10 -- # set +x 00:07:02.928 03:07:37 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:02.928 03:07:37 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:02.928 03:07:37 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:02.928 03:07:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:02.928 03:07:37 -- common/autotest_common.sh@10 -- # set +x 00:07:02.928 [2024-04-25 03:07:37.368553] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:02.928 03:07:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:02.928 03:07:37 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:02.928 03:07:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:02.928 03:07:37 -- common/autotest_common.sh@10 -- # set +x 00:07:03.187 Malloc1 00:07:03.187 03:07:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:03.187 03:07:37 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:03.187 03:07:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:03.187 03:07:37 -- common/autotest_common.sh@10 -- # set +x 00:07:03.187 03:07:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:03.187 03:07:37 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:03.187 03:07:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:03.187 03:07:37 -- common/autotest_common.sh@10 -- # set +x 00:07:03.187 03:07:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:03.187 03:07:37 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:03.187 03:07:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:03.187 03:07:37 -- common/autotest_common.sh@10 -- # set +x 00:07:03.187 [2024-04-25 03:07:37.553206] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:03.187 03:07:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:03.187 03:07:37 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:03.187 03:07:37 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:07:03.187 03:07:37 -- common/autotest_common.sh@1365 -- # local bdev_info 00:07:03.187 03:07:37 -- common/autotest_common.sh@1366 -- # local bs 00:07:03.187 03:07:37 -- common/autotest_common.sh@1367 -- # local nb 00:07:03.187 03:07:37 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:03.187 03:07:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:03.187 03:07:37 -- common/autotest_common.sh@10 -- # set +x 00:07:03.187 03:07:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:03.187 03:07:37 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:07:03.187 { 00:07:03.187 "name": "Malloc1", 00:07:03.187 "aliases": [ 00:07:03.187 "7e4684f1-e2a1-43e1-b3f6-0175b50fe601" 00:07:03.187 ], 
00:07:03.187 "product_name": "Malloc disk", 00:07:03.187 "block_size": 512, 00:07:03.187 "num_blocks": 1048576, 00:07:03.187 "uuid": "7e4684f1-e2a1-43e1-b3f6-0175b50fe601", 00:07:03.187 "assigned_rate_limits": { 00:07:03.187 "rw_ios_per_sec": 0, 00:07:03.187 "rw_mbytes_per_sec": 0, 00:07:03.187 "r_mbytes_per_sec": 0, 00:07:03.187 "w_mbytes_per_sec": 0 00:07:03.187 }, 00:07:03.187 "claimed": true, 00:07:03.187 "claim_type": "exclusive_write", 00:07:03.187 "zoned": false, 00:07:03.187 "supported_io_types": { 00:07:03.187 "read": true, 00:07:03.187 "write": true, 00:07:03.187 "unmap": true, 00:07:03.187 "write_zeroes": true, 00:07:03.187 "flush": true, 00:07:03.187 "reset": true, 00:07:03.187 "compare": false, 00:07:03.187 "compare_and_write": false, 00:07:03.187 "abort": true, 00:07:03.187 "nvme_admin": false, 00:07:03.187 "nvme_io": false 00:07:03.187 }, 00:07:03.187 "memory_domains": [ 00:07:03.187 { 00:07:03.187 "dma_device_id": "system", 00:07:03.187 "dma_device_type": 1 00:07:03.187 }, 00:07:03.187 { 00:07:03.187 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:03.187 "dma_device_type": 2 00:07:03.188 } 00:07:03.188 ], 00:07:03.188 "driver_specific": {} 00:07:03.188 } 00:07:03.188 ]' 00:07:03.188 03:07:37 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:07:03.188 03:07:37 -- common/autotest_common.sh@1369 -- # bs=512 00:07:03.188 03:07:37 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:07:03.188 03:07:37 -- common/autotest_common.sh@1370 -- # nb=1048576 00:07:03.188 03:07:37 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:07:03.188 03:07:37 -- common/autotest_common.sh@1374 -- # echo 512 00:07:03.188 03:07:37 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:03.188 03:07:37 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:04.122 
03:07:38 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:04.122 03:07:38 -- common/autotest_common.sh@1184 -- # local i=0 00:07:04.122 03:07:38 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:04.122 03:07:38 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:07:04.122 03:07:38 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:06.019 03:07:40 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:06.019 03:07:40 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:06.019 03:07:40 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:06.019 03:07:40 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:07:06.019 03:07:40 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:06.019 03:07:40 -- common/autotest_common.sh@1194 -- # return 0 00:07:06.019 03:07:40 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:06.019 03:07:40 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:06.019 03:07:40 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:06.019 03:07:40 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:06.019 03:07:40 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:06.019 03:07:40 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:06.019 03:07:40 -- setup/common.sh@80 -- # echo 536870912 00:07:06.019 03:07:40 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:06.019 03:07:40 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:06.019 03:07:40 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:06.019 03:07:40 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:06.277 03:07:40 -- target/filesystem.sh@69 -- # partprobe 00:07:06.534 03:07:40 -- target/filesystem.sh@70 -- # sleep 1 00:07:07.467 03:07:41 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:07.467 
03:07:41 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:07.467 03:07:41 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:07.467 03:07:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:07.467 03:07:41 -- common/autotest_common.sh@10 -- # set +x 00:07:07.727 ************************************ 00:07:07.727 START TEST filesystem_in_capsule_ext4 00:07:07.727 ************************************ 00:07:07.727 03:07:42 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:07.727 03:07:42 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:07.727 03:07:42 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:07.727 03:07:42 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:07.727 03:07:42 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:07:07.727 03:07:42 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:07.727 03:07:42 -- common/autotest_common.sh@914 -- # local i=0 00:07:07.727 03:07:42 -- common/autotest_common.sh@915 -- # local force 00:07:07.727 03:07:42 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:07:07.727 03:07:42 -- common/autotest_common.sh@918 -- # force=-F 00:07:07.727 03:07:42 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:07.727 mke2fs 1.46.5 (30-Dec-2021) 00:07:07.727 Discarding device blocks: 0/522240 done 00:07:07.727 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:07.727 Filesystem UUID: 6e19d7c9-e1bf-42c4-8659-f6695c4f6475 00:07:07.727 Superblock backups stored on blocks: 00:07:07.727 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:07.727 00:07:07.727 Allocating group tables: 0/64 done 00:07:07.727 Writing inode tables: 0/64 done 00:07:07.986 Creating journal (8192 blocks): done 00:07:07.986 Writing superblocks and filesystem accounting information: 0/64 done 00:07:07.986 00:07:07.986 03:07:42 -- 
common/autotest_common.sh@931 -- # return 0 00:07:07.986 03:07:42 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:08.919 03:07:43 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:08.919 03:07:43 -- target/filesystem.sh@25 -- # sync 00:07:08.919 03:07:43 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:08.919 03:07:43 -- target/filesystem.sh@27 -- # sync 00:07:08.919 03:07:43 -- target/filesystem.sh@29 -- # i=0 00:07:08.919 03:07:43 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:08.919 03:07:43 -- target/filesystem.sh@37 -- # kill -0 1396437 00:07:08.919 03:07:43 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:08.919 03:07:43 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:08.919 03:07:43 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:08.919 03:07:43 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:08.919 00:07:08.919 real 0m1.269s 00:07:08.919 user 0m0.018s 00:07:08.919 sys 0m0.059s 00:07:08.919 03:07:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:08.919 03:07:43 -- common/autotest_common.sh@10 -- # set +x 00:07:08.919 ************************************ 00:07:08.919 END TEST filesystem_in_capsule_ext4 00:07:08.919 ************************************ 00:07:08.919 03:07:43 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:08.919 03:07:43 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:08.919 03:07:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:08.919 03:07:43 -- common/autotest_common.sh@10 -- # set +x 00:07:08.919 ************************************ 00:07:08.919 START TEST filesystem_in_capsule_btrfs 00:07:08.919 ************************************ 00:07:08.919 03:07:43 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:08.919 03:07:43 -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:08.919 03:07:43 -- target/filesystem.sh@19 -- # 
nvme_name=nvme0n1 00:07:08.919 03:07:43 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:08.919 03:07:43 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:07:08.919 03:07:43 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:08.919 03:07:43 -- common/autotest_common.sh@914 -- # local i=0 00:07:08.919 03:07:43 -- common/autotest_common.sh@915 -- # local force 00:07:08.919 03:07:43 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:07:08.919 03:07:43 -- common/autotest_common.sh@920 -- # force=-f 00:07:08.919 03:07:43 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:09.176 btrfs-progs v6.6.2 00:07:09.176 See https://btrfs.readthedocs.io for more information. 00:07:09.176 00:07:09.177 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:07:09.177 NOTE: several default settings have changed in version 5.15, please make sure 00:07:09.177 this does not affect your deployments: 00:07:09.177 - DUP for metadata (-m dup) 00:07:09.177 - enabled no-holes (-O no-holes) 00:07:09.177 - enabled free-space-tree (-R free-space-tree) 00:07:09.177 00:07:09.177 Label: (null) 00:07:09.177 UUID: 8632c933-83ee-4e6e-8aa7-6b0f88982766 00:07:09.177 Node size: 16384 00:07:09.177 Sector size: 4096 00:07:09.177 Filesystem size: 510.00MiB 00:07:09.177 Block group profiles: 00:07:09.177 Data: single 8.00MiB 00:07:09.177 Metadata: DUP 32.00MiB 00:07:09.177 System: DUP 8.00MiB 00:07:09.177 SSD detected: yes 00:07:09.177 Zoned device: no 00:07:09.177 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:09.177 Runtime features: free-space-tree 00:07:09.177 Checksum: crc32c 00:07:09.177 Number of devices: 1 00:07:09.177 Devices: 00:07:09.177 ID SIZE PATH 00:07:09.177 1 510.00MiB /dev/nvme0n1p1 00:07:09.177 00:07:09.177 03:07:43 -- common/autotest_common.sh@931 -- # return 0 00:07:09.177 03:07:43 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:10.110 
03:07:44 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:10.110 03:07:44 -- target/filesystem.sh@25 -- # sync 00:07:10.110 03:07:44 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:10.110 03:07:44 -- target/filesystem.sh@27 -- # sync 00:07:10.110 03:07:44 -- target/filesystem.sh@29 -- # i=0 00:07:10.110 03:07:44 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:10.110 03:07:44 -- target/filesystem.sh@37 -- # kill -0 1396437 00:07:10.110 03:07:44 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:10.110 03:07:44 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:10.110 03:07:44 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:10.110 03:07:44 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:10.110 00:07:10.110 real 0m1.090s 00:07:10.110 user 0m0.024s 00:07:10.110 sys 0m0.107s 00:07:10.110 03:07:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:10.110 03:07:44 -- common/autotest_common.sh@10 -- # set +x 00:07:10.110 ************************************ 00:07:10.110 END TEST filesystem_in_capsule_btrfs 00:07:10.110 ************************************ 00:07:10.110 03:07:44 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:10.110 03:07:44 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:10.110 03:07:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:10.110 03:07:44 -- common/autotest_common.sh@10 -- # set +x 00:07:10.110 ************************************ 00:07:10.110 START TEST filesystem_in_capsule_xfs 00:07:10.110 ************************************ 00:07:10.110 03:07:44 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:07:10.110 03:07:44 -- target/filesystem.sh@18 -- # fstype=xfs 00:07:10.110 03:07:44 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:10.110 03:07:44 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:10.110 03:07:44 -- common/autotest_common.sh@912 
-- # local fstype=xfs 00:07:10.110 03:07:44 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:10.110 03:07:44 -- common/autotest_common.sh@914 -- # local i=0 00:07:10.110 03:07:44 -- common/autotest_common.sh@915 -- # local force 00:07:10.110 03:07:44 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:07:10.110 03:07:44 -- common/autotest_common.sh@920 -- # force=-f 00:07:10.110 03:07:44 -- common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:10.368 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:10.368 = sectsz=512 attr=2, projid32bit=1 00:07:10.368 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:10.368 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:10.368 data = bsize=4096 blocks=130560, imaxpct=25 00:07:10.368 = sunit=0 swidth=0 blks 00:07:10.368 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:10.368 log =internal log bsize=4096 blocks=16384, version=2 00:07:10.368 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:10.368 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:11.300 Discarding blocks...Done. 
00:07:11.300 03:07:45 -- common/autotest_common.sh@931 -- # return 0 00:07:11.300 03:07:45 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:13.872 03:07:48 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:13.872 03:07:48 -- target/filesystem.sh@25 -- # sync 00:07:13.872 03:07:48 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:13.872 03:07:48 -- target/filesystem.sh@27 -- # sync 00:07:13.872 03:07:48 -- target/filesystem.sh@29 -- # i=0 00:07:13.872 03:07:48 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:13.872 03:07:48 -- target/filesystem.sh@37 -- # kill -0 1396437 00:07:13.872 03:07:48 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:13.872 03:07:48 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:13.872 03:07:48 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:13.872 03:07:48 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:13.872 00:07:13.872 real 0m3.578s 00:07:13.872 user 0m0.023s 00:07:13.872 sys 0m0.057s 00:07:13.872 03:07:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:13.872 03:07:48 -- common/autotest_common.sh@10 -- # set +x 00:07:13.872 ************************************ 00:07:13.872 END TEST filesystem_in_capsule_xfs 00:07:13.872 ************************************ 00:07:13.872 03:07:48 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:14.130 03:07:48 -- target/filesystem.sh@93 -- # sync 00:07:14.130 03:07:48 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:14.130 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:14.130 03:07:48 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:14.130 03:07:48 -- common/autotest_common.sh@1205 -- # local i=0 00:07:14.130 03:07:48 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:14.130 03:07:48 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:14.130 03:07:48 
-- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:14.130 03:07:48 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:14.130 03:07:48 -- common/autotest_common.sh@1217 -- # return 0 00:07:14.130 03:07:48 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:14.130 03:07:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.130 03:07:48 -- common/autotest_common.sh@10 -- # set +x 00:07:14.130 03:07:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:14.130 03:07:48 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:14.130 03:07:48 -- target/filesystem.sh@101 -- # killprocess 1396437 00:07:14.130 03:07:48 -- common/autotest_common.sh@936 -- # '[' -z 1396437 ']' 00:07:14.130 03:07:48 -- common/autotest_common.sh@940 -- # kill -0 1396437 00:07:14.130 03:07:48 -- common/autotest_common.sh@941 -- # uname 00:07:14.130 03:07:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:14.130 03:07:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1396437 00:07:14.130 03:07:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:14.130 03:07:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:14.130 03:07:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1396437' 00:07:14.130 killing process with pid 1396437 00:07:14.130 03:07:48 -- common/autotest_common.sh@955 -- # kill 1396437 00:07:14.130 03:07:48 -- common/autotest_common.sh@960 -- # wait 1396437 00:07:14.697 03:07:49 -- target/filesystem.sh@102 -- # nvmfpid= 00:07:14.697 00:07:14.697 real 0m12.132s 00:07:14.697 user 0m46.509s 00:07:14.697 sys 0m1.894s 00:07:14.697 03:07:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:14.697 03:07:49 -- common/autotest_common.sh@10 -- # set +x 00:07:14.697 ************************************ 00:07:14.697 END TEST nvmf_filesystem_in_capsule 00:07:14.697 
************************************ 00:07:14.697 03:07:49 -- target/filesystem.sh@108 -- # nvmftestfini 00:07:14.697 03:07:49 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:14.697 03:07:49 -- nvmf/common.sh@117 -- # sync 00:07:14.697 03:07:49 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:14.697 03:07:49 -- nvmf/common.sh@120 -- # set +e 00:07:14.697 03:07:49 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:14.697 03:07:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:14.697 rmmod nvme_tcp 00:07:14.697 rmmod nvme_fabrics 00:07:14.697 rmmod nvme_keyring 00:07:14.697 03:07:49 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:14.697 03:07:49 -- nvmf/common.sh@124 -- # set -e 00:07:14.697 03:07:49 -- nvmf/common.sh@125 -- # return 0 00:07:14.697 03:07:49 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:07:14.697 03:07:49 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:14.697 03:07:49 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:07:14.697 03:07:49 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:07:14.697 03:07:49 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:14.697 03:07:49 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:14.697 03:07:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:14.697 03:07:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:14.697 03:07:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:17.235 03:07:51 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:17.235 00:07:17.235 real 0m27.849s 00:07:17.235 user 1m29.459s 00:07:17.235 sys 0m5.494s 00:07:17.235 03:07:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:17.235 03:07:51 -- common/autotest_common.sh@10 -- # set +x 00:07:17.235 ************************************ 00:07:17.235 END TEST nvmf_filesystem 00:07:17.235 ************************************ 00:07:17.235 03:07:51 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:17.235 03:07:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:17.235 03:07:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:17.235 03:07:51 -- common/autotest_common.sh@10 -- # set +x 00:07:17.235 ************************************ 00:07:17.235 START TEST nvmf_discovery 00:07:17.235 ************************************ 00:07:17.235 03:07:51 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:17.235 * Looking for test storage... 00:07:17.235 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:17.235 03:07:51 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:17.235 03:07:51 -- nvmf/common.sh@7 -- # uname -s 00:07:17.235 03:07:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:17.235 03:07:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:17.235 03:07:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:17.235 03:07:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:17.235 03:07:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:17.235 03:07:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:17.235 03:07:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:17.235 03:07:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:17.235 03:07:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:17.235 03:07:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:17.235 03:07:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:17.235 03:07:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:17.235 03:07:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:07:17.235 03:07:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:17.235 03:07:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:17.235 03:07:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:17.235 03:07:51 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:17.235 03:07:51 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:17.235 03:07:51 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:17.235 03:07:51 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:17.235 03:07:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.235 03:07:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.235 03:07:51 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.235 03:07:51 -- paths/export.sh@5 -- # export PATH 00:07:17.235 03:07:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.235 03:07:51 -- nvmf/common.sh@47 -- # : 0 00:07:17.235 03:07:51 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:17.235 03:07:51 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:17.235 03:07:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:17.235 03:07:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:17.235 03:07:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:17.235 03:07:51 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:17.235 03:07:51 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:17.235 03:07:51 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:17.235 03:07:51 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:17.235 03:07:51 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:17.235 03:07:51 -- 
target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:17.235 03:07:51 -- target/discovery.sh@15 -- # hash nvme 00:07:17.235 03:07:51 -- target/discovery.sh@20 -- # nvmftestinit 00:07:17.235 03:07:51 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:07:17.235 03:07:51 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:17.235 03:07:51 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:17.235 03:07:51 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:17.235 03:07:51 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:17.235 03:07:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:17.235 03:07:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:17.235 03:07:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:17.235 03:07:51 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:17.235 03:07:51 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:17.235 03:07:51 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:17.235 03:07:51 -- common/autotest_common.sh@10 -- # set +x 00:07:19.148 03:07:53 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:19.148 03:07:53 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:19.148 03:07:53 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:19.148 03:07:53 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:19.149 03:07:53 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:19.149 03:07:53 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:19.149 03:07:53 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:19.149 03:07:53 -- nvmf/common.sh@295 -- # net_devs=() 00:07:19.149 03:07:53 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:19.149 03:07:53 -- nvmf/common.sh@296 -- # e810=() 00:07:19.149 03:07:53 -- nvmf/common.sh@296 -- # local -ga e810 00:07:19.149 03:07:53 -- nvmf/common.sh@297 -- # x722=() 00:07:19.149 03:07:53 -- nvmf/common.sh@297 -- # local -ga x722 00:07:19.149 03:07:53 -- nvmf/common.sh@298 -- # mlx=() 00:07:19.149 03:07:53 
-- nvmf/common.sh@298 -- # local -ga mlx 00:07:19.149 03:07:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:19.149 03:07:53 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:19.149 03:07:53 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:19.149 03:07:53 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:19.149 03:07:53 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:19.149 03:07:53 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:19.149 03:07:53 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:19.149 03:07:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:19.149 03:07:53 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:19.149 03:07:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:19.149 03:07:53 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:19.149 03:07:53 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:19.149 03:07:53 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:19.149 03:07:53 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:19.149 03:07:53 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:19.149 03:07:53 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:19.149 03:07:53 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:19.149 03:07:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:19.149 03:07:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:19.149 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:19.149 03:07:53 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:19.149 03:07:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:19.149 03:07:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:19.149 03:07:53 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:19.149 
03:07:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:19.149 03:07:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:19.149 03:07:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:19.149 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:19.149 03:07:53 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:19.149 03:07:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:19.149 03:07:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:19.149 03:07:53 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:19.149 03:07:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:19.149 03:07:53 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:19.149 03:07:53 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:19.149 03:07:53 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:19.149 03:07:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:19.149 03:07:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:19.149 03:07:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:19.149 03:07:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:19.149 03:07:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:19.149 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:19.149 03:07:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:19.149 03:07:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:19.149 03:07:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:19.149 03:07:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:19.149 03:07:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:19.149 03:07:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:19.149 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:19.149 03:07:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:19.149 03:07:53 -- 
nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:19.149 03:07:53 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:19.149 03:07:53 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:19.149 03:07:53 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:07:19.149 03:07:53 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:07:19.149 03:07:53 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:19.149 03:07:53 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:19.149 03:07:53 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:19.149 03:07:53 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:19.149 03:07:53 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:19.149 03:07:53 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:19.149 03:07:53 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:19.149 03:07:53 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:19.149 03:07:53 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:19.149 03:07:53 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:19.149 03:07:53 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:19.149 03:07:53 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:19.149 03:07:53 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:19.149 03:07:53 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:19.149 03:07:53 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:19.149 03:07:53 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:19.149 03:07:53 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:19.149 03:07:53 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:19.149 03:07:53 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:19.149 03:07:53 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 
00:07:19.149 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:19.149 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:07:19.149 00:07:19.149 --- 10.0.0.2 ping statistics --- 00:07:19.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:19.149 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:07:19.149 03:07:53 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:19.149 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:19.149 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:07:19.149 00:07:19.149 --- 10.0.0.1 ping statistics --- 00:07:19.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:19.149 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:07:19.149 03:07:53 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:19.149 03:07:53 -- nvmf/common.sh@411 -- # return 0 00:07:19.149 03:07:53 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:19.149 03:07:53 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:19.149 03:07:53 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:07:19.149 03:07:53 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:07:19.149 03:07:53 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:19.149 03:07:53 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:07:19.149 03:07:53 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:07:19.149 03:07:53 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:19.149 03:07:53 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:19.149 03:07:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:19.149 03:07:53 -- common/autotest_common.sh@10 -- # set +x 00:07:19.149 03:07:53 -- nvmf/common.sh@470 -- # nvmfpid=1399941 00:07:19.149 03:07:53 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:19.149 03:07:53 -- nvmf/common.sh@471 -- # waitforlisten 
1399941 00:07:19.149 03:07:53 -- common/autotest_common.sh@817 -- # '[' -z 1399941 ']' 00:07:19.149 03:07:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.149 03:07:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:19.149 03:07:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:19.149 03:07:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:19.149 03:07:53 -- common/autotest_common.sh@10 -- # set +x 00:07:19.149 [2024-04-25 03:07:53.590239] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:07:19.149 [2024-04-25 03:07:53.590318] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:19.149 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.506 [2024-04-25 03:07:53.662625] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:19.506 [2024-04-25 03:07:53.783551] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:19.506 [2024-04-25 03:07:53.783608] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:19.506 [2024-04-25 03:07:53.783625] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:19.506 [2024-04-25 03:07:53.783649] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:19.506 [2024-04-25 03:07:53.783662] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:19.506 [2024-04-25 03:07:53.783721] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:19.506 [2024-04-25 03:07:53.783779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:19.506 [2024-04-25 03:07:53.783812] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:19.506 [2024-04-25 03:07:53.783814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.506 03:07:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:19.506 03:07:53 -- common/autotest_common.sh@850 -- # return 0 00:07:19.506 03:07:53 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:19.506 03:07:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:19.506 03:07:53 -- common/autotest_common.sh@10 -- # set +x 00:07:19.506 03:07:53 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:19.506 03:07:53 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:19.506 03:07:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:19.506 03:07:53 -- common/autotest_common.sh@10 -- # set +x 00:07:19.506 [2024-04-25 03:07:53.943398] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:19.506 03:07:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:19.506 03:07:53 -- target/discovery.sh@26 -- # seq 1 4 00:07:19.506 03:07:53 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:19.506 03:07:53 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:07:19.506 03:07:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:19.506 03:07:53 -- common/autotest_common.sh@10 -- # set +x 00:07:19.506 Null1 00:07:19.506 03:07:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:19.506 03:07:53 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:19.506 03:07:53 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:07:19.506 03:07:53 -- common/autotest_common.sh@10 -- # set +x 00:07:19.506 03:07:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:19.506 03:07:53 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:19.506 03:07:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:19.506 03:07:53 -- common/autotest_common.sh@10 -- # set +x 00:07:19.506 03:07:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:19.506 03:07:53 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:19.506 03:07:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:19.506 03:07:53 -- common/autotest_common.sh@10 -- # set +x 00:07:19.506 [2024-04-25 03:07:53.983714] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:19.506 03:07:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:19.506 03:07:53 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:19.506 03:07:53 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:19.506 03:07:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:19.506 03:07:53 -- common/autotest_common.sh@10 -- # set +x 00:07:19.506 Null2 00:07:19.506 03:07:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:19.506 03:07:53 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:19.506 03:07:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:19.506 03:07:53 -- common/autotest_common.sh@10 -- # set +x 00:07:19.506 03:07:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:19.506 03:07:54 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:19.506 03:07:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:19.506 03:07:54 -- common/autotest_common.sh@10 -- # set +x 00:07:19.763 
03:07:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:19.763 03:07:54 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:19.763 03:07:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:19.763 03:07:54 -- common/autotest_common.sh@10 -- # set +x 00:07:19.763 03:07:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:19.763 03:07:54 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:19.763 03:07:54 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:19.763 03:07:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:19.763 03:07:54 -- common/autotest_common.sh@10 -- # set +x 00:07:19.763 Null3 00:07:19.763 03:07:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:19.763 03:07:54 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:19.763 03:07:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:19.763 03:07:54 -- common/autotest_common.sh@10 -- # set +x 00:07:19.763 03:07:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:19.763 03:07:54 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:19.763 03:07:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:19.763 03:07:54 -- common/autotest_common.sh@10 -- # set +x 00:07:19.763 03:07:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:19.763 03:07:54 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:19.763 03:07:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:19.763 03:07:54 -- common/autotest_common.sh@10 -- # set +x 00:07:19.763 03:07:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:19.763 03:07:54 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:19.763 03:07:54 -- target/discovery.sh@27 -- # rpc_cmd 
bdev_null_create Null4 102400 512 00:07:19.763 03:07:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:19.763 03:07:54 -- common/autotest_common.sh@10 -- # set +x 00:07:19.763 Null4 00:07:19.763 03:07:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:19.763 03:07:54 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:19.763 03:07:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:19.763 03:07:54 -- common/autotest_common.sh@10 -- # set +x 00:07:19.763 03:07:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:19.763 03:07:54 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:19.763 03:07:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:19.763 03:07:54 -- common/autotest_common.sh@10 -- # set +x 00:07:19.763 03:07:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:19.763 03:07:54 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:07:19.763 03:07:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:19.763 03:07:54 -- common/autotest_common.sh@10 -- # set +x 00:07:19.763 03:07:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:19.763 03:07:54 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:19.763 03:07:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:19.763 03:07:54 -- common/autotest_common.sh@10 -- # set +x 00:07:19.763 03:07:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:19.763 03:07:54 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:07:19.763 03:07:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:19.763 03:07:54 -- common/autotest_common.sh@10 -- # set +x 00:07:19.763 03:07:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:19.763 03:07:54 -- 
target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:07:20.021 00:07:20.021 Discovery Log Number of Records 6, Generation counter 6 00:07:20.021 =====Discovery Log Entry 0====== 00:07:20.021 trtype: tcp 00:07:20.021 adrfam: ipv4 00:07:20.021 subtype: current discovery subsystem 00:07:20.021 treq: not required 00:07:20.021 portid: 0 00:07:20.021 trsvcid: 4420 00:07:20.021 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:20.021 traddr: 10.0.0.2 00:07:20.021 eflags: explicit discovery connections, duplicate discovery information 00:07:20.021 sectype: none 00:07:20.021 =====Discovery Log Entry 1====== 00:07:20.021 trtype: tcp 00:07:20.021 adrfam: ipv4 00:07:20.021 subtype: nvme subsystem 00:07:20.021 treq: not required 00:07:20.021 portid: 0 00:07:20.021 trsvcid: 4420 00:07:20.021 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:20.021 traddr: 10.0.0.2 00:07:20.021 eflags: none 00:07:20.021 sectype: none 00:07:20.021 =====Discovery Log Entry 2====== 00:07:20.021 trtype: tcp 00:07:20.021 adrfam: ipv4 00:07:20.021 subtype: nvme subsystem 00:07:20.021 treq: not required 00:07:20.021 portid: 0 00:07:20.021 trsvcid: 4420 00:07:20.021 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:20.021 traddr: 10.0.0.2 00:07:20.021 eflags: none 00:07:20.021 sectype: none 00:07:20.021 =====Discovery Log Entry 3====== 00:07:20.021 trtype: tcp 00:07:20.021 adrfam: ipv4 00:07:20.021 subtype: nvme subsystem 00:07:20.021 treq: not required 00:07:20.021 portid: 0 00:07:20.021 trsvcid: 4420 00:07:20.021 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:20.021 traddr: 10.0.0.2 00:07:20.021 eflags: none 00:07:20.021 sectype: none 00:07:20.021 =====Discovery Log Entry 4====== 00:07:20.021 trtype: tcp 00:07:20.021 adrfam: ipv4 00:07:20.021 subtype: nvme subsystem 00:07:20.021 treq: not required 00:07:20.021 portid: 0 00:07:20.021 trsvcid: 4420 00:07:20.021 subnqn: 
nqn.2016-06.io.spdk:cnode4 00:07:20.021 traddr: 10.0.0.2 00:07:20.021 eflags: none 00:07:20.021 sectype: none 00:07:20.021 =====Discovery Log Entry 5====== 00:07:20.021 trtype: tcp 00:07:20.021 adrfam: ipv4 00:07:20.021 subtype: discovery subsystem referral 00:07:20.021 treq: not required 00:07:20.021 portid: 0 00:07:20.021 trsvcid: 4430 00:07:20.021 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:20.021 traddr: 10.0.0.2 00:07:20.021 eflags: none 00:07:20.021 sectype: none 00:07:20.021 03:07:54 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:20.021 Perform nvmf subsystem discovery via RPC 00:07:20.021 03:07:54 -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:20.021 03:07:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:20.021 03:07:54 -- common/autotest_common.sh@10 -- # set +x 00:07:20.021 [2024-04-25 03:07:54.320636] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:07:20.021 [ 00:07:20.021 { 00:07:20.021 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:20.021 "subtype": "Discovery", 00:07:20.021 "listen_addresses": [ 00:07:20.021 { 00:07:20.021 "transport": "TCP", 00:07:20.021 "trtype": "TCP", 00:07:20.021 "adrfam": "IPv4", 00:07:20.021 "traddr": "10.0.0.2", 00:07:20.021 "trsvcid": "4420" 00:07:20.021 } 00:07:20.021 ], 00:07:20.021 "allow_any_host": true, 00:07:20.021 "hosts": [] 00:07:20.021 }, 00:07:20.021 { 00:07:20.021 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:20.021 "subtype": "NVMe", 00:07:20.021 "listen_addresses": [ 00:07:20.021 { 00:07:20.021 "transport": "TCP", 00:07:20.021 "trtype": "TCP", 00:07:20.021 "adrfam": "IPv4", 00:07:20.021 "traddr": "10.0.0.2", 00:07:20.021 "trsvcid": "4420" 00:07:20.021 } 00:07:20.021 ], 00:07:20.021 "allow_any_host": true, 00:07:20.021 "hosts": [], 00:07:20.021 "serial_number": "SPDK00000000000001", 00:07:20.021 "model_number": 
"SPDK bdev Controller", 00:07:20.021 "max_namespaces": 32, 00:07:20.021 "min_cntlid": 1, 00:07:20.021 "max_cntlid": 65519, 00:07:20.021 "namespaces": [ 00:07:20.021 { 00:07:20.021 "nsid": 1, 00:07:20.021 "bdev_name": "Null1", 00:07:20.021 "name": "Null1", 00:07:20.021 "nguid": "519F7CBA5DCA489286A1A707D690F819", 00:07:20.021 "uuid": "519f7cba-5dca-4892-86a1-a707d690f819" 00:07:20.021 } 00:07:20.021 ] 00:07:20.021 }, 00:07:20.021 { 00:07:20.021 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:20.021 "subtype": "NVMe", 00:07:20.021 "listen_addresses": [ 00:07:20.021 { 00:07:20.021 "transport": "TCP", 00:07:20.021 "trtype": "TCP", 00:07:20.021 "adrfam": "IPv4", 00:07:20.021 "traddr": "10.0.0.2", 00:07:20.021 "trsvcid": "4420" 00:07:20.021 } 00:07:20.021 ], 00:07:20.021 "allow_any_host": true, 00:07:20.021 "hosts": [], 00:07:20.021 "serial_number": "SPDK00000000000002", 00:07:20.021 "model_number": "SPDK bdev Controller", 00:07:20.022 "max_namespaces": 32, 00:07:20.022 "min_cntlid": 1, 00:07:20.022 "max_cntlid": 65519, 00:07:20.022 "namespaces": [ 00:07:20.022 { 00:07:20.022 "nsid": 1, 00:07:20.022 "bdev_name": "Null2", 00:07:20.022 "name": "Null2", 00:07:20.022 "nguid": "C8BBAE3F8E9A4C39AA15D61AF7B5CD35", 00:07:20.022 "uuid": "c8bbae3f-8e9a-4c39-aa15-d61af7b5cd35" 00:07:20.022 } 00:07:20.022 ] 00:07:20.022 }, 00:07:20.022 { 00:07:20.022 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:20.022 "subtype": "NVMe", 00:07:20.022 "listen_addresses": [ 00:07:20.022 { 00:07:20.022 "transport": "TCP", 00:07:20.022 "trtype": "TCP", 00:07:20.022 "adrfam": "IPv4", 00:07:20.022 "traddr": "10.0.0.2", 00:07:20.022 "trsvcid": "4420" 00:07:20.022 } 00:07:20.022 ], 00:07:20.022 "allow_any_host": true, 00:07:20.022 "hosts": [], 00:07:20.022 "serial_number": "SPDK00000000000003", 00:07:20.022 "model_number": "SPDK bdev Controller", 00:07:20.022 "max_namespaces": 32, 00:07:20.022 "min_cntlid": 1, 00:07:20.022 "max_cntlid": 65519, 00:07:20.022 "namespaces": [ 00:07:20.022 { 00:07:20.022 "nsid": 1, 
00:07:20.022 "bdev_name": "Null3", 00:07:20.022 "name": "Null3", 00:07:20.022 "nguid": "7CB6285DE2E6419C91BFA31F453857DE", 00:07:20.022 "uuid": "7cb6285d-e2e6-419c-91bf-a31f453857de" 00:07:20.022 } 00:07:20.022 ] 00:07:20.022 }, 00:07:20.022 { 00:07:20.022 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:20.022 "subtype": "NVMe", 00:07:20.022 "listen_addresses": [ 00:07:20.022 { 00:07:20.022 "transport": "TCP", 00:07:20.022 "trtype": "TCP", 00:07:20.022 "adrfam": "IPv4", 00:07:20.022 "traddr": "10.0.0.2", 00:07:20.022 "trsvcid": "4420" 00:07:20.022 } 00:07:20.022 ], 00:07:20.022 "allow_any_host": true, 00:07:20.022 "hosts": [], 00:07:20.022 "serial_number": "SPDK00000000000004", 00:07:20.022 "model_number": "SPDK bdev Controller", 00:07:20.022 "max_namespaces": 32, 00:07:20.022 "min_cntlid": 1, 00:07:20.022 "max_cntlid": 65519, 00:07:20.022 "namespaces": [ 00:07:20.022 { 00:07:20.022 "nsid": 1, 00:07:20.022 "bdev_name": "Null4", 00:07:20.022 "name": "Null4", 00:07:20.022 "nguid": "ACDE7349B5264C259A81EABFA258CC41", 00:07:20.022 "uuid": "acde7349-b526-4c25-9a81-eabfa258cc41" 00:07:20.022 } 00:07:20.022 ] 00:07:20.022 } 00:07:20.022 ] 00:07:20.022 03:07:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:20.022 03:07:54 -- target/discovery.sh@42 -- # seq 1 4 00:07:20.022 03:07:54 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:20.022 03:07:54 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:20.022 03:07:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:20.022 03:07:54 -- common/autotest_common.sh@10 -- # set +x 00:07:20.022 03:07:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:20.022 03:07:54 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:20.022 03:07:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:20.022 03:07:54 -- common/autotest_common.sh@10 -- # set +x 00:07:20.022 03:07:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:20.022 03:07:54 -- 
target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:20.022 03:07:54 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:20.022 03:07:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:20.022 03:07:54 -- common/autotest_common.sh@10 -- # set +x 00:07:20.022 03:07:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:20.022 03:07:54 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:20.022 03:07:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:20.022 03:07:54 -- common/autotest_common.sh@10 -- # set +x 00:07:20.022 03:07:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:20.022 03:07:54 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:20.022 03:07:54 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:20.022 03:07:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:20.022 03:07:54 -- common/autotest_common.sh@10 -- # set +x 00:07:20.022 03:07:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:20.022 03:07:54 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:20.022 03:07:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:20.022 03:07:54 -- common/autotest_common.sh@10 -- # set +x 00:07:20.022 03:07:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:20.022 03:07:54 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:20.022 03:07:54 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:20.022 03:07:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:20.022 03:07:54 -- common/autotest_common.sh@10 -- # set +x 00:07:20.022 03:07:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:20.022 03:07:54 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:20.022 03:07:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:20.022 03:07:54 -- common/autotest_common.sh@10 -- # set +x 00:07:20.022 
03:07:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:20.022 03:07:54 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:07:20.022 03:07:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:20.022 03:07:54 -- common/autotest_common.sh@10 -- # set +x 00:07:20.022 03:07:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:20.022 03:07:54 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:20.022 03:07:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:20.022 03:07:54 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:20.022 03:07:54 -- common/autotest_common.sh@10 -- # set +x 00:07:20.022 03:07:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:20.022 03:07:54 -- target/discovery.sh@49 -- # check_bdevs= 00:07:20.022 03:07:54 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:20.022 03:07:54 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:20.022 03:07:54 -- target/discovery.sh@57 -- # nvmftestfini 00:07:20.022 03:07:54 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:20.022 03:07:54 -- nvmf/common.sh@117 -- # sync 00:07:20.022 03:07:54 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:20.022 03:07:54 -- nvmf/common.sh@120 -- # set +e 00:07:20.022 03:07:54 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:20.022 03:07:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:20.022 rmmod nvme_tcp 00:07:20.022 rmmod nvme_fabrics 00:07:20.022 rmmod nvme_keyring 00:07:20.022 03:07:54 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:20.022 03:07:54 -- nvmf/common.sh@124 -- # set -e 00:07:20.022 03:07:54 -- nvmf/common.sh@125 -- # return 0 00:07:20.022 03:07:54 -- nvmf/common.sh@478 -- # '[' -n 1399941 ']' 00:07:20.022 03:07:54 -- nvmf/common.sh@479 -- # killprocess 1399941 00:07:20.022 03:07:54 -- common/autotest_common.sh@936 -- # '[' -z 1399941 ']' 00:07:20.022 03:07:54 -- common/autotest_common.sh@940 -- # kill -0 1399941 00:07:20.022 
03:07:54 -- common/autotest_common.sh@941 -- # uname 00:07:20.022 03:07:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:20.022 03:07:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1399941 00:07:20.280 03:07:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:20.280 03:07:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:20.280 03:07:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1399941' 00:07:20.280 killing process with pid 1399941 00:07:20.280 03:07:54 -- common/autotest_common.sh@955 -- # kill 1399941 00:07:20.280 [2024-04-25 03:07:54.530937] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:07:20.280 03:07:54 -- common/autotest_common.sh@960 -- # wait 1399941 00:07:20.538 03:07:54 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:20.538 03:07:54 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:07:20.538 03:07:54 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:07:20.538 03:07:54 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:20.538 03:07:54 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:20.538 03:07:54 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:20.538 03:07:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:20.538 03:07:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:22.441 03:07:56 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:22.441 00:07:22.441 real 0m5.490s 00:07:22.441 user 0m4.692s 00:07:22.441 sys 0m1.862s 00:07:22.441 03:07:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:22.441 03:07:56 -- common/autotest_common.sh@10 -- # set +x 00:07:22.441 ************************************ 00:07:22.441 END TEST nvmf_discovery 00:07:22.441 ************************************ 00:07:22.441 03:07:56 -- 
nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:22.441 03:07:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:22.441 03:07:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:22.441 03:07:56 -- common/autotest_common.sh@10 -- # set +x 00:07:22.700 ************************************ 00:07:22.700 START TEST nvmf_referrals 00:07:22.700 ************************************ 00:07:22.700 03:07:56 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:22.700 * Looking for test storage... 00:07:22.700 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:22.700 03:07:57 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:22.700 03:07:57 -- nvmf/common.sh@7 -- # uname -s 00:07:22.700 03:07:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:22.700 03:07:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:22.700 03:07:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:22.700 03:07:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:22.700 03:07:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:22.700 03:07:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:22.700 03:07:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:22.700 03:07:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:22.700 03:07:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:22.700 03:07:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:22.700 03:07:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:22.700 03:07:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:22.700 03:07:57 -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:22.700 03:07:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:22.700 03:07:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:22.700 03:07:57 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:22.700 03:07:57 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:22.700 03:07:57 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:22.700 03:07:57 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:22.700 03:07:57 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:22.700 03:07:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.700 03:07:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.700 03:07:57 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.700 03:07:57 -- paths/export.sh@5 -- # export PATH 00:07:22.700 03:07:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.700 03:07:57 -- nvmf/common.sh@47 -- # : 0 00:07:22.700 03:07:57 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:22.700 03:07:57 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:22.700 03:07:57 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:22.700 03:07:57 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:22.700 03:07:57 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:22.700 03:07:57 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:22.700 03:07:57 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:22.700 03:07:57 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:22.700 03:07:57 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:22.700 03:07:57 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:22.700 03:07:57 -- 
target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:07:22.700 03:07:57 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:22.700 03:07:57 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:22.700 03:07:57 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:22.700 03:07:57 -- target/referrals.sh@37 -- # nvmftestinit 00:07:22.700 03:07:57 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:07:22.700 03:07:57 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:22.700 03:07:57 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:22.700 03:07:57 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:22.700 03:07:57 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:22.700 03:07:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:22.700 03:07:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:22.700 03:07:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:22.700 03:07:57 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:22.700 03:07:57 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:22.700 03:07:57 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:22.700 03:07:57 -- common/autotest_common.sh@10 -- # set +x 00:07:24.604 03:07:59 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:24.604 03:07:59 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:24.604 03:07:59 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:24.604 03:07:59 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:24.604 03:07:59 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:24.604 03:07:59 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:24.604 03:07:59 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:24.604 03:07:59 -- nvmf/common.sh@295 -- # net_devs=() 00:07:24.604 03:07:59 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:24.604 03:07:59 -- nvmf/common.sh@296 -- # e810=() 00:07:24.604 03:07:59 -- nvmf/common.sh@296 -- # local 
-ga e810 00:07:24.604 03:07:59 -- nvmf/common.sh@297 -- # x722=() 00:07:24.604 03:07:59 -- nvmf/common.sh@297 -- # local -ga x722 00:07:24.604 03:07:59 -- nvmf/common.sh@298 -- # mlx=() 00:07:24.604 03:07:59 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:24.604 03:07:59 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:24.604 03:07:59 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:24.604 03:07:59 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:24.604 03:07:59 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:24.604 03:07:59 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:24.604 03:07:59 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:24.604 03:07:59 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:24.604 03:07:59 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:24.604 03:07:59 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:24.604 03:07:59 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:24.604 03:07:59 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:24.604 03:07:59 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:24.604 03:07:59 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:24.604 03:07:59 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:24.604 03:07:59 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:24.604 03:07:59 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:24.604 03:07:59 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:24.604 03:07:59 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:24.604 03:07:59 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:24.604 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:24.604 03:07:59 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:24.604 03:07:59 -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:24.604 03:07:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:24.604 03:07:59 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:24.604 03:07:59 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:24.604 03:07:59 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:24.604 03:07:59 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:24.604 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:24.604 03:07:59 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:24.604 03:07:59 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:24.604 03:07:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:24.604 03:07:59 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:24.604 03:07:59 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:24.604 03:07:59 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:24.604 03:07:59 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:24.604 03:07:59 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:24.604 03:07:59 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:24.604 03:07:59 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:24.604 03:07:59 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:24.604 03:07:59 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:24.604 03:07:59 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:24.604 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:24.604 03:07:59 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:24.604 03:07:59 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:24.604 03:07:59 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:24.604 03:07:59 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:24.604 03:07:59 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:24.604 03:07:59 -- nvmf/common.sh@389 -- # echo 
'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:24.604 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:24.604 03:07:59 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:24.604 03:07:59 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:24.604 03:07:59 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:24.604 03:07:59 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:24.604 03:07:59 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:07:24.604 03:07:59 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:07:24.604 03:07:59 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:24.604 03:07:59 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:24.604 03:07:59 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:24.604 03:07:59 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:24.604 03:07:59 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:24.604 03:07:59 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:24.604 03:07:59 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:24.604 03:07:59 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:24.604 03:07:59 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:24.604 03:07:59 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:24.604 03:07:59 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:24.604 03:07:59 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:24.604 03:07:59 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:24.863 03:07:59 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:24.863 03:07:59 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:24.863 03:07:59 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:24.863 03:07:59 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:24.863 03:07:59 -- nvmf/common.sh@261 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:07:24.863 03:07:59 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:24.863 03:07:59 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:24.863 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:24.863 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:07:24.863 00:07:24.863 --- 10.0.0.2 ping statistics --- 00:07:24.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:24.863 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:07:24.863 03:07:59 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:24.863 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:24.863 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:07:24.863 00:07:24.863 --- 10.0.0.1 ping statistics --- 00:07:24.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:24.863 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:07:24.863 03:07:59 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:24.863 03:07:59 -- nvmf/common.sh@411 -- # return 0 00:07:24.863 03:07:59 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:24.863 03:07:59 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:24.863 03:07:59 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:07:24.863 03:07:59 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:07:24.863 03:07:59 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:24.863 03:07:59 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:07:24.863 03:07:59 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:07:24.863 03:07:59 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:07:24.863 03:07:59 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:24.863 03:07:59 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:24.863 03:07:59 -- common/autotest_common.sh@10 -- # set +x 00:07:24.863 03:07:59 -- nvmf/common.sh@470 -- # nvmfpid=1402048 00:07:24.863 03:07:59 
-- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:24.863 03:07:59 -- nvmf/common.sh@471 -- # waitforlisten 1402048 00:07:24.863 03:07:59 -- common/autotest_common.sh@817 -- # '[' -z 1402048 ']' 00:07:24.863 03:07:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:24.863 03:07:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:24.863 03:07:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:24.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:24.863 03:07:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:24.863 03:07:59 -- common/autotest_common.sh@10 -- # set +x 00:07:24.863 [2024-04-25 03:07:59.301862] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:07:24.864 [2024-04-25 03:07:59.301957] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:24.864 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.122 [2024-04-25 03:07:59.367070] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:25.122 [2024-04-25 03:07:59.475250] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:25.122 [2024-04-25 03:07:59.475308] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:25.122 [2024-04-25 03:07:59.475338] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:25.122 [2024-04-25 03:07:59.475350] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:07:25.122 [2024-04-25 03:07:59.475360] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:25.122 [2024-04-25 03:07:59.475503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:25.122 [2024-04-25 03:07:59.475568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:25.122 [2024-04-25 03:07:59.475598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:25.122 [2024-04-25 03:07:59.475601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.122 03:07:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:25.122 03:07:59 -- common/autotest_common.sh@850 -- # return 0 00:07:25.122 03:07:59 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:25.122 03:07:59 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:25.122 03:07:59 -- common/autotest_common.sh@10 -- # set +x 00:07:25.380 03:07:59 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:25.380 03:07:59 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:25.380 03:07:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:25.380 03:07:59 -- common/autotest_common.sh@10 -- # set +x 00:07:25.380 [2024-04-25 03:07:59.637439] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:25.380 03:07:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:25.380 03:07:59 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:07:25.380 03:07:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:25.380 03:07:59 -- common/autotest_common.sh@10 -- # set +x 00:07:25.380 [2024-04-25 03:07:59.649682] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:07:25.380 03:07:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:25.380 03:07:59 -- target/referrals.sh@44 -- 
# rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:07:25.380 03:07:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:25.380 03:07:59 -- common/autotest_common.sh@10 -- # set +x 00:07:25.380 03:07:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:25.380 03:07:59 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:07:25.380 03:07:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:25.381 03:07:59 -- common/autotest_common.sh@10 -- # set +x 00:07:25.381 03:07:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:25.381 03:07:59 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:07:25.381 03:07:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:25.381 03:07:59 -- common/autotest_common.sh@10 -- # set +x 00:07:25.381 03:07:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:25.381 03:07:59 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:25.381 03:07:59 -- target/referrals.sh@48 -- # jq length 00:07:25.381 03:07:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:25.381 03:07:59 -- common/autotest_common.sh@10 -- # set +x 00:07:25.381 03:07:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:25.381 03:07:59 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:07:25.381 03:07:59 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:07:25.381 03:07:59 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:25.381 03:07:59 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:25.381 03:07:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:25.381 03:07:59 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:25.381 03:07:59 -- common/autotest_common.sh@10 -- # set +x 00:07:25.381 03:07:59 -- target/referrals.sh@21 -- # sort 00:07:25.381 03:07:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:25.381 03:07:59 -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:25.381 03:07:59 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:25.381 03:07:59 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:07:25.381 03:07:59 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:25.381 03:07:59 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:25.381 03:07:59 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:25.381 03:07:59 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:25.381 03:07:59 -- target/referrals.sh@26 -- # sort 00:07:25.639 03:07:59 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:25.639 03:07:59 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:25.639 03:07:59 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:07:25.639 03:07:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:25.639 03:07:59 -- common/autotest_common.sh@10 -- # set +x 00:07:25.639 03:07:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:25.639 03:07:59 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:07:25.639 03:07:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:25.639 03:07:59 -- common/autotest_common.sh@10 -- # set +x 00:07:25.639 03:07:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:25.639 03:07:59 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:07:25.639 03:07:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:25.639 03:07:59 -- 
common/autotest_common.sh@10 -- # set +x 00:07:25.639 03:07:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:25.639 03:07:59 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:25.639 03:07:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:25.639 03:07:59 -- target/referrals.sh@56 -- # jq length 00:07:25.639 03:07:59 -- common/autotest_common.sh@10 -- # set +x 00:07:25.639 03:07:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:25.639 03:07:59 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:07:25.639 03:07:59 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:07:25.639 03:07:59 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:25.639 03:07:59 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:25.639 03:07:59 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:25.639 03:07:59 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:25.639 03:07:59 -- target/referrals.sh@26 -- # sort 00:07:25.639 03:08:00 -- target/referrals.sh@26 -- # echo 00:07:25.639 03:08:00 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:07:25.639 03:08:00 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:07:25.639 03:08:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:25.639 03:08:00 -- common/autotest_common.sh@10 -- # set +x 00:07:25.639 03:08:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:25.639 03:08:00 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:25.639 03:08:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:25.639 03:08:00 -- common/autotest_common.sh@10 -- # set +x 00:07:25.639 03:08:00 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:25.639 03:08:00 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:07:25.639 03:08:00 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:25.639 03:08:00 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:25.639 03:08:00 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:25.639 03:08:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:25.639 03:08:00 -- common/autotest_common.sh@10 -- # set +x 00:07:25.639 03:08:00 -- target/referrals.sh@21 -- # sort 00:07:25.639 03:08:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:25.639 03:08:00 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:07:25.639 03:08:00 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:25.639 03:08:00 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:07:25.639 03:08:00 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:25.639 03:08:00 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:25.640 03:08:00 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:25.640 03:08:00 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:25.640 03:08:00 -- target/referrals.sh@26 -- # sort 00:07:25.900 03:08:00 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:07:25.900 03:08:00 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:25.900 03:08:00 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:07:25.900 03:08:00 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:25.900 03:08:00 -- target/referrals.sh@67 -- # jq -r .subnqn 00:07:25.901 03:08:00 -- target/referrals.sh@33 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:25.901 03:08:00 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:25.901 03:08:00 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:07:25.901 03:08:00 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:07:25.901 03:08:00 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:25.901 03:08:00 -- target/referrals.sh@68 -- # jq -r .subnqn 00:07:25.901 03:08:00 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:25.901 03:08:00 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:26.160 03:08:00 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:26.160 03:08:00 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:26.160 03:08:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:26.160 03:08:00 -- common/autotest_common.sh@10 -- # set +x 00:07:26.160 03:08:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:26.160 03:08:00 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:07:26.160 03:08:00 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:26.160 03:08:00 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:26.160 03:08:00 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:26.160 03:08:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:26.160 03:08:00 -- 
common/autotest_common.sh@10 -- # set +x 00:07:26.160 03:08:00 -- target/referrals.sh@21 -- # sort 00:07:26.160 03:08:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:26.160 03:08:00 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:07:26.160 03:08:00 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:26.160 03:08:00 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:07:26.160 03:08:00 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:26.160 03:08:00 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:26.160 03:08:00 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:26.160 03:08:00 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:26.160 03:08:00 -- target/referrals.sh@26 -- # sort 00:07:26.160 03:08:00 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:07:26.160 03:08:00 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:26.160 03:08:00 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:07:26.160 03:08:00 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:26.160 03:08:00 -- target/referrals.sh@75 -- # jq -r .subnqn 00:07:26.160 03:08:00 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:26.160 03:08:00 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:26.160 03:08:00 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:07:26.160 03:08:00 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:07:26.160 03:08:00 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:26.160 
03:08:00 -- target/referrals.sh@76 -- # jq -r .subnqn 00:07:26.160 03:08:00 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:26.160 03:08:00 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:26.419 03:08:00 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:26.419 03:08:00 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:07:26.419 03:08:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:26.419 03:08:00 -- common/autotest_common.sh@10 -- # set +x 00:07:26.419 03:08:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:26.419 03:08:00 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:26.419 03:08:00 -- target/referrals.sh@82 -- # jq length 00:07:26.419 03:08:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:26.419 03:08:00 -- common/autotest_common.sh@10 -- # set +x 00:07:26.419 03:08:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:26.419 03:08:00 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:07:26.419 03:08:00 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:07:26.419 03:08:00 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:26.419 03:08:00 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:26.419 03:08:00 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:26.419 03:08:00 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:26.419 03:08:00 -- 
target/referrals.sh@26 -- # sort 00:07:26.677 03:08:00 -- target/referrals.sh@26 -- # echo 00:07:26.677 03:08:00 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:07:26.678 03:08:00 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:07:26.678 03:08:00 -- target/referrals.sh@86 -- # nvmftestfini 00:07:26.678 03:08:00 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:26.678 03:08:00 -- nvmf/common.sh@117 -- # sync 00:07:26.678 03:08:00 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:26.678 03:08:00 -- nvmf/common.sh@120 -- # set +e 00:07:26.678 03:08:00 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:26.678 03:08:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:26.678 rmmod nvme_tcp 00:07:26.678 rmmod nvme_fabrics 00:07:26.678 rmmod nvme_keyring 00:07:26.678 03:08:01 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:26.678 03:08:01 -- nvmf/common.sh@124 -- # set -e 00:07:26.678 03:08:01 -- nvmf/common.sh@125 -- # return 0 00:07:26.678 03:08:01 -- nvmf/common.sh@478 -- # '[' -n 1402048 ']' 00:07:26.678 03:08:01 -- nvmf/common.sh@479 -- # killprocess 1402048 00:07:26.678 03:08:01 -- common/autotest_common.sh@936 -- # '[' -z 1402048 ']' 00:07:26.678 03:08:01 -- common/autotest_common.sh@940 -- # kill -0 1402048 00:07:26.678 03:08:01 -- common/autotest_common.sh@941 -- # uname 00:07:26.678 03:08:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:26.678 03:08:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1402048 00:07:26.678 03:08:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:26.678 03:08:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:26.678 03:08:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1402048' 00:07:26.678 killing process with pid 1402048 00:07:26.678 03:08:01 -- common/autotest_common.sh@955 -- # kill 1402048 00:07:26.678 03:08:01 -- common/autotest_common.sh@960 -- # wait 1402048 00:07:26.937 03:08:01 -- 
nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:26.937 03:08:01 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:07:26.937 03:08:01 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:07:26.937 03:08:01 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:26.937 03:08:01 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:26.937 03:08:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:26.937 03:08:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:26.937 03:08:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:29.472 03:08:03 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:29.472 00:07:29.472 real 0m6.366s 00:07:29.472 user 0m8.463s 00:07:29.472 sys 0m2.169s 00:07:29.472 03:08:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:29.472 03:08:03 -- common/autotest_common.sh@10 -- # set +x 00:07:29.472 ************************************ 00:07:29.472 END TEST nvmf_referrals 00:07:29.472 ************************************ 00:07:29.472 03:08:03 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:29.472 03:08:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:29.472 03:08:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:29.472 03:08:03 -- common/autotest_common.sh@10 -- # set +x 00:07:29.472 ************************************ 00:07:29.472 START TEST nvmf_connect_disconnect 00:07:29.472 ************************************ 00:07:29.472 03:08:03 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:29.472 * Looking for test storage... 
00:07:29.472 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:29.472 03:08:03 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:29.472 03:08:03 -- nvmf/common.sh@7 -- # uname -s 00:07:29.472 03:08:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:29.472 03:08:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:29.472 03:08:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:29.472 03:08:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:29.472 03:08:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:29.472 03:08:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:29.472 03:08:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:29.472 03:08:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:29.472 03:08:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:29.472 03:08:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:29.472 03:08:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:29.472 03:08:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:29.472 03:08:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:29.472 03:08:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:29.472 03:08:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:29.472 03:08:03 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:29.472 03:08:03 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:29.472 03:08:03 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:29.472 03:08:03 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:29.472 03:08:03 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:29.472 03:08:03 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.472 03:08:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.472 03:08:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.473 03:08:03 -- paths/export.sh@5 -- # export PATH 00:07:29.473 03:08:03 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.473 03:08:03 -- nvmf/common.sh@47 -- # : 0 00:07:29.473 03:08:03 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:29.473 03:08:03 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:29.473 03:08:03 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:29.473 03:08:03 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:29.473 03:08:03 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:29.473 03:08:03 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:29.473 03:08:03 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:29.473 03:08:03 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:29.473 03:08:03 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:29.473 03:08:03 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:29.473 03:08:03 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:07:29.473 03:08:03 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:07:29.473 03:08:03 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:29.473 03:08:03 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:29.473 03:08:03 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:29.473 03:08:03 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:29.473 03:08:03 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:29.473 03:08:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:29.473 03:08:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:07:29.473 03:08:03 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:29.473 03:08:03 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:29.473 03:08:03 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:29.473 03:08:03 -- common/autotest_common.sh@10 -- # set +x 00:07:31.374 03:08:05 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:31.374 03:08:05 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:31.374 03:08:05 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:31.374 03:08:05 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:31.374 03:08:05 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:31.374 03:08:05 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:31.374 03:08:05 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:31.374 03:08:05 -- nvmf/common.sh@295 -- # net_devs=() 00:07:31.374 03:08:05 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:31.374 03:08:05 -- nvmf/common.sh@296 -- # e810=() 00:07:31.374 03:08:05 -- nvmf/common.sh@296 -- # local -ga e810 00:07:31.374 03:08:05 -- nvmf/common.sh@297 -- # x722=() 00:07:31.374 03:08:05 -- nvmf/common.sh@297 -- # local -ga x722 00:07:31.374 03:08:05 -- nvmf/common.sh@298 -- # mlx=() 00:07:31.374 03:08:05 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:31.374 03:08:05 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:31.374 03:08:05 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:31.374 03:08:05 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:31.374 03:08:05 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:31.374 03:08:05 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:31.375 03:08:05 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:31.375 03:08:05 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:31.375 03:08:05 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
00:07:31.375 03:08:05 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:31.375 03:08:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:31.375 03:08:05 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:31.375 03:08:05 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:31.375 03:08:05 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:31.375 03:08:05 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:31.375 03:08:05 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:31.375 03:08:05 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:31.375 03:08:05 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:31.375 03:08:05 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:31.375 03:08:05 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:31.375 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:31.375 03:08:05 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:31.375 03:08:05 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:31.375 03:08:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:31.375 03:08:05 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:31.375 03:08:05 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:31.375 03:08:05 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:31.375 03:08:05 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:31.375 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:31.375 03:08:05 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:31.375 03:08:05 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:31.375 03:08:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:31.375 03:08:05 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:31.375 03:08:05 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:31.375 03:08:05 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:31.375 03:08:05 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:31.375 
03:08:05 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:31.375 03:08:05 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:31.375 03:08:05 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:31.375 03:08:05 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:31.375 03:08:05 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:31.375 03:08:05 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:31.375 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:31.375 03:08:05 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:31.375 03:08:05 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:31.375 03:08:05 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:31.375 03:08:05 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:31.375 03:08:05 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:31.375 03:08:05 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:31.375 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:31.375 03:08:05 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:31.375 03:08:05 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:31.375 03:08:05 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:31.375 03:08:05 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:31.375 03:08:05 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:07:31.375 03:08:05 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:07:31.375 03:08:05 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:31.375 03:08:05 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:31.375 03:08:05 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:31.375 03:08:05 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:31.375 03:08:05 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:31.375 03:08:05 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:31.375 03:08:05 -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:31.375 03:08:05 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:31.375 03:08:05 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:31.375 03:08:05 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:31.375 03:08:05 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:31.375 03:08:05 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:31.375 03:08:05 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:31.375 03:08:05 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:31.375 03:08:05 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:31.375 03:08:05 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:31.375 03:08:05 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:31.375 03:08:05 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:31.375 03:08:05 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:31.375 03:08:05 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:31.375 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:31.375 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.270 ms 00:07:31.375 00:07:31.375 --- 10.0.0.2 ping statistics --- 00:07:31.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:31.375 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:07:31.375 03:08:05 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:31.375 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:31.375 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:07:31.375 00:07:31.375 --- 10.0.0.1 ping statistics --- 00:07:31.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:31.375 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:07:31.375 03:08:05 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:31.375 03:08:05 -- nvmf/common.sh@411 -- # return 0 00:07:31.375 03:08:05 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:31.375 03:08:05 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:31.375 03:08:05 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:07:31.375 03:08:05 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:07:31.375 03:08:05 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:31.375 03:08:05 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:07:31.375 03:08:05 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:07:31.375 03:08:05 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:07:31.375 03:08:05 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:31.375 03:08:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:31.375 03:08:05 -- common/autotest_common.sh@10 -- # set +x 00:07:31.375 03:08:05 -- nvmf/common.sh@470 -- # nvmfpid=1404341 00:07:31.375 03:08:05 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:31.375 03:08:05 -- nvmf/common.sh@471 -- # waitforlisten 1404341 00:07:31.375 03:08:05 -- common/autotest_common.sh@817 -- # '[' -z 1404341 ']' 00:07:31.375 03:08:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.375 03:08:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:31.375 03:08:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:31.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:31.375 03:08:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:31.375 03:08:05 -- common/autotest_common.sh@10 -- # set +x 00:07:31.375 [2024-04-25 03:08:05.805431] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:07:31.375 [2024-04-25 03:08:05.805522] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:31.375 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.375 [2024-04-25 03:08:05.868371] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:31.633 [2024-04-25 03:08:05.981909] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:31.633 [2024-04-25 03:08:05.981962] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:31.633 [2024-04-25 03:08:05.981990] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:31.633 [2024-04-25 03:08:05.982002] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:31.633 [2024-04-25 03:08:05.982012] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:31.633 [2024-04-25 03:08:05.982092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:31.634 [2024-04-25 03:08:05.982142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:31.634 [2024-04-25 03:08:05.982172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:31.634 [2024-04-25 03:08:05.982174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.634 03:08:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:31.634 03:08:06 -- common/autotest_common.sh@850 -- # return 0 00:07:31.634 03:08:06 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:31.634 03:08:06 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:31.634 03:08:06 -- common/autotest_common.sh@10 -- # set +x 00:07:31.892 03:08:06 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:31.892 03:08:06 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:31.892 03:08:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:31.892 03:08:06 -- common/autotest_common.sh@10 -- # set +x 00:07:31.892 [2024-04-25 03:08:06.144501] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:31.892 03:08:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:31.892 03:08:06 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:07:31.892 03:08:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:31.892 03:08:06 -- common/autotest_common.sh@10 -- # set +x 00:07:31.892 03:08:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:31.892 03:08:06 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:07:31.892 03:08:06 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:31.892 03:08:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:31.892 03:08:06 -- 
common/autotest_common.sh@10 -- # set +x 00:07:31.892 03:08:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:31.892 03:08:06 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:31.892 03:08:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:31.892 03:08:06 -- common/autotest_common.sh@10 -- # set +x 00:07:31.892 03:08:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:31.892 03:08:06 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:31.892 03:08:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:31.892 03:08:06 -- common/autotest_common.sh@10 -- # set +x 00:07:31.892 [2024-04-25 03:08:06.201888] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:31.892 03:08:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:31.892 03:08:06 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:07:31.892 03:08:06 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:07:31.892 03:08:06 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:07:31.892 03:08:06 -- target/connect_disconnect.sh@34 -- # set +x 00:07:34.419 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:36.318 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:38.842 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:41.383 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:43.298 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:45.824 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:48.352 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:50.249 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:52.777 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:55.303 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:07:57.201 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:59.732 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:02.258 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:04.162 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:06.695 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:08.603 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:11.142 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:13.049 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:15.584 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:18.122 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:20.028 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:22.568 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:25.103 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:27.642 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:29.545 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:32.089 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:34.628 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:36.528 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:39.063 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:41.620 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:43.523 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:46.059 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:48.609 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:51.146 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:53.052 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:55.588 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:58.121 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:00.028 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:02.567 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:05.102 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:07.009 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:09.546 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:12.082 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:13.987 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:16.525 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:18.430 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:20.979 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:23.545 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:25.451 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:27.991 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:30.524 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:32.430 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:34.968 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:37.507 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:39.418 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:41.954 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:44.493 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:46.400 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:48.939 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:50.907 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:53.440 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:55.353 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:57.888 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:00.421 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:02.329 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:04.864 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:06.769 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:09.308 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:11.850 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:13.759 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:16.298 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:18.837 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:20.740 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:23.275 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:25.183 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:27.729 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:30.259 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:32.166 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:34.703 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:37.242 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:39.142 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:41.679 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:44.218 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:46.141 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:48.681 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:51.220 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:53.126 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:55.665 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:58.203 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:00.107 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:02.644 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:05.181 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:07.085 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:09.619 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:12.158 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:14.691 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:16.597 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:19.142 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:21.674 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:23.582 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:23.582 03:11:57 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:11:23.582 03:11:57 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:11:23.582 03:11:57 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:23.582 03:11:57 -- nvmf/common.sh@117 -- # sync 00:11:23.582 03:11:57 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:23.582 03:11:57 -- nvmf/common.sh@120 -- # set +e 00:11:23.582 03:11:57 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:23.582 03:11:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:23.582 rmmod nvme_tcp 00:11:23.582 rmmod nvme_fabrics 00:11:23.582 rmmod nvme_keyring 00:11:23.582 03:11:57 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:23.582 03:11:57 -- nvmf/common.sh@124 -- # set -e 00:11:23.582 03:11:57 -- nvmf/common.sh@125 -- # return 0 00:11:23.582 03:11:57 -- nvmf/common.sh@478 -- # '[' -n 1404341 ']' 00:11:23.582 03:11:57 -- nvmf/common.sh@479 -- # killprocess 1404341 00:11:23.582 03:11:57 -- common/autotest_common.sh@936 -- # '[' -z 1404341 ']' 00:11:23.582 03:11:57 -- common/autotest_common.sh@940 -- # kill -0 1404341 00:11:23.582 03:11:57 -- common/autotest_common.sh@941 -- # uname 00:11:23.582 03:11:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:23.582 03:11:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 
1404341 00:11:23.582 03:11:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:23.582 03:11:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:23.582 03:11:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1404341' 00:11:23.582 killing process with pid 1404341 00:11:23.582 03:11:57 -- common/autotest_common.sh@955 -- # kill 1404341 00:11:23.582 03:11:57 -- common/autotest_common.sh@960 -- # wait 1404341 00:11:23.841 03:11:58 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:23.841 03:11:58 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:23.841 03:11:58 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:23.841 03:11:58 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:23.841 03:11:58 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:23.841 03:11:58 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:23.841 03:11:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:23.841 03:11:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:26.377 03:12:00 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:26.377 00:11:26.377 real 3m56.842s 00:11:26.377 user 15m1.921s 00:11:26.377 sys 0m34.518s 00:11:26.377 03:12:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:26.377 03:12:00 -- common/autotest_common.sh@10 -- # set +x 00:11:26.377 ************************************ 00:11:26.377 END TEST nvmf_connect_disconnect 00:11:26.377 ************************************ 00:11:26.377 03:12:00 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:26.377 03:12:00 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:26.377 03:12:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:26.377 03:12:00 -- common/autotest_common.sh@10 -- # set +x 00:11:26.377 ************************************ 00:11:26.377 
START TEST nvmf_multitarget 00:11:26.377 ************************************ 00:11:26.377 03:12:00 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:26.377 * Looking for test storage... 00:11:26.377 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:26.377 03:12:00 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:26.377 03:12:00 -- nvmf/common.sh@7 -- # uname -s 00:11:26.377 03:12:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:26.377 03:12:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:26.377 03:12:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:26.377 03:12:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:26.377 03:12:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:26.377 03:12:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:26.377 03:12:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:26.377 03:12:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:26.377 03:12:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:26.377 03:12:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:26.377 03:12:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:26.377 03:12:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:26.377 03:12:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:26.377 03:12:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:26.377 03:12:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:26.377 03:12:00 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:26.377 03:12:00 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:26.377 03:12:00 -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:26.377 03:12:00 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:26.377 03:12:00 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:26.377 03:12:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.377 03:12:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.377 03:12:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.377 03:12:00 -- 
paths/export.sh@5 -- # export PATH 00:11:26.377 03:12:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.377 03:12:00 -- nvmf/common.sh@47 -- # : 0 00:11:26.377 03:12:00 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:26.377 03:12:00 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:26.377 03:12:00 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:26.377 03:12:00 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:26.377 03:12:00 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:26.377 03:12:00 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:26.377 03:12:00 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:26.377 03:12:00 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:26.377 03:12:00 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:26.377 03:12:00 -- target/multitarget.sh@15 -- # nvmftestinit 00:11:26.377 03:12:00 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:11:26.377 03:12:00 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:26.377 03:12:00 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:26.377 03:12:00 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:26.377 03:12:00 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:26.377 03:12:00 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:26.377 03:12:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:26.377 
03:12:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:26.377 03:12:00 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:26.377 03:12:00 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:26.377 03:12:00 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:26.377 03:12:00 -- common/autotest_common.sh@10 -- # set +x 00:11:28.282 03:12:02 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:28.282 03:12:02 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:28.282 03:12:02 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:28.282 03:12:02 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:28.282 03:12:02 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:28.282 03:12:02 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:28.282 03:12:02 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:28.282 03:12:02 -- nvmf/common.sh@295 -- # net_devs=() 00:11:28.282 03:12:02 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:28.282 03:12:02 -- nvmf/common.sh@296 -- # e810=() 00:11:28.282 03:12:02 -- nvmf/common.sh@296 -- # local -ga e810 00:11:28.282 03:12:02 -- nvmf/common.sh@297 -- # x722=() 00:11:28.282 03:12:02 -- nvmf/common.sh@297 -- # local -ga x722 00:11:28.282 03:12:02 -- nvmf/common.sh@298 -- # mlx=() 00:11:28.282 03:12:02 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:28.282 03:12:02 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:28.282 03:12:02 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:28.282 03:12:02 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:28.282 03:12:02 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:28.282 03:12:02 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:28.282 03:12:02 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:28.282 03:12:02 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:28.282 03:12:02 -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:28.282 03:12:02 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:28.282 03:12:02 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:28.282 03:12:02 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:28.282 03:12:02 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:28.282 03:12:02 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:28.282 03:12:02 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:28.282 03:12:02 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:28.282 03:12:02 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:28.282 03:12:02 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:28.282 03:12:02 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:28.282 03:12:02 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:28.282 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:28.282 03:12:02 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:28.282 03:12:02 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:28.282 03:12:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:28.282 03:12:02 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:28.282 03:12:02 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:28.282 03:12:02 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:28.282 03:12:02 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:28.282 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:28.282 03:12:02 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:28.282 03:12:02 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:28.282 03:12:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:28.282 03:12:02 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:28.282 03:12:02 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:28.282 03:12:02 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:28.282 
03:12:02 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:28.282 03:12:02 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:28.282 03:12:02 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:28.282 03:12:02 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:28.282 03:12:02 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:28.282 03:12:02 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:28.282 03:12:02 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:28.282 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:28.282 03:12:02 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:28.282 03:12:02 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:28.282 03:12:02 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:28.282 03:12:02 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:28.282 03:12:02 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:28.282 03:12:02 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:28.282 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:28.282 03:12:02 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:28.282 03:12:02 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:28.282 03:12:02 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:28.282 03:12:02 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:28.282 03:12:02 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:11:28.282 03:12:02 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:11:28.282 03:12:02 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:28.282 03:12:02 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:28.282 03:12:02 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:28.282 03:12:02 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:28.282 03:12:02 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:28.282 03:12:02 -- 
nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:28.282 03:12:02 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:28.282 03:12:02 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:28.282 03:12:02 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:28.282 03:12:02 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:28.282 03:12:02 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:28.282 03:12:02 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:28.282 03:12:02 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:28.282 03:12:02 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:28.282 03:12:02 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:28.282 03:12:02 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:28.282 03:12:02 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:28.282 03:12:02 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:28.282 03:12:02 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:28.282 03:12:02 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:28.282 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:28.282 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.158 ms 00:11:28.282 00:11:28.282 --- 10.0.0.2 ping statistics --- 00:11:28.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:28.282 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:11:28.282 03:12:02 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:28.282 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:28.282 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:11:28.282 00:11:28.282 --- 10.0.0.1 ping statistics --- 00:11:28.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:28.282 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:11:28.282 03:12:02 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:28.282 03:12:02 -- nvmf/common.sh@411 -- # return 0 00:11:28.282 03:12:02 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:28.282 03:12:02 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:28.283 03:12:02 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:28.283 03:12:02 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:28.283 03:12:02 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:28.283 03:12:02 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:28.283 03:12:02 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:28.283 03:12:02 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:11:28.283 03:12:02 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:28.283 03:12:02 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:28.283 03:12:02 -- common/autotest_common.sh@10 -- # set +x 00:11:28.283 03:12:02 -- nvmf/common.sh@470 -- # nvmfpid=1435526 00:11:28.283 03:12:02 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:28.283 03:12:02 -- nvmf/common.sh@471 -- # waitforlisten 1435526 00:11:28.283 03:12:02 -- common/autotest_common.sh@817 -- # '[' -z 1435526 ']' 00:11:28.283 03:12:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:28.283 03:12:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:28.283 03:12:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:28.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:28.283 03:12:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:28.283 03:12:02 -- common/autotest_common.sh@10 -- # set +x 00:11:28.283 [2024-04-25 03:12:02.507060] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:11:28.283 [2024-04-25 03:12:02.507131] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:28.283 EAL: No free 2048 kB hugepages reported on node 1 00:11:28.283 [2024-04-25 03:12:02.571593] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:28.283 [2024-04-25 03:12:02.680044] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:28.283 [2024-04-25 03:12:02.680100] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:28.283 [2024-04-25 03:12:02.680114] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:28.283 [2024-04-25 03:12:02.680125] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:28.283 [2024-04-25 03:12:02.680135] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:28.283 [2024-04-25 03:12:02.680184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:28.283 [2024-04-25 03:12:02.680244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:28.283 [2024-04-25 03:12:02.680319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:28.283 [2024-04-25 03:12:02.680322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:28.542 03:12:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:28.542 03:12:02 -- common/autotest_common.sh@850 -- # return 0 00:11:28.542 03:12:02 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:28.542 03:12:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:28.542 03:12:02 -- common/autotest_common.sh@10 -- # set +x 00:11:28.542 03:12:02 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:28.542 03:12:02 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:28.542 03:12:02 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:28.542 03:12:02 -- target/multitarget.sh@21 -- # jq length 00:11:28.542 03:12:02 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:11:28.542 03:12:02 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:11:28.801 "nvmf_tgt_1" 00:11:28.801 03:12:03 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:11:28.801 "nvmf_tgt_2" 00:11:28.801 03:12:03 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:28.801 03:12:03 -- target/multitarget.sh@28 -- # jq length 00:11:29.059 
03:12:03 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:11:29.059 03:12:03 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:11:29.059 true 00:11:29.059 03:12:03 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:11:29.059 true 00:11:29.059 03:12:03 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:29.059 03:12:03 -- target/multitarget.sh@35 -- # jq length 00:11:29.319 03:12:03 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:11:29.319 03:12:03 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:29.319 03:12:03 -- target/multitarget.sh@41 -- # nvmftestfini 00:11:29.319 03:12:03 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:29.319 03:12:03 -- nvmf/common.sh@117 -- # sync 00:11:29.319 03:12:03 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:29.319 03:12:03 -- nvmf/common.sh@120 -- # set +e 00:11:29.319 03:12:03 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:29.319 03:12:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:29.319 rmmod nvme_tcp 00:11:29.319 rmmod nvme_fabrics 00:11:29.319 rmmod nvme_keyring 00:11:29.319 03:12:03 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:29.319 03:12:03 -- nvmf/common.sh@124 -- # set -e 00:11:29.319 03:12:03 -- nvmf/common.sh@125 -- # return 0 00:11:29.319 03:12:03 -- nvmf/common.sh@478 -- # '[' -n 1435526 ']' 00:11:29.319 03:12:03 -- nvmf/common.sh@479 -- # killprocess 1435526 00:11:29.319 03:12:03 -- common/autotest_common.sh@936 -- # '[' -z 1435526 ']' 00:11:29.319 03:12:03 -- common/autotest_common.sh@940 -- # kill -0 1435526 00:11:29.319 03:12:03 -- common/autotest_common.sh@941 -- # uname 00:11:29.319 03:12:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 
00:11:29.319 03:12:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1435526 00:11:29.319 03:12:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:29.319 03:12:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:29.319 03:12:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1435526' 00:11:29.319 killing process with pid 1435526 00:11:29.319 03:12:03 -- common/autotest_common.sh@955 -- # kill 1435526 00:11:29.319 03:12:03 -- common/autotest_common.sh@960 -- # wait 1435526 00:11:29.578 03:12:03 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:29.578 03:12:03 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:29.578 03:12:03 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:29.578 03:12:03 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:29.578 03:12:03 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:29.578 03:12:03 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:29.578 03:12:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:29.578 03:12:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:32.121 03:12:06 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:32.121 00:11:32.121 real 0m5.579s 00:11:32.121 user 0m6.462s 00:11:32.121 sys 0m1.794s 00:11:32.121 03:12:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:32.121 03:12:06 -- common/autotest_common.sh@10 -- # set +x 00:11:32.121 ************************************ 00:11:32.121 END TEST nvmf_multitarget 00:11:32.121 ************************************ 00:11:32.121 03:12:06 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:32.121 03:12:06 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:32.121 03:12:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:32.121 03:12:06 -- common/autotest_common.sh@10 -- # set +x 
00:11:32.121 ************************************ 00:11:32.121 START TEST nvmf_rpc 00:11:32.121 ************************************ 00:11:32.121 03:12:06 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:32.121 * Looking for test storage... 00:11:32.121 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:32.121 03:12:06 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:32.121 03:12:06 -- nvmf/common.sh@7 -- # uname -s 00:11:32.121 03:12:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:32.121 03:12:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:32.121 03:12:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:32.121 03:12:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:32.122 03:12:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:32.122 03:12:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:32.122 03:12:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:32.122 03:12:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:32.122 03:12:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:32.122 03:12:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:32.122 03:12:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:32.122 03:12:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:32.122 03:12:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:32.122 03:12:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:32.122 03:12:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:32.122 03:12:06 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:32.122 03:12:06 -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:32.122 03:12:06 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:32.122 03:12:06 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:32.122 03:12:06 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:32.122 03:12:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.122 03:12:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.122 03:12:06 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.122 03:12:06 -- paths/export.sh@5 -- # export PATH 00:11:32.122 03:12:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.122 03:12:06 -- nvmf/common.sh@47 -- # : 0 00:11:32.122 03:12:06 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:32.122 03:12:06 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:32.122 03:12:06 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:32.122 03:12:06 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:32.122 03:12:06 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:32.122 03:12:06 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:32.122 03:12:06 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:32.122 03:12:06 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:32.122 03:12:06 -- target/rpc.sh@11 -- # loops=5 00:11:32.122 03:12:06 -- target/rpc.sh@23 -- # nvmftestinit 00:11:32.122 03:12:06 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:11:32.122 
03:12:06 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:32.122 03:12:06 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:32.122 03:12:06 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:32.122 03:12:06 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:32.122 03:12:06 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:32.122 03:12:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:32.122 03:12:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:32.122 03:12:06 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:32.122 03:12:06 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:32.122 03:12:06 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:32.122 03:12:06 -- common/autotest_common.sh@10 -- # set +x 00:11:34.035 03:12:08 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:34.035 03:12:08 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:34.035 03:12:08 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:34.035 03:12:08 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:34.035 03:12:08 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:34.035 03:12:08 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:34.035 03:12:08 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:34.035 03:12:08 -- nvmf/common.sh@295 -- # net_devs=() 00:11:34.035 03:12:08 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:34.035 03:12:08 -- nvmf/common.sh@296 -- # e810=() 00:11:34.035 03:12:08 -- nvmf/common.sh@296 -- # local -ga e810 00:11:34.035 03:12:08 -- nvmf/common.sh@297 -- # x722=() 00:11:34.035 03:12:08 -- nvmf/common.sh@297 -- # local -ga x722 00:11:34.035 03:12:08 -- nvmf/common.sh@298 -- # mlx=() 00:11:34.035 03:12:08 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:34.035 03:12:08 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:34.035 03:12:08 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:34.035 03:12:08 -- 
nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:34.035 03:12:08 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:34.035 03:12:08 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:34.035 03:12:08 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:34.035 03:12:08 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:34.035 03:12:08 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:34.035 03:12:08 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:34.035 03:12:08 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:34.035 03:12:08 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:34.035 03:12:08 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:34.035 03:12:08 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:34.035 03:12:08 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:34.035 03:12:08 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:34.035 03:12:08 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:34.035 03:12:08 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:34.035 03:12:08 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:34.035 03:12:08 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:34.035 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:34.035 03:12:08 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:34.035 03:12:08 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:34.035 03:12:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:34.035 03:12:08 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:34.035 03:12:08 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:34.035 03:12:08 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:34.035 03:12:08 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:34.035 Found 
0000:0a:00.1 (0x8086 - 0x159b) 00:11:34.035 03:12:08 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:34.035 03:12:08 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:34.035 03:12:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:34.035 03:12:08 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:34.035 03:12:08 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:34.035 03:12:08 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:34.035 03:12:08 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:34.035 03:12:08 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:34.035 03:12:08 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:34.035 03:12:08 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:34.035 03:12:08 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:34.035 03:12:08 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:34.035 03:12:08 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:34.035 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:34.035 03:12:08 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:34.035 03:12:08 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:34.035 03:12:08 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:34.035 03:12:08 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:34.035 03:12:08 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:34.035 03:12:08 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:34.035 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:34.035 03:12:08 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:34.035 03:12:08 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:34.035 03:12:08 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:34.035 03:12:08 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:34.035 03:12:08 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:11:34.035 
03:12:08 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:11:34.035 03:12:08 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:34.035 03:12:08 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:34.035 03:12:08 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:34.035 03:12:08 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:34.035 03:12:08 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:34.035 03:12:08 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:34.035 03:12:08 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:34.035 03:12:08 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:34.035 03:12:08 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:34.035 03:12:08 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:34.035 03:12:08 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:34.035 03:12:08 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:34.035 03:12:08 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:34.035 03:12:08 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:34.035 03:12:08 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:34.035 03:12:08 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:34.035 03:12:08 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:34.035 03:12:08 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:34.035 03:12:08 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:34.035 03:12:08 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:34.035 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:34.035 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:11:34.035 00:11:34.035 --- 10.0.0.2 ping statistics --- 00:11:34.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:34.035 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:11:34.035 03:12:08 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:34.035 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:34.035 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:11:34.035 00:11:34.035 --- 10.0.0.1 ping statistics --- 00:11:34.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:34.035 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:11:34.035 03:12:08 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:34.035 03:12:08 -- nvmf/common.sh@411 -- # return 0 00:11:34.035 03:12:08 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:34.035 03:12:08 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:34.035 03:12:08 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:34.035 03:12:08 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:34.035 03:12:08 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:34.035 03:12:08 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:34.035 03:12:08 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:34.035 03:12:08 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:11:34.035 03:12:08 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:34.035 03:12:08 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:34.035 03:12:08 -- common/autotest_common.sh@10 -- # set +x 00:11:34.035 03:12:08 -- nvmf/common.sh@470 -- # nvmfpid=1437991 00:11:34.035 03:12:08 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:34.035 03:12:08 -- nvmf/common.sh@471 -- # waitforlisten 1437991 00:11:34.035 03:12:08 -- common/autotest_common.sh@817 -- # 
'[' -z 1437991 ']' 00:11:34.035 03:12:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:34.035 03:12:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:34.035 03:12:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:34.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:34.035 03:12:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:34.035 03:12:08 -- common/autotest_common.sh@10 -- # set +x 00:11:34.035 [2024-04-25 03:12:08.467063] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:11:34.035 [2024-04-25 03:12:08.467141] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:34.035 EAL: No free 2048 kB hugepages reported on node 1 00:11:34.035 [2024-04-25 03:12:08.533306] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:34.295 [2024-04-25 03:12:08.642387] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:34.295 [2024-04-25 03:12:08.642451] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:34.295 [2024-04-25 03:12:08.642464] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:34.295 [2024-04-25 03:12:08.642476] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:34.295 [2024-04-25 03:12:08.642486] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:34.295 [2024-04-25 03:12:08.642562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:34.295 [2024-04-25 03:12:08.642659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:34.295 [2024-04-25 03:12:08.642689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:34.295 [2024-04-25 03:12:08.642691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.295 03:12:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:34.295 03:12:08 -- common/autotest_common.sh@850 -- # return 0 00:11:34.295 03:12:08 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:34.295 03:12:08 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:34.295 03:12:08 -- common/autotest_common.sh@10 -- # set +x 00:11:34.295 03:12:08 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:34.295 03:12:08 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:11:34.295 03:12:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:34.295 03:12:08 -- common/autotest_common.sh@10 -- # set +x 00:11:34.554 03:12:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:34.554 03:12:08 -- target/rpc.sh@26 -- # stats='{ 00:11:34.554 "tick_rate": 2700000000, 00:11:34.554 "poll_groups": [ 00:11:34.554 { 00:11:34.554 "name": "nvmf_tgt_poll_group_0", 00:11:34.554 "admin_qpairs": 0, 00:11:34.554 "io_qpairs": 0, 00:11:34.554 "current_admin_qpairs": 0, 00:11:34.554 "current_io_qpairs": 0, 00:11:34.554 "pending_bdev_io": 0, 00:11:34.554 "completed_nvme_io": 0, 00:11:34.554 "transports": [] 00:11:34.554 }, 00:11:34.554 { 00:11:34.554 "name": "nvmf_tgt_poll_group_1", 00:11:34.554 "admin_qpairs": 0, 00:11:34.554 "io_qpairs": 0, 00:11:34.554 "current_admin_qpairs": 0, 00:11:34.554 "current_io_qpairs": 0, 00:11:34.554 "pending_bdev_io": 0, 00:11:34.554 "completed_nvme_io": 0, 00:11:34.554 "transports": [] 00:11:34.554 }, 00:11:34.554 { 00:11:34.554 "name": 
"nvmf_tgt_poll_group_2", 00:11:34.554 "admin_qpairs": 0, 00:11:34.554 "io_qpairs": 0, 00:11:34.554 "current_admin_qpairs": 0, 00:11:34.554 "current_io_qpairs": 0, 00:11:34.554 "pending_bdev_io": 0, 00:11:34.554 "completed_nvme_io": 0, 00:11:34.554 "transports": [] 00:11:34.554 }, 00:11:34.554 { 00:11:34.554 "name": "nvmf_tgt_poll_group_3", 00:11:34.554 "admin_qpairs": 0, 00:11:34.554 "io_qpairs": 0, 00:11:34.554 "current_admin_qpairs": 0, 00:11:34.554 "current_io_qpairs": 0, 00:11:34.554 "pending_bdev_io": 0, 00:11:34.554 "completed_nvme_io": 0, 00:11:34.554 "transports": [] 00:11:34.554 } 00:11:34.554 ] 00:11:34.554 }' 00:11:34.554 03:12:08 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:11:34.554 03:12:08 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:11:34.554 03:12:08 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:11:34.554 03:12:08 -- target/rpc.sh@15 -- # wc -l 00:11:34.554 03:12:08 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:11:34.554 03:12:08 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:11:34.554 03:12:08 -- target/rpc.sh@29 -- # [[ null == null ]] 00:11:34.554 03:12:08 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:34.554 03:12:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:34.554 03:12:08 -- common/autotest_common.sh@10 -- # set +x 00:11:34.554 [2024-04-25 03:12:08.891722] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:34.554 03:12:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:34.554 03:12:08 -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:11:34.554 03:12:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:34.554 03:12:08 -- common/autotest_common.sh@10 -- # set +x 00:11:34.554 03:12:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:34.554 03:12:08 -- target/rpc.sh@33 -- # stats='{ 00:11:34.554 "tick_rate": 2700000000, 00:11:34.554 "poll_groups": [ 00:11:34.554 { 00:11:34.554 "name": 
"nvmf_tgt_poll_group_0", 00:11:34.554 "admin_qpairs": 0, 00:11:34.554 "io_qpairs": 0, 00:11:34.554 "current_admin_qpairs": 0, 00:11:34.554 "current_io_qpairs": 0, 00:11:34.554 "pending_bdev_io": 0, 00:11:34.554 "completed_nvme_io": 0, 00:11:34.554 "transports": [ 00:11:34.554 { 00:11:34.554 "trtype": "TCP" 00:11:34.554 } 00:11:34.554 ] 00:11:34.554 }, 00:11:34.554 { 00:11:34.554 "name": "nvmf_tgt_poll_group_1", 00:11:34.554 "admin_qpairs": 0, 00:11:34.554 "io_qpairs": 0, 00:11:34.554 "current_admin_qpairs": 0, 00:11:34.554 "current_io_qpairs": 0, 00:11:34.554 "pending_bdev_io": 0, 00:11:34.554 "completed_nvme_io": 0, 00:11:34.554 "transports": [ 00:11:34.554 { 00:11:34.554 "trtype": "TCP" 00:11:34.554 } 00:11:34.554 ] 00:11:34.554 }, 00:11:34.554 { 00:11:34.554 "name": "nvmf_tgt_poll_group_2", 00:11:34.554 "admin_qpairs": 0, 00:11:34.554 "io_qpairs": 0, 00:11:34.554 "current_admin_qpairs": 0, 00:11:34.554 "current_io_qpairs": 0, 00:11:34.554 "pending_bdev_io": 0, 00:11:34.554 "completed_nvme_io": 0, 00:11:34.554 "transports": [ 00:11:34.554 { 00:11:34.554 "trtype": "TCP" 00:11:34.554 } 00:11:34.554 ] 00:11:34.554 }, 00:11:34.554 { 00:11:34.554 "name": "nvmf_tgt_poll_group_3", 00:11:34.554 "admin_qpairs": 0, 00:11:34.554 "io_qpairs": 0, 00:11:34.554 "current_admin_qpairs": 0, 00:11:34.554 "current_io_qpairs": 0, 00:11:34.554 "pending_bdev_io": 0, 00:11:34.554 "completed_nvme_io": 0, 00:11:34.554 "transports": [ 00:11:34.554 { 00:11:34.554 "trtype": "TCP" 00:11:34.554 } 00:11:34.554 ] 00:11:34.554 } 00:11:34.554 ] 00:11:34.554 }' 00:11:34.554 03:12:08 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:11:34.555 03:12:08 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:34.555 03:12:08 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:34.555 03:12:08 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:34.555 03:12:08 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:11:34.555 03:12:08 -- target/rpc.sh@36 -- # jsum 
'.poll_groups[].io_qpairs' 00:11:34.555 03:12:08 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:34.555 03:12:08 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:34.555 03:12:08 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:34.555 03:12:08 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:11:34.555 03:12:08 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:11:34.555 03:12:08 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:11:34.555 03:12:08 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:11:34.555 03:12:08 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:34.555 03:12:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:34.555 03:12:08 -- common/autotest_common.sh@10 -- # set +x 00:11:34.555 Malloc1 00:11:34.555 03:12:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:34.555 03:12:09 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:34.555 03:12:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:34.555 03:12:09 -- common/autotest_common.sh@10 -- # set +x 00:11:34.555 03:12:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:34.555 03:12:09 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:34.555 03:12:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:34.555 03:12:09 -- common/autotest_common.sh@10 -- # set +x 00:11:34.555 03:12:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:34.555 03:12:09 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:11:34.555 03:12:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:34.555 03:12:09 -- common/autotest_common.sh@10 -- # set +x 00:11:34.555 03:12:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:34.555 03:12:09 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:11:34.555 03:12:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:34.555 03:12:09 -- common/autotest_common.sh@10 -- # set +x 00:11:34.555 [2024-04-25 03:12:09.044202] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:34.555 03:12:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:34.555 03:12:09 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:11:34.555 03:12:09 -- common/autotest_common.sh@638 -- # local es=0 00:11:34.555 03:12:09 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:11:34.555 03:12:09 -- common/autotest_common.sh@626 -- # local arg=nvme 00:11:34.555 03:12:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:34.555 03:12:09 -- common/autotest_common.sh@630 -- # type -t nvme 00:11:34.555 03:12:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:34.555 03:12:09 -- common/autotest_common.sh@632 -- # type -P nvme 00:11:34.555 03:12:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:34.555 03:12:09 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:11:34.555 03:12:09 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:11:34.555 03:12:09 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:11:34.814 [2024-04-25 03:12:09.066697] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:11:34.814 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:34.814 could not add new controller: failed to write to nvme-fabrics device 00:11:34.814 03:12:09 -- common/autotest_common.sh@641 -- # es=1 00:11:34.814 03:12:09 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:34.814 03:12:09 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:34.814 03:12:09 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:34.814 03:12:09 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:34.814 03:12:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:34.814 03:12:09 -- common/autotest_common.sh@10 -- # set +x 00:11:34.814 03:12:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:34.814 03:12:09 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:35.382 03:12:09 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:11:35.382 03:12:09 -- common/autotest_common.sh@1184 -- # local i=0 00:11:35.382 03:12:09 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:11:35.382 03:12:09 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:11:35.382 03:12:09 -- common/autotest_common.sh@1191 -- # sleep 2 00:11:37.286 03:12:11 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:11:37.286 03:12:11 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:11:37.286 03:12:11 -- common/autotest_common.sh@1193 -- 
# grep -c SPDKISFASTANDAWESOME 00:11:37.286 03:12:11 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:11:37.286 03:12:11 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:11:37.286 03:12:11 -- common/autotest_common.sh@1194 -- # return 0 00:11:37.286 03:12:11 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:37.544 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:37.544 03:12:11 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:37.544 03:12:11 -- common/autotest_common.sh@1205 -- # local i=0 00:11:37.544 03:12:11 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:11:37.544 03:12:11 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:37.544 03:12:11 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:11:37.544 03:12:11 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:37.544 03:12:11 -- common/autotest_common.sh@1217 -- # return 0 00:11:37.544 03:12:11 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:37.544 03:12:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:37.544 03:12:11 -- common/autotest_common.sh@10 -- # set +x 00:11:37.544 03:12:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:37.544 03:12:11 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:37.544 03:12:11 -- common/autotest_common.sh@638 -- # local es=0 00:11:37.544 03:12:11 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 
10.0.0.2 -s 4420 00:11:37.544 03:12:11 -- common/autotest_common.sh@626 -- # local arg=nvme 00:11:37.544 03:12:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:37.544 03:12:11 -- common/autotest_common.sh@630 -- # type -t nvme 00:11:37.544 03:12:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:37.544 03:12:11 -- common/autotest_common.sh@632 -- # type -P nvme 00:11:37.544 03:12:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:37.544 03:12:11 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:11:37.544 03:12:11 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:11:37.544 03:12:11 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:37.544 [2024-04-25 03:12:11.849477] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:11:37.544 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:37.544 could not add new controller: failed to write to nvme-fabrics device 00:11:37.545 03:12:11 -- common/autotest_common.sh@641 -- # es=1 00:11:37.545 03:12:11 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:37.545 03:12:11 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:37.545 03:12:11 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:37.545 03:12:11 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:11:37.545 03:12:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:37.545 03:12:11 -- common/autotest_common.sh@10 -- # set +x 00:11:37.545 03:12:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:37.545 03:12:11 -- target/rpc.sh@73 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:38.114 03:12:12 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:11:38.114 03:12:12 -- common/autotest_common.sh@1184 -- # local i=0 00:11:38.114 03:12:12 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:11:38.114 03:12:12 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:11:38.114 03:12:12 -- common/autotest_common.sh@1191 -- # sleep 2 00:11:40.020 03:12:14 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:11:40.020 03:12:14 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:11:40.020 03:12:14 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:11:40.020 03:12:14 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:11:40.020 03:12:14 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:11:40.020 03:12:14 -- common/autotest_common.sh@1194 -- # return 0 00:11:40.020 03:12:14 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:40.278 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:40.278 03:12:14 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:40.278 03:12:14 -- common/autotest_common.sh@1205 -- # local i=0 00:11:40.278 03:12:14 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:11:40.278 03:12:14 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:40.278 03:12:14 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:11:40.278 03:12:14 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:40.278 03:12:14 -- common/autotest_common.sh@1217 -- # return 0 00:11:40.278 03:12:14 -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:40.278 03:12:14 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:11:40.278 03:12:14 -- common/autotest_common.sh@10 -- # set +x 00:11:40.278 03:12:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:40.278 03:12:14 -- target/rpc.sh@81 -- # seq 1 5 00:11:40.278 03:12:14 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:40.278 03:12:14 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:40.278 03:12:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:40.278 03:12:14 -- common/autotest_common.sh@10 -- # set +x 00:11:40.278 03:12:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:40.278 03:12:14 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:40.278 03:12:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:40.278 03:12:14 -- common/autotest_common.sh@10 -- # set +x 00:11:40.278 [2024-04-25 03:12:14.643012] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:40.278 03:12:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:40.278 03:12:14 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:40.278 03:12:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:40.278 03:12:14 -- common/autotest_common.sh@10 -- # set +x 00:11:40.278 03:12:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:40.278 03:12:14 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:40.278 03:12:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:40.278 03:12:14 -- common/autotest_common.sh@10 -- # set +x 00:11:40.278 03:12:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:40.278 03:12:14 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 
10.0.0.2 -s 4420 00:11:40.845 03:12:15 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:40.845 03:12:15 -- common/autotest_common.sh@1184 -- # local i=0 00:11:40.845 03:12:15 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:11:40.845 03:12:15 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:11:40.845 03:12:15 -- common/autotest_common.sh@1191 -- # sleep 2 00:11:43.385 03:12:17 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:11:43.385 03:12:17 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:11:43.385 03:12:17 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:11:43.385 03:12:17 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:11:43.385 03:12:17 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:11:43.385 03:12:17 -- common/autotest_common.sh@1194 -- # return 0 00:11:43.385 03:12:17 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:43.385 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:43.385 03:12:17 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:43.385 03:12:17 -- common/autotest_common.sh@1205 -- # local i=0 00:11:43.385 03:12:17 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:11:43.385 03:12:17 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:43.385 03:12:17 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:11:43.385 03:12:17 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:43.385 03:12:17 -- common/autotest_common.sh@1217 -- # return 0 00:11:43.385 03:12:17 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:43.385 03:12:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:43.385 03:12:17 -- common/autotest_common.sh@10 -- # set +x 00:11:43.385 03:12:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
00:11:43.385 03:12:17 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:43.385 03:12:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:43.385 03:12:17 -- common/autotest_common.sh@10 -- # set +x 00:11:43.385 03:12:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:43.385 03:12:17 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:43.385 03:12:17 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:43.385 03:12:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:43.385 03:12:17 -- common/autotest_common.sh@10 -- # set +x 00:11:43.385 03:12:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:43.385 03:12:17 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:43.385 03:12:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:43.385 03:12:17 -- common/autotest_common.sh@10 -- # set +x 00:11:43.385 [2024-04-25 03:12:17.375510] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:43.385 03:12:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:43.385 03:12:17 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:43.385 03:12:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:43.385 03:12:17 -- common/autotest_common.sh@10 -- # set +x 00:11:43.385 03:12:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:43.385 03:12:17 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:43.385 03:12:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:43.385 03:12:17 -- common/autotest_common.sh@10 -- # set +x 00:11:43.385 03:12:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:43.385 03:12:17 -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:43.646 03:12:18 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:43.646 03:12:18 -- common/autotest_common.sh@1184 -- # local i=0 00:11:43.646 03:12:18 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:11:43.646 03:12:18 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:11:43.646 03:12:18 -- common/autotest_common.sh@1191 -- # sleep 2 00:11:46.184 03:12:20 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:11:46.184 03:12:20 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:11:46.184 03:12:20 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:11:46.184 03:12:20 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:11:46.184 03:12:20 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:11:46.184 03:12:20 -- common/autotest_common.sh@1194 -- # return 0 00:11:46.184 03:12:20 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:46.184 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:46.184 03:12:20 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:46.184 03:12:20 -- common/autotest_common.sh@1205 -- # local i=0 00:11:46.184 03:12:20 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:11:46.184 03:12:20 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:46.184 03:12:20 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:11:46.184 03:12:20 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:46.184 03:12:20 -- common/autotest_common.sh@1217 -- # return 0 00:11:46.184 03:12:20 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:46.184 03:12:20 -- common/autotest_common.sh@549 
-- # xtrace_disable 00:11:46.184 03:12:20 -- common/autotest_common.sh@10 -- # set +x 00:11:46.184 03:12:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:46.184 03:12:20 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:46.184 03:12:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:46.184 03:12:20 -- common/autotest_common.sh@10 -- # set +x 00:11:46.184 03:12:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:46.184 03:12:20 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:46.184 03:12:20 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:46.184 03:12:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:46.184 03:12:20 -- common/autotest_common.sh@10 -- # set +x 00:11:46.184 03:12:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:46.184 03:12:20 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:46.184 03:12:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:46.184 03:12:20 -- common/autotest_common.sh@10 -- # set +x 00:11:46.184 [2024-04-25 03:12:20.190344] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:46.184 03:12:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:46.184 03:12:20 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:46.184 03:12:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:46.184 03:12:20 -- common/autotest_common.sh@10 -- # set +x 00:11:46.184 03:12:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:46.184 03:12:20 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:46.184 03:12:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:46.184 03:12:20 -- common/autotest_common.sh@10 -- # set +x 00:11:46.184 03:12:20 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:46.185 03:12:20 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:46.445 03:12:20 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:46.445 03:12:20 -- common/autotest_common.sh@1184 -- # local i=0 00:11:46.445 03:12:20 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:11:46.445 03:12:20 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:11:46.445 03:12:20 -- common/autotest_common.sh@1191 -- # sleep 2 00:11:48.354 03:12:22 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:11:48.354 03:12:22 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:11:48.354 03:12:22 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:11:48.354 03:12:22 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:11:48.354 03:12:22 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:11:48.354 03:12:22 -- common/autotest_common.sh@1194 -- # return 0 00:11:48.354 03:12:22 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:48.613 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:48.613 03:12:22 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:48.613 03:12:22 -- common/autotest_common.sh@1205 -- # local i=0 00:11:48.613 03:12:22 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:11:48.613 03:12:22 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:48.613 03:12:22 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:11:48.613 03:12:22 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:48.613 03:12:22 -- common/autotest_common.sh@1217 -- # return 0 00:11:48.613 03:12:22 -- target/rpc.sh@93 -- # rpc_cmd 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:48.613 03:12:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:48.613 03:12:22 -- common/autotest_common.sh@10 -- # set +x 00:11:48.613 03:12:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:48.613 03:12:22 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:48.613 03:12:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:48.613 03:12:22 -- common/autotest_common.sh@10 -- # set +x 00:11:48.613 03:12:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:48.613 03:12:22 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:48.613 03:12:22 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:48.613 03:12:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:48.613 03:12:22 -- common/autotest_common.sh@10 -- # set +x 00:11:48.613 03:12:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:48.613 03:12:22 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:48.613 03:12:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:48.613 03:12:22 -- common/autotest_common.sh@10 -- # set +x 00:11:48.613 [2024-04-25 03:12:22.983215] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:48.613 03:12:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:48.613 03:12:22 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:48.613 03:12:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:48.613 03:12:22 -- common/autotest_common.sh@10 -- # set +x 00:11:48.613 03:12:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:48.613 03:12:22 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:48.613 03:12:22 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:11:48.613 03:12:22 -- common/autotest_common.sh@10 -- # set +x 00:11:48.613 03:12:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:48.613 03:12:23 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:49.547 03:12:23 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:49.547 03:12:23 -- common/autotest_common.sh@1184 -- # local i=0 00:11:49.547 03:12:23 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:11:49.547 03:12:23 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:11:49.547 03:12:23 -- common/autotest_common.sh@1191 -- # sleep 2 00:11:51.452 03:12:25 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:11:51.452 03:12:25 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:11:51.452 03:12:25 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:11:51.452 03:12:25 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:11:51.452 03:12:25 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:11:51.452 03:12:25 -- common/autotest_common.sh@1194 -- # return 0 00:11:51.452 03:12:25 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:51.452 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.452 03:12:25 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:51.452 03:12:25 -- common/autotest_common.sh@1205 -- # local i=0 00:11:51.452 03:12:25 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:11:51.452 03:12:25 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:51.452 03:12:25 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:11:51.452 03:12:25 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:51.452 
03:12:25 -- common/autotest_common.sh@1217 -- # return 0 00:11:51.452 03:12:25 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:51.452 03:12:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:51.452 03:12:25 -- common/autotest_common.sh@10 -- # set +x 00:11:51.452 03:12:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:51.452 03:12:25 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:51.452 03:12:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:51.452 03:12:25 -- common/autotest_common.sh@10 -- # set +x 00:11:51.452 03:12:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:51.452 03:12:25 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:51.452 03:12:25 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:51.452 03:12:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:51.452 03:12:25 -- common/autotest_common.sh@10 -- # set +x 00:11:51.452 03:12:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:51.452 03:12:25 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:51.452 03:12:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:51.452 03:12:25 -- common/autotest_common.sh@10 -- # set +x 00:11:51.452 [2024-04-25 03:12:25.842371] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:51.452 03:12:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:51.452 03:12:25 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:51.452 03:12:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:51.452 03:12:25 -- common/autotest_common.sh@10 -- # set +x 00:11:51.452 03:12:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:51.452 03:12:25 -- target/rpc.sh@85 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:51.452 03:12:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:51.452 03:12:25 -- common/autotest_common.sh@10 -- # set +x 00:11:51.452 03:12:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:51.452 03:12:25 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:52.391 03:12:26 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:52.391 03:12:26 -- common/autotest_common.sh@1184 -- # local i=0 00:11:52.391 03:12:26 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:11:52.391 03:12:26 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:11:52.391 03:12:26 -- common/autotest_common.sh@1191 -- # sleep 2 00:11:54.299 03:12:28 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:11:54.299 03:12:28 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:11:54.299 03:12:28 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:11:54.299 03:12:28 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:11:54.299 03:12:28 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:11:54.299 03:12:28 -- common/autotest_common.sh@1194 -- # return 0 00:11:54.299 03:12:28 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:54.299 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:54.299 03:12:28 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:54.299 03:12:28 -- common/autotest_common.sh@1205 -- # local i=0 00:11:54.299 03:12:28 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:11:54.299 03:12:28 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:54.299 03:12:28 -- common/autotest_common.sh@1213 -- # lsblk -l -o 
NAME,SERIAL 00:11:54.299 03:12:28 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:54.299 03:12:28 -- common/autotest_common.sh@1217 -- # return 0 00:11:54.299 03:12:28 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:54.299 03:12:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:54.299 03:12:28 -- common/autotest_common.sh@10 -- # set +x 00:11:54.299 03:12:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:54.299 03:12:28 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:54.299 03:12:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:54.299 03:12:28 -- common/autotest_common.sh@10 -- # set +x 00:11:54.299 03:12:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:54.299 03:12:28 -- target/rpc.sh@99 -- # seq 1 5 00:11:54.299 03:12:28 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:54.299 03:12:28 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:54.299 03:12:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:54.299 03:12:28 -- common/autotest_common.sh@10 -- # set +x 00:11:54.299 03:12:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:54.299 03:12:28 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:54.299 03:12:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:54.299 03:12:28 -- common/autotest_common.sh@10 -- # set +x 00:11:54.299 [2024-04-25 03:12:28.663562] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:54.299 03:12:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:54.299 03:12:28 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:54.299 03:12:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:54.299 03:12:28 -- 
common/autotest_common.sh@10 -- # set +x 00:11:54.299 03:12:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:54.299 03:12:28 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:54.299 03:12:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:54.299 03:12:28 -- common/autotest_common.sh@10 -- # set +x 00:11:54.299 03:12:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:54.299 03:12:28 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:54.299 03:12:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:54.299 03:12:28 -- common/autotest_common.sh@10 -- # set +x 00:11:54.299 03:12:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:54.299 03:12:28 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:54.299 03:12:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:54.299 03:12:28 -- common/autotest_common.sh@10 -- # set +x 00:11:54.299 03:12:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:54.299 03:12:28 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:54.299 03:12:28 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:54.299 03:12:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:54.299 03:12:28 -- common/autotest_common.sh@10 -- # set +x 00:11:54.299 03:12:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:54.299 03:12:28 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:54.299 03:12:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:54.299 03:12:28 -- common/autotest_common.sh@10 -- # set +x 00:11:54.299 [2024-04-25 03:12:28.711667] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:54.299 03:12:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:54.299 
03:12:28 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:54.299 03:12:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:54.299 03:12:28 -- common/autotest_common.sh@10 -- # set +x 00:11:54.299 03:12:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:54.299 03:12:28 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:54.299 03:12:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:54.299 03:12:28 -- common/autotest_common.sh@10 -- # set +x 00:11:54.299 03:12:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:54.299 03:12:28 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:54.299 03:12:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:54.299 03:12:28 -- common/autotest_common.sh@10 -- # set +x 00:11:54.299 03:12:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:54.299 03:12:28 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:54.299 03:12:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:54.299 03:12:28 -- common/autotest_common.sh@10 -- # set +x 00:11:54.299 03:12:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:54.299 03:12:28 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:54.299 03:12:28 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:54.299 03:12:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:54.299 03:12:28 -- common/autotest_common.sh@10 -- # set +x 00:11:54.299 03:12:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:54.299 03:12:28 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:54.299 03:12:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:54.299 03:12:28 -- common/autotest_common.sh@10 -- # set +x 00:11:54.299 
[2024-04-25 03:12:28.759853] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:54.299 03:12:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:54.299 03:12:28 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:54.299 03:12:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:54.299 03:12:28 -- common/autotest_common.sh@10 -- # set +x 00:11:54.299 03:12:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:54.299 03:12:28 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:54.299 03:12:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:54.299 03:12:28 -- common/autotest_common.sh@10 -- # set +x 00:11:54.299 03:12:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:54.299 03:12:28 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:54.299 03:12:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:54.299 03:12:28 -- common/autotest_common.sh@10 -- # set +x 00:11:54.299 03:12:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:54.299 03:12:28 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:54.299 03:12:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:54.299 03:12:28 -- common/autotest_common.sh@10 -- # set +x 00:11:54.299 03:12:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:54.299 03:12:28 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:54.299 03:12:28 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:54.299 03:12:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:54.299 03:12:28 -- common/autotest_common.sh@10 -- # set +x 00:11:54.558 03:12:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:54.558 03:12:28 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:54.558 03:12:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:54.558 03:12:28 -- common/autotest_common.sh@10 -- # set +x 00:11:54.558 [2024-04-25 03:12:28.808019] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:54.558 03:12:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:54.558 03:12:28 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:54.558 03:12:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:54.558 03:12:28 -- common/autotest_common.sh@10 -- # set +x 00:11:54.558 03:12:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:54.558 03:12:28 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:54.558 03:12:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:54.558 03:12:28 -- common/autotest_common.sh@10 -- # set +x 00:11:54.558 03:12:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:54.558 03:12:28 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:54.558 03:12:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:54.558 03:12:28 -- common/autotest_common.sh@10 -- # set +x 00:11:54.558 03:12:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:54.558 03:12:28 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:54.558 03:12:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:54.558 03:12:28 -- common/autotest_common.sh@10 -- # set +x 00:11:54.558 03:12:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:54.558 03:12:28 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:54.558 03:12:28 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:54.558 03:12:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:54.558 03:12:28 
-- common/autotest_common.sh@10 -- # set +x 00:11:54.558 03:12:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:54.558 03:12:28 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:54.558 03:12:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:54.558 03:12:28 -- common/autotest_common.sh@10 -- # set +x 00:11:54.558 [2024-04-25 03:12:28.856201] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:54.558 03:12:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:54.558 03:12:28 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:54.558 03:12:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:54.558 03:12:28 -- common/autotest_common.sh@10 -- # set +x 00:11:54.558 03:12:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:54.558 03:12:28 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:54.558 03:12:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:54.558 03:12:28 -- common/autotest_common.sh@10 -- # set +x 00:11:54.558 03:12:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:54.558 03:12:28 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:54.558 03:12:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:54.558 03:12:28 -- common/autotest_common.sh@10 -- # set +x 00:11:54.558 03:12:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:54.558 03:12:28 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:54.558 03:12:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:54.558 03:12:28 -- common/autotest_common.sh@10 -- # set +x 00:11:54.558 03:12:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:54.558 03:12:28 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:11:54.558 03:12:28 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:11:54.558 03:12:28 -- common/autotest_common.sh@10 -- # set +x 00:11:54.558 03:12:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:54.558 03:12:28 -- target/rpc.sh@110 -- # stats='{ 00:11:54.558 "tick_rate": 2700000000, 00:11:54.558 "poll_groups": [ 00:11:54.558 { 00:11:54.558 "name": "nvmf_tgt_poll_group_0", 00:11:54.558 "admin_qpairs": 2, 00:11:54.558 "io_qpairs": 84, 00:11:54.558 "current_admin_qpairs": 0, 00:11:54.558 "current_io_qpairs": 0, 00:11:54.558 "pending_bdev_io": 0, 00:11:54.558 "completed_nvme_io": 88, 00:11:54.558 "transports": [ 00:11:54.558 { 00:11:54.558 "trtype": "TCP" 00:11:54.558 } 00:11:54.558 ] 00:11:54.558 }, 00:11:54.558 { 00:11:54.558 "name": "nvmf_tgt_poll_group_1", 00:11:54.558 "admin_qpairs": 2, 00:11:54.558 "io_qpairs": 84, 00:11:54.558 "current_admin_qpairs": 0, 00:11:54.558 "current_io_qpairs": 0, 00:11:54.558 "pending_bdev_io": 0, 00:11:54.558 "completed_nvme_io": 183, 00:11:54.558 "transports": [ 00:11:54.558 { 00:11:54.558 "trtype": "TCP" 00:11:54.558 } 00:11:54.558 ] 00:11:54.558 }, 00:11:54.558 { 00:11:54.558 "name": "nvmf_tgt_poll_group_2", 00:11:54.558 "admin_qpairs": 1, 00:11:54.558 "io_qpairs": 84, 00:11:54.558 "current_admin_qpairs": 0, 00:11:54.558 "current_io_qpairs": 0, 00:11:54.558 "pending_bdev_io": 0, 00:11:54.558 "completed_nvme_io": 183, 00:11:54.558 "transports": [ 00:11:54.558 { 00:11:54.558 "trtype": "TCP" 00:11:54.558 } 00:11:54.558 ] 00:11:54.558 }, 00:11:54.558 { 00:11:54.559 "name": "nvmf_tgt_poll_group_3", 00:11:54.559 "admin_qpairs": 2, 00:11:54.559 "io_qpairs": 84, 00:11:54.559 "current_admin_qpairs": 0, 00:11:54.559 "current_io_qpairs": 0, 00:11:54.559 "pending_bdev_io": 0, 00:11:54.559 "completed_nvme_io": 232, 00:11:54.559 "transports": [ 00:11:54.559 { 00:11:54.559 "trtype": "TCP" 00:11:54.559 } 00:11:54.559 ] 00:11:54.559 } 00:11:54.559 ] 00:11:54.559 }' 00:11:54.559 03:12:28 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 
00:11:54.559 03:12:28 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:54.559 03:12:28 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:54.559 03:12:28 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:54.559 03:12:28 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:11:54.559 03:12:28 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:11:54.559 03:12:28 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:54.559 03:12:28 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:54.559 03:12:28 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:54.559 03:12:28 -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:11:54.559 03:12:28 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:11:54.559 03:12:28 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:11:54.559 03:12:28 -- target/rpc.sh@123 -- # nvmftestfini 00:11:54.559 03:12:28 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:54.559 03:12:28 -- nvmf/common.sh@117 -- # sync 00:11:54.559 03:12:28 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:54.559 03:12:28 -- nvmf/common.sh@120 -- # set +e 00:11:54.559 03:12:28 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:54.559 03:12:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:54.559 rmmod nvme_tcp 00:11:54.559 rmmod nvme_fabrics 00:11:54.559 rmmod nvme_keyring 00:11:54.559 03:12:29 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:54.559 03:12:29 -- nvmf/common.sh@124 -- # set -e 00:11:54.559 03:12:29 -- nvmf/common.sh@125 -- # return 0 00:11:54.559 03:12:29 -- nvmf/common.sh@478 -- # '[' -n 1437991 ']' 00:11:54.559 03:12:29 -- nvmf/common.sh@479 -- # killprocess 1437991 00:11:54.559 03:12:29 -- common/autotest_common.sh@936 -- # '[' -z 1437991 ']' 00:11:54.559 03:12:29 -- common/autotest_common.sh@940 -- # kill -0 1437991 00:11:54.559 03:12:29 -- common/autotest_common.sh@941 -- # uname 00:11:54.559 03:12:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:54.559 
03:12:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1437991 00:11:54.819 03:12:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:54.819 03:12:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:54.819 03:12:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1437991' 00:11:54.819 killing process with pid 1437991 00:11:54.819 03:12:29 -- common/autotest_common.sh@955 -- # kill 1437991 00:11:54.819 03:12:29 -- common/autotest_common.sh@960 -- # wait 1437991 00:11:55.080 03:12:29 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:55.080 03:12:29 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:55.080 03:12:29 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:55.080 03:12:29 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:55.080 03:12:29 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:55.080 03:12:29 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:55.080 03:12:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:55.080 03:12:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:56.992 03:12:31 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:56.992 00:11:56.992 real 0m25.254s 00:11:56.992 user 1m21.916s 00:11:56.992 sys 0m4.106s 00:11:56.992 03:12:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:56.992 03:12:31 -- common/autotest_common.sh@10 -- # set +x 00:11:56.992 ************************************ 00:11:56.992 END TEST nvmf_rpc 00:11:56.992 ************************************ 00:11:56.992 03:12:31 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:56.992 03:12:31 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:56.992 03:12:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:56.992 03:12:31 -- common/autotest_common.sh@10 -- # set +x 00:11:57.252 
************************************ 00:11:57.252 START TEST nvmf_invalid 00:11:57.252 ************************************ 00:11:57.253 03:12:31 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:57.253 * Looking for test storage... 00:11:57.253 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:57.253 03:12:31 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:57.253 03:12:31 -- nvmf/common.sh@7 -- # uname -s 00:11:57.253 03:12:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:57.253 03:12:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:57.253 03:12:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:57.253 03:12:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:57.253 03:12:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:57.253 03:12:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:57.253 03:12:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:57.253 03:12:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:57.253 03:12:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:57.253 03:12:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:57.253 03:12:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:57.253 03:12:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:57.253 03:12:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:57.253 03:12:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:57.253 03:12:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:57.253 03:12:31 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:57.253 03:12:31 -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:57.253 03:12:31 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:57.253 03:12:31 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:57.253 03:12:31 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:57.253 03:12:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.253 03:12:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.253 03:12:31 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.253 03:12:31 -- paths/export.sh@5 -- # export PATH 00:11:57.253 03:12:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.253 03:12:31 -- nvmf/common.sh@47 -- # : 0 00:11:57.253 03:12:31 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:57.253 03:12:31 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:57.253 03:12:31 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:57.253 03:12:31 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:57.253 03:12:31 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:57.253 03:12:31 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:57.253 03:12:31 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:57.253 03:12:31 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:57.253 03:12:31 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:57.253 03:12:31 -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:57.253 03:12:31 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:11:57.253 03:12:31 -- target/invalid.sh@14 -- # target=foobar 00:11:57.253 03:12:31 -- target/invalid.sh@16 -- # RANDOM=0 00:11:57.253 03:12:31 -- target/invalid.sh@34 -- # nvmftestinit 00:11:57.253 03:12:31 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:11:57.253 03:12:31 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:57.253 03:12:31 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:57.253 03:12:31 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:57.253 03:12:31 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:57.253 03:12:31 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:57.253 03:12:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:57.253 03:12:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:57.253 03:12:31 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:57.253 03:12:31 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:57.253 03:12:31 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:57.253 03:12:31 -- common/autotest_common.sh@10 -- # set +x 00:11:59.166 03:12:33 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:59.166 03:12:33 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:59.166 03:12:33 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:59.166 03:12:33 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:59.166 03:12:33 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:59.166 03:12:33 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:59.166 03:12:33 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:59.166 03:12:33 -- nvmf/common.sh@295 -- # net_devs=() 00:11:59.166 03:12:33 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:59.166 03:12:33 -- nvmf/common.sh@296 -- # e810=() 00:11:59.166 03:12:33 -- nvmf/common.sh@296 -- # local -ga e810 00:11:59.166 
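Comparisons in the trace such as `[[ tcp == \t\c\p ]]` and `[[ phy != virt ]]` look odd only because bash xtrace prints every character of the right-hand side backslash-escaped; that escaping also forces a literal comparison, since an unquoted right-hand side of `[[ == ]]` is otherwise treated as a glob pattern. A minimal reproduction:

```shell
# Under [[ ]], an unquoted RHS is a glob; backslash-escaping (or quoting)
# each character makes the comparison literal, which is what the escaped
# forms in the xtrace output denote.
transport=tcp
if [[ $transport == \t\c\p ]]; then
  echo "literal match"
fi
```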
03:12:33 -- nvmf/common.sh@297 -- # x722=() 00:11:59.166 03:12:33 -- nvmf/common.sh@297 -- # local -ga x722 00:11:59.166 03:12:33 -- nvmf/common.sh@298 -- # mlx=() 00:11:59.166 03:12:33 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:59.166 03:12:33 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:59.166 03:12:33 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:59.166 03:12:33 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:59.166 03:12:33 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:59.166 03:12:33 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:59.166 03:12:33 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:59.166 03:12:33 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:59.166 03:12:33 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:59.166 03:12:33 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:59.166 03:12:33 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:59.166 03:12:33 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:59.166 03:12:33 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:59.166 03:12:33 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:59.166 03:12:33 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:59.166 03:12:33 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:59.166 03:12:33 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:59.166 03:12:33 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:59.166 03:12:33 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:59.166 03:12:33 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:59.167 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:59.167 03:12:33 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:59.167 03:12:33 -- nvmf/common.sh@346 -- # [[ 
ice == unbound ]] 00:11:59.167 03:12:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:59.167 03:12:33 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:59.167 03:12:33 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:59.167 03:12:33 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:59.167 03:12:33 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:59.167 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:59.167 03:12:33 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:59.167 03:12:33 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:59.167 03:12:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:59.167 03:12:33 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:59.167 03:12:33 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:59.167 03:12:33 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:59.167 03:12:33 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:59.167 03:12:33 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:59.167 03:12:33 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:59.167 03:12:33 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:59.167 03:12:33 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:59.167 03:12:33 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:59.167 03:12:33 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:59.167 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:59.167 03:12:33 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:59.167 03:12:33 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:59.167 03:12:33 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:59.167 03:12:33 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:59.167 03:12:33 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:59.167 03:12:33 -- nvmf/common.sh@389 -- # echo 'Found net devices under 
0000:0a:00.1: cvl_0_1' 00:11:59.167 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:59.167 03:12:33 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:59.167 03:12:33 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:59.167 03:12:33 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:59.167 03:12:33 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:59.167 03:12:33 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:11:59.167 03:12:33 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:11:59.167 03:12:33 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:59.167 03:12:33 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:59.167 03:12:33 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:59.167 03:12:33 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:59.167 03:12:33 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:59.167 03:12:33 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:59.167 03:12:33 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:59.167 03:12:33 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:59.167 03:12:33 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:59.167 03:12:33 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:59.167 03:12:33 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:59.167 03:12:33 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:59.167 03:12:33 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:59.167 03:12:33 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:59.167 03:12:33 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:59.167 03:12:33 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:59.167 03:12:33 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:59.426 03:12:33 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 
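The device scan above turns sysfs paths into bare interface names in one step with the array expansion at nvmf/common.sh@388, `"${pci_net_devs[@]##*/}"`, which strips the longest `*/` prefix from every element. Reproducing it with an illustrative path (the PCI address matches the trace; the sysfs layout is the standard one):

```shell
# "${arr[@]##*/}" applies the longest-prefix strip to each array
# element, leaving just the basename (the netdev name under .../net/).
pci_net_devs=(/sys/bus/pci/devices/0000:0a:00.0/net/cvl_0_0)
pci_net_devs=("${pci_net_devs[@]##*/}")
echo "${pci_net_devs[0]}"    # prints cvl_0_0
```

The preceding glob, `pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)`, is what populated the array with full paths, so the two lines together map a PCI address to its kernel net device names.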
00:11:59.426 03:12:33 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:59.426 03:12:33 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:59.426 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:59.426 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.138 ms 00:11:59.426 00:11:59.426 --- 10.0.0.2 ping statistics --- 00:11:59.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:59.426 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:11:59.426 03:12:33 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:59.426 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:59.426 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:11:59.426 00:11:59.426 --- 10.0.0.1 ping statistics --- 00:11:59.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:59.426 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:11:59.426 03:12:33 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:59.426 03:12:33 -- nvmf/common.sh@411 -- # return 0 00:11:59.426 03:12:33 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:59.426 03:12:33 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:59.426 03:12:33 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:59.426 03:12:33 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:59.426 03:12:33 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:59.426 03:12:33 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:59.426 03:12:33 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:59.426 03:12:33 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:11:59.426 03:12:33 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:59.426 03:12:33 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:59.426 03:12:33 -- common/autotest_common.sh@10 -- # set +x 00:11:59.426 03:12:33 -- nvmf/common.sh@470 -- # nvmfpid=1442750 00:11:59.426 03:12:33 -- nvmf/common.sh@471 -- # 
waitforlisten 1442750 00:11:59.426 03:12:33 -- common/autotest_common.sh@817 -- # '[' -z 1442750 ']' 00:11:59.426 03:12:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:59.426 03:12:33 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:59.426 03:12:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:59.426 03:12:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:59.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:59.426 03:12:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:59.426 03:12:33 -- common/autotest_common.sh@10 -- # set +x 00:11:59.426 [2024-04-25 03:12:33.796363] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:11:59.426 [2024-04-25 03:12:33.796461] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:59.426 EAL: No free 2048 kB hugepages reported on node 1 00:11:59.426 [2024-04-25 03:12:33.866559] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:59.685 [2024-04-25 03:12:33.986154] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:59.685 [2024-04-25 03:12:33.986219] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:59.685 [2024-04-25 03:12:33.986237] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:59.685 [2024-04-25 03:12:33.986250] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:11:59.685 [2024-04-25 03:12:33.986262] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:59.685 [2024-04-25 03:12:33.986355] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:59.685 [2024-04-25 03:12:33.986412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:59.685 [2024-04-25 03:12:33.986470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:59.685 [2024-04-25 03:12:33.986467] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:00.275 03:12:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:00.275 03:12:34 -- common/autotest_common.sh@850 -- # return 0 00:12:00.275 03:12:34 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:00.275 03:12:34 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:00.275 03:12:34 -- common/autotest_common.sh@10 -- # set +x 00:12:00.275 03:12:34 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:00.275 03:12:34 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:00.275 03:12:34 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode31758 00:12:00.534 [2024-04-25 03:12:34.966314] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:00.534 03:12:34 -- target/invalid.sh@40 -- # out='request: 00:12:00.534 { 00:12:00.534 "nqn": "nqn.2016-06.io.spdk:cnode31758", 00:12:00.534 "tgt_name": "foobar", 00:12:00.534 "method": "nvmf_create_subsystem", 00:12:00.534 "req_id": 1 00:12:00.534 } 00:12:00.534 Got JSON-RPC error response 00:12:00.534 response: 00:12:00.534 { 00:12:00.534 "code": -32603, 00:12:00.534 "message": "Unable to find target foobar" 00:12:00.534 }' 00:12:00.534 03:12:34 -- target/invalid.sh@41 -- # [[ request: 
00:12:00.534 { 00:12:00.534 "nqn": "nqn.2016-06.io.spdk:cnode31758", 00:12:00.534 "tgt_name": "foobar", 00:12:00.534 "method": "nvmf_create_subsystem", 00:12:00.534 "req_id": 1 00:12:00.534 } 00:12:00.534 Got JSON-RPC error response 00:12:00.534 response: 00:12:00.534 { 00:12:00.534 "code": -32603, 00:12:00.534 "message": "Unable to find target foobar" 00:12:00.534 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:00.534 03:12:34 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:00.534 03:12:34 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode13375 00:12:00.792 [2024-04-25 03:12:35.211189] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13375: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:00.792 03:12:35 -- target/invalid.sh@45 -- # out='request: 00:12:00.792 { 00:12:00.792 "nqn": "nqn.2016-06.io.spdk:cnode13375", 00:12:00.792 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:00.792 "method": "nvmf_create_subsystem", 00:12:00.792 "req_id": 1 00:12:00.792 } 00:12:00.792 Got JSON-RPC error response 00:12:00.792 response: 00:12:00.792 { 00:12:00.792 "code": -32602, 00:12:00.792 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:00.792 }' 00:12:00.792 03:12:35 -- target/invalid.sh@46 -- # [[ request: 00:12:00.792 { 00:12:00.792 "nqn": "nqn.2016-06.io.spdk:cnode13375", 00:12:00.792 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:00.792 "method": "nvmf_create_subsystem", 00:12:00.792 "req_id": 1 00:12:00.792 } 00:12:00.792 Got JSON-RPC error response 00:12:00.792 response: 00:12:00.792 { 00:12:00.792 "code": -32602, 00:12:00.792 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:00.792 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:00.792 03:12:35 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:00.792 03:12:35 -- target/invalid.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode5462 00:12:01.051 [2024-04-25 03:12:35.455989] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5462: invalid model number 'SPDK_Controller' 00:12:01.051 03:12:35 -- target/invalid.sh@50 -- # out='request: 00:12:01.051 { 00:12:01.051 "nqn": "nqn.2016-06.io.spdk:cnode5462", 00:12:01.051 "model_number": "SPDK_Controller\u001f", 00:12:01.051 "method": "nvmf_create_subsystem", 00:12:01.051 "req_id": 1 00:12:01.051 } 00:12:01.051 Got JSON-RPC error response 00:12:01.051 response: 00:12:01.051 { 00:12:01.051 "code": -32602, 00:12:01.051 "message": "Invalid MN SPDK_Controller\u001f" 00:12:01.051 }' 00:12:01.051 03:12:35 -- target/invalid.sh@51 -- # [[ request: 00:12:01.051 { 00:12:01.051 "nqn": "nqn.2016-06.io.spdk:cnode5462", 00:12:01.051 "model_number": "SPDK_Controller\u001f", 00:12:01.051 "method": "nvmf_create_subsystem", 00:12:01.051 "req_id": 1 00:12:01.051 } 00:12:01.051 Got JSON-RPC error response 00:12:01.051 response: 00:12:01.051 { 00:12:01.051 "code": -32602, 00:12:01.051 "message": "Invalid MN SPDK_Controller\u001f" 00:12:01.051 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:01.051 03:12:35 -- target/invalid.sh@54 -- # gen_random_s 21 00:12:01.051 03:12:35 -- target/invalid.sh@19 -- # local length=21 ll 00:12:01.051 03:12:35 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:01.051 03:12:35 -- 
target/invalid.sh@21 -- # local chars 00:12:01.051 03:12:35 -- target/invalid.sh@22 -- # local string 00:12:01.051 03:12:35 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:01.051 03:12:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.051 03:12:35 -- target/invalid.sh@25 -- # printf %x 75 00:12:01.051 03:12:35 -- target/invalid.sh@25 -- # echo -e '\x4b' 00:12:01.051 03:12:35 -- target/invalid.sh@25 -- # string+=K 00:12:01.051 03:12:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.051 03:12:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.051 03:12:35 -- target/invalid.sh@25 -- # printf %x 51 00:12:01.051 03:12:35 -- target/invalid.sh@25 -- # echo -e '\x33' 00:12:01.051 03:12:35 -- target/invalid.sh@25 -- # string+=3 00:12:01.051 03:12:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.051 03:12:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.051 03:12:35 -- target/invalid.sh@25 -- # printf %x 105 00:12:01.051 03:12:35 -- target/invalid.sh@25 -- # echo -e '\x69' 00:12:01.051 03:12:35 -- target/invalid.sh@25 -- # string+=i 00:12:01.051 03:12:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.051 03:12:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.051 03:12:35 -- target/invalid.sh@25 -- # printf %x 126 00:12:01.051 03:12:35 -- target/invalid.sh@25 -- # echo -e '\x7e' 00:12:01.051 03:12:35 -- target/invalid.sh@25 -- # string+='~' 00:12:01.051 03:12:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.051 03:12:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.051 03:12:35 -- target/invalid.sh@25 -- # printf %x 105 00:12:01.051 03:12:35 -- target/invalid.sh@25 -- # echo -e '\x69' 00:12:01.051 03:12:35 -- target/invalid.sh@25 -- # string+=i 00:12:01.051 03:12:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.051 03:12:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.051 03:12:35 -- target/invalid.sh@25 -- # printf %x 89 00:12:01.051 03:12:35 -- target/invalid.sh@25 -- # echo -e '\x59' 00:12:01.051 
03:12:35 -- target/invalid.sh@25 -- # string+=Y 00:12:01.051 03:12:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.051 03:12:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.051 03:12:35 -- target/invalid.sh@25 -- # printf %x 53 00:12:01.051 03:12:35 -- target/invalid.sh@25 -- # echo -e '\x35' 00:12:01.051 03:12:35 -- target/invalid.sh@25 -- # string+=5 00:12:01.051 03:12:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.051 03:12:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.051 03:12:35 -- target/invalid.sh@25 -- # printf %x 57 00:12:01.051 03:12:35 -- target/invalid.sh@25 -- # echo -e '\x39' 00:12:01.051 03:12:35 -- target/invalid.sh@25 -- # string+=9 00:12:01.051 03:12:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.051 03:12:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.051 03:12:35 -- target/invalid.sh@25 -- # printf %x 81 00:12:01.051 03:12:35 -- target/invalid.sh@25 -- # echo -e '\x51' 00:12:01.051 03:12:35 -- target/invalid.sh@25 -- # string+=Q 00:12:01.051 03:12:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.051 03:12:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.051 03:12:35 -- target/invalid.sh@25 -- # printf %x 78 00:12:01.051 03:12:35 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:12:01.051 03:12:35 -- target/invalid.sh@25 -- # string+=N 00:12:01.051 03:12:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.051 03:12:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.051 03:12:35 -- target/invalid.sh@25 -- # printf %x 89 00:12:01.051 03:12:35 -- target/invalid.sh@25 -- # echo -e '\x59' 00:12:01.051 03:12:35 -- target/invalid.sh@25 -- # string+=Y 00:12:01.051 03:12:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.051 03:12:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.051 03:12:35 -- target/invalid.sh@25 -- # printf %x 64 00:12:01.051 03:12:35 -- target/invalid.sh@25 -- # echo -e '\x40' 00:12:01.051 03:12:35 -- target/invalid.sh@25 -- # string+=@ 00:12:01.051 
03:12:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.051 03:12:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.051 03:12:35 -- target/invalid.sh@25 -- # printf %x 66 00:12:01.051 03:12:35 -- target/invalid.sh@25 -- # echo -e '\x42' 00:12:01.051 03:12:35 -- target/invalid.sh@25 -- # string+=B 00:12:01.051 03:12:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.051 03:12:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.051 03:12:35 -- target/invalid.sh@25 -- # printf %x 80 00:12:01.051 03:12:35 -- target/invalid.sh@25 -- # echo -e '\x50' 00:12:01.051 03:12:35 -- target/invalid.sh@25 -- # string+=P 00:12:01.051 03:12:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.051 03:12:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.051 03:12:35 -- target/invalid.sh@25 -- # printf %x 124 00:12:01.051 03:12:35 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:12:01.051 03:12:35 -- target/invalid.sh@25 -- # string+='|' 00:12:01.051 03:12:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.051 03:12:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.051 03:12:35 -- target/invalid.sh@25 -- # printf %x 120 00:12:01.051 03:12:35 -- target/invalid.sh@25 -- # echo -e '\x78' 00:12:01.051 03:12:35 -- target/invalid.sh@25 -- # string+=x 00:12:01.051 03:12:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.051 03:12:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.051 03:12:35 -- target/invalid.sh@25 -- # printf %x 100 00:12:01.052 03:12:35 -- target/invalid.sh@25 -- # echo -e '\x64' 00:12:01.052 03:12:35 -- target/invalid.sh@25 -- # string+=d 00:12:01.052 03:12:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.052 03:12:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.052 03:12:35 -- target/invalid.sh@25 -- # printf %x 108 00:12:01.052 03:12:35 -- target/invalid.sh@25 -- # echo -e '\x6c' 00:12:01.052 03:12:35 -- target/invalid.sh@25 -- # string+=l 00:12:01.052 03:12:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.052 
03:12:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.052 03:12:35 -- target/invalid.sh@25 -- # printf %x 44 00:12:01.314 03:12:35 -- target/invalid.sh@25 -- # echo -e '\x2c' 00:12:01.314 03:12:35 -- target/invalid.sh@25 -- # string+=, 00:12:01.314 03:12:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.314 03:12:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.314 03:12:35 -- target/invalid.sh@25 -- # printf %x 103 00:12:01.314 03:12:35 -- target/invalid.sh@25 -- # echo -e '\x67' 00:12:01.314 03:12:35 -- target/invalid.sh@25 -- # string+=g 00:12:01.314 03:12:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.314 03:12:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.314 03:12:35 -- target/invalid.sh@25 -- # printf %x 43 00:12:01.314 03:12:35 -- target/invalid.sh@25 -- # echo -e '\x2b' 00:12:01.314 03:12:35 -- target/invalid.sh@25 -- # string+=+ 00:12:01.314 03:12:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.314 03:12:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.314 03:12:35 -- target/invalid.sh@28 -- # [[ K == \- ]] 00:12:01.314 03:12:35 -- target/invalid.sh@31 -- # echo 'K3i~iY59QNY@BP|xdl,g+' 00:12:01.314 03:12:35 -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'K3i~iY59QNY@BP|xdl,g+' nqn.2016-06.io.spdk:cnode7654 00:12:01.314 [2024-04-25 03:12:35.793121] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7654: invalid serial number 'K3i~iY59QNY@BP|xdl,g+' 00:12:01.573 03:12:35 -- target/invalid.sh@54 -- # out='request: 00:12:01.573 { 00:12:01.573 "nqn": "nqn.2016-06.io.spdk:cnode7654", 00:12:01.573 "serial_number": "K3i~iY59QNY@BP|xdl,g+", 00:12:01.573 "method": "nvmf_create_subsystem", 00:12:01.573 "req_id": 1 00:12:01.573 } 00:12:01.573 Got JSON-RPC error response 00:12:01.573 response: 00:12:01.573 { 00:12:01.573 "code": -32602, 00:12:01.573 "message": "Invalid SN K3i~iY59QNY@BP|xdl,g+" 00:12:01.573 
}' 00:12:01.573 03:12:35 -- target/invalid.sh@55 -- # [[ request: 00:12:01.573 { 00:12:01.573 "nqn": "nqn.2016-06.io.spdk:cnode7654", 00:12:01.573 "serial_number": "K3i~iY59QNY@BP|xdl,g+", 00:12:01.573 "method": "nvmf_create_subsystem", 00:12:01.573 "req_id": 1 00:12:01.573 } 00:12:01.573 Got JSON-RPC error response 00:12:01.573 response: 00:12:01.573 { 00:12:01.573 "code": -32602, 00:12:01.573 "message": "Invalid SN K3i~iY59QNY@BP|xdl,g+" 00:12:01.573 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:01.573 03:12:35 -- target/invalid.sh@58 -- # gen_random_s 41 00:12:01.573 03:12:35 -- target/invalid.sh@19 -- # local length=41 ll 00:12:01.573 03:12:35 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:01.573 03:12:35 -- target/invalid.sh@21 -- # local chars 00:12:01.573 03:12:35 -- target/invalid.sh@22 -- # local string 00:12:01.573 03:12:35 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:01.573 03:12:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.573 03:12:35 -- target/invalid.sh@25 -- # printf %x 93 00:12:01.573 03:12:35 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:12:01.573 03:12:35 -- target/invalid.sh@25 -- # string+=']' 00:12:01.573 03:12:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.573 03:12:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.573 03:12:35 -- target/invalid.sh@25 -- # printf %x 57 00:12:01.573 03:12:35 -- target/invalid.sh@25 -- # echo -e '\x39' 00:12:01.573 03:12:35 -- target/invalid.sh@25 -- # string+=9 00:12:01.573 03:12:35 -- 
target/invalid.sh@24 -- # (( ll++ )) 00:12:01.573 03:12:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.573 03:12:35 -- target/invalid.sh@25 -- # printf %x 77 00:12:01.573 03:12:35 -- target/invalid.sh@25 -- # echo -e '\x4d' 00:12:01.573 03:12:35 -- target/invalid.sh@25 -- # string+=M 00:12:01.573 03:12:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.573 03:12:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.573 03:12:35 -- target/invalid.sh@25 -- # printf %x 56 00:12:01.573 03:12:35 -- target/invalid.sh@25 -- # echo -e '\x38' 00:12:01.573 03:12:35 -- target/invalid.sh@25 -- # string+=8 00:12:01.573 03:12:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.573 03:12:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.573 03:12:35 -- target/invalid.sh@25 -- # printf %x 38 00:12:01.573 03:12:35 -- target/invalid.sh@25 -- # echo -e '\x26' 00:12:01.573 03:12:35 -- target/invalid.sh@25 -- # string+='&' 00:12:01.573 03:12:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.573 03:12:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.573 03:12:35 -- target/invalid.sh@25 -- # printf %x 60 00:12:01.573 03:12:35 -- target/invalid.sh@25 -- # echo -e '\x3c' 00:12:01.573 03:12:35 -- target/invalid.sh@25 -- # string+='<' 00:12:01.573 03:12:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.573 03:12:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.573 03:12:35 -- target/invalid.sh@25 -- # printf %x 95 00:12:01.573 03:12:35 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:12:01.573 03:12:35 -- target/invalid.sh@25 -- # string+=_ 00:12:01.573 03:12:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.573 03:12:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.573 03:12:35 -- target/invalid.sh@25 -- # printf %x 116 00:12:01.573 03:12:35 -- target/invalid.sh@25 -- # echo -e '\x74' 00:12:01.573 03:12:35 -- target/invalid.sh@25 -- # string+=t 00:12:01.573 03:12:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.573 03:12:35 -- 
target/invalid.sh@24 -- # (( ll < length )) 00:12:01.573 03:12:35 -- target/invalid.sh@25 -- # printf %x 62 00:12:01.573 03:12:35 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:12:01.573 03:12:35 -- target/invalid.sh@25 -- # string+='>' 00:12:01.573 03:12:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.573 03:12:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.573 03:12:35 -- target/invalid.sh@25 -- # printf %x 114 00:12:01.573 03:12:35 -- target/invalid.sh@25 -- # echo -e '\x72' 00:12:01.573 03:12:35 -- target/invalid.sh@25 -- # string+=r 00:12:01.573 03:12:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.573 03:12:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.573 03:12:35 -- target/invalid.sh@25 -- # printf %x 119 00:12:01.573 03:12:35 -- target/invalid.sh@25 -- # echo -e '\x77' 00:12:01.573 03:12:35 -- target/invalid.sh@25 -- # string+=w 00:12:01.573 03:12:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.573 03:12:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.573 03:12:35 -- target/invalid.sh@25 -- # printf %x 87 00:12:01.573 03:12:35 -- target/invalid.sh@25 -- # echo -e '\x57' 00:12:01.573 03:12:35 -- target/invalid.sh@25 -- # string+=W 00:12:01.573 03:12:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.573 03:12:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.573 03:12:35 -- target/invalid.sh@25 -- # printf %x 79 00:12:01.573 03:12:35 -- target/invalid.sh@25 -- # echo -e '\x4f' 00:12:01.573 03:12:35 -- target/invalid.sh@25 -- # string+=O 00:12:01.573 03:12:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.573 03:12:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.573 03:12:35 -- target/invalid.sh@25 -- # printf %x 41 00:12:01.573 03:12:35 -- target/invalid.sh@25 -- # echo -e '\x29' 00:12:01.573 03:12:35 -- target/invalid.sh@25 -- # string+=')' 00:12:01.573 03:12:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.573 03:12:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.573 
03:12:35 -- target/invalid.sh@25 -- # printf %x 104 00:12:01.573 03:12:35 -- target/invalid.sh@25 -- # echo -e '\x68' 00:12:01.573 03:12:35 -- target/invalid.sh@25 -- # string+=h 00:12:01.573 03:12:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.573 03:12:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.573 03:12:35 -- target/invalid.sh@25 -- # printf %x 105 00:12:01.573 03:12:35 -- target/invalid.sh@25 -- # echo -e '\x69' 00:12:01.573 03:12:35 -- target/invalid.sh@25 -- # string+=i 00:12:01.573 03:12:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.573 03:12:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.573 03:12:35 -- target/invalid.sh@25 -- # printf %x 94 00:12:01.573 03:12:35 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:12:01.573 03:12:35 -- target/invalid.sh@25 -- # string+='^' 00:12:01.573 03:12:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.573 03:12:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.573 03:12:35 -- target/invalid.sh@25 -- # printf %x 93 00:12:01.573 03:12:35 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:12:01.573 03:12:35 -- target/invalid.sh@25 -- # string+=']' 00:12:01.573 03:12:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.573 03:12:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.573 03:12:35 -- target/invalid.sh@25 -- # printf %x 54 00:12:01.573 03:12:35 -- target/invalid.sh@25 -- # echo -e '\x36' 00:12:01.573 03:12:35 -- target/invalid.sh@25 -- # string+=6 00:12:01.573 03:12:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.573 03:12:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.573 03:12:35 -- target/invalid.sh@25 -- # printf %x 88 00:12:01.573 03:12:35 -- target/invalid.sh@25 -- # echo -e '\x58' 00:12:01.573 03:12:35 -- target/invalid.sh@25 -- # string+=X 00:12:01.573 03:12:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.573 03:12:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.573 03:12:35 -- target/invalid.sh@25 -- # printf %x 35 
00:12:01.573 03:12:35 -- target/invalid.sh@25 -- # echo -e '\x23' 00:12:01.573 03:12:35 -- target/invalid.sh@25 -- # string+='#' 00:12:01.573 03:12:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.573 03:12:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.573 03:12:35 -- target/invalid.sh@25 -- # printf %x 90 00:12:01.573 03:12:35 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:12:01.573 03:12:35 -- target/invalid.sh@25 -- # string+=Z 00:12:01.573 03:12:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.573 03:12:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.573 03:12:35 -- target/invalid.sh@25 -- # printf %x 81 00:12:01.573 03:12:35 -- target/invalid.sh@25 -- # echo -e '\x51' 00:12:01.573 03:12:35 -- target/invalid.sh@25 -- # string+=Q 00:12:01.573 03:12:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.573 03:12:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.573 03:12:35 -- target/invalid.sh@25 -- # printf %x 56 00:12:01.573 03:12:35 -- target/invalid.sh@25 -- # echo -e '\x38' 00:12:01.573 03:12:35 -- target/invalid.sh@25 -- # string+=8 00:12:01.573 03:12:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.573 03:12:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.573 03:12:35 -- target/invalid.sh@25 -- # printf %x 46 00:12:01.573 03:12:35 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:12:01.573 03:12:35 -- target/invalid.sh@25 -- # string+=. 
00:12:01.573 03:12:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.573 03:12:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.573 03:12:35 -- target/invalid.sh@25 -- # printf %x 50 00:12:01.573 03:12:35 -- target/invalid.sh@25 -- # echo -e '\x32' 00:12:01.573 03:12:35 -- target/invalid.sh@25 -- # string+=2 00:12:01.573 03:12:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.573 03:12:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.573 03:12:35 -- target/invalid.sh@25 -- # printf %x 32 00:12:01.573 03:12:35 -- target/invalid.sh@25 -- # echo -e '\x20' 00:12:01.573 03:12:35 -- target/invalid.sh@25 -- # string+=' ' 00:12:01.573 03:12:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.573 03:12:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.573 03:12:35 -- target/invalid.sh@25 -- # printf %x 82 00:12:01.574 03:12:35 -- target/invalid.sh@25 -- # echo -e '\x52' 00:12:01.574 03:12:35 -- target/invalid.sh@25 -- # string+=R 00:12:01.574 03:12:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.574 03:12:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.574 03:12:35 -- target/invalid.sh@25 -- # printf %x 108 00:12:01.574 03:12:35 -- target/invalid.sh@25 -- # echo -e '\x6c' 00:12:01.574 03:12:35 -- target/invalid.sh@25 -- # string+=l 00:12:01.574 03:12:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.574 03:12:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.574 03:12:35 -- target/invalid.sh@25 -- # printf %x 52 00:12:01.574 03:12:35 -- target/invalid.sh@25 -- # echo -e '\x34' 00:12:01.574 03:12:35 -- target/invalid.sh@25 -- # string+=4 00:12:01.574 03:12:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.574 03:12:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.574 03:12:35 -- target/invalid.sh@25 -- # printf %x 104 00:12:01.574 03:12:35 -- target/invalid.sh@25 -- # echo -e '\x68' 00:12:01.574 03:12:35 -- target/invalid.sh@25 -- # string+=h 00:12:01.574 03:12:35 -- target/invalid.sh@24 -- # (( ll++ )) 
00:12:01.574 03:12:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.574 03:12:35 -- target/invalid.sh@25 -- # printf %x 121 00:12:01.574 03:12:35 -- target/invalid.sh@25 -- # echo -e '\x79' 00:12:01.574 03:12:35 -- target/invalid.sh@25 -- # string+=y 00:12:01.574 03:12:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.574 03:12:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.574 03:12:35 -- target/invalid.sh@25 -- # printf %x 73 00:12:01.574 03:12:35 -- target/invalid.sh@25 -- # echo -e '\x49' 00:12:01.574 03:12:35 -- target/invalid.sh@25 -- # string+=I 00:12:01.574 03:12:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.574 03:12:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.574 03:12:35 -- target/invalid.sh@25 -- # printf %x 109 00:12:01.574 03:12:35 -- target/invalid.sh@25 -- # echo -e '\x6d' 00:12:01.574 03:12:35 -- target/invalid.sh@25 -- # string+=m 00:12:01.574 03:12:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.574 03:12:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.574 03:12:35 -- target/invalid.sh@25 -- # printf %x 39 00:12:01.574 03:12:35 -- target/invalid.sh@25 -- # echo -e '\x27' 00:12:01.574 03:12:35 -- target/invalid.sh@25 -- # string+=\' 00:12:01.574 03:12:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.574 03:12:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.574 03:12:35 -- target/invalid.sh@25 -- # printf %x 123 00:12:01.574 03:12:35 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:12:01.574 03:12:35 -- target/invalid.sh@25 -- # string+='{' 00:12:01.574 03:12:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.574 03:12:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.574 03:12:35 -- target/invalid.sh@25 -- # printf %x 107 00:12:01.574 03:12:35 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:12:01.574 03:12:35 -- target/invalid.sh@25 -- # string+=k 00:12:01.574 03:12:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.574 03:12:35 -- target/invalid.sh@24 -- # (( ll < 
length )) 00:12:01.574 03:12:35 -- target/invalid.sh@25 -- # printf %x 77 00:12:01.574 03:12:35 -- target/invalid.sh@25 -- # echo -e '\x4d' 00:12:01.574 03:12:35 -- target/invalid.sh@25 -- # string+=M 00:12:01.574 03:12:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.574 03:12:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.574 03:12:35 -- target/invalid.sh@25 -- # printf %x 115 00:12:01.574 03:12:35 -- target/invalid.sh@25 -- # echo -e '\x73' 00:12:01.574 03:12:35 -- target/invalid.sh@25 -- # string+=s 00:12:01.574 03:12:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.574 03:12:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.574 03:12:35 -- target/invalid.sh@25 -- # printf %x 116 00:12:01.574 03:12:35 -- target/invalid.sh@25 -- # echo -e '\x74' 00:12:01.574 03:12:35 -- target/invalid.sh@25 -- # string+=t 00:12:01.574 03:12:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.574 03:12:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.574 03:12:35 -- target/invalid.sh@25 -- # printf %x 82 00:12:01.574 03:12:35 -- target/invalid.sh@25 -- # echo -e '\x52' 00:12:01.574 03:12:35 -- target/invalid.sh@25 -- # string+=R 00:12:01.574 03:12:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.574 03:12:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.574 03:12:35 -- target/invalid.sh@28 -- # [[ ] == \- ]] 00:12:01.574 03:12:35 -- target/invalid.sh@31 -- # echo ']9M8&<_t>rwWO)hi^]6X#ZQ8.2 Rl4hyIm'\''{kMstR' 00:12:01.574 03:12:35 -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d ']9M8&<_t>rwWO)hi^]6X#ZQ8.2 Rl4hyIm'\''{kMstR' nqn.2016-06.io.spdk:cnode7635 00:12:01.832 [2024-04-25 03:12:36.154315] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7635: invalid model number ']9M8&<_t>rwWO)hi^]6X#ZQ8.2 Rl4hyIm'{kMstR' 00:12:01.832 03:12:36 -- target/invalid.sh@58 -- # out='request: 00:12:01.832 { 00:12:01.832 "nqn": 
"nqn.2016-06.io.spdk:cnode7635", 00:12:01.832 "model_number": "]9M8&<_t>rwWO)hi^]6X#ZQ8.2 Rl4hyIm'\''{kMstR", 00:12:01.832 "method": "nvmf_create_subsystem", 00:12:01.832 "req_id": 1 00:12:01.832 } 00:12:01.832 Got JSON-RPC error response 00:12:01.832 response: 00:12:01.832 { 00:12:01.832 "code": -32602, 00:12:01.832 "message": "Invalid MN ]9M8&<_t>rwWO)hi^]6X#ZQ8.2 Rl4hyIm'\''{kMstR" 00:12:01.832 }' 00:12:01.832 03:12:36 -- target/invalid.sh@59 -- # [[ request: 00:12:01.832 { 00:12:01.832 "nqn": "nqn.2016-06.io.spdk:cnode7635", 00:12:01.832 "model_number": "]9M8&<_t>rwWO)hi^]6X#ZQ8.2 Rl4hyIm'{kMstR", 00:12:01.832 "method": "nvmf_create_subsystem", 00:12:01.832 "req_id": 1 00:12:01.832 } 00:12:01.833 Got JSON-RPC error response 00:12:01.833 response: 00:12:01.833 { 00:12:01.833 "code": -32602, 00:12:01.833 "message": "Invalid MN ]9M8&<_t>rwWO)hi^]6X#ZQ8.2 Rl4hyIm'{kMstR" 00:12:01.833 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:01.833 03:12:36 -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:12:02.090 [2024-04-25 03:12:36.391165] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:02.090 03:12:36 -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:12:02.349 03:12:36 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:12:02.349 03:12:36 -- target/invalid.sh@67 -- # echo '' 00:12:02.349 03:12:36 -- target/invalid.sh@67 -- # head -n 1 00:12:02.349 03:12:36 -- target/invalid.sh@67 -- # IP= 00:12:02.349 03:12:36 -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:12:02.607 [2024-04-25 03:12:36.888788] nvmf_rpc.c: 792:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:12:02.607 03:12:36 -- target/invalid.sh@69 -- # out='request: 
00:12:02.607 { 00:12:02.607 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:02.607 "listen_address": { 00:12:02.607 "trtype": "tcp", 00:12:02.607 "traddr": "", 00:12:02.607 "trsvcid": "4421" 00:12:02.607 }, 00:12:02.607 "method": "nvmf_subsystem_remove_listener", 00:12:02.607 "req_id": 1 00:12:02.607 } 00:12:02.607 Got JSON-RPC error response 00:12:02.607 response: 00:12:02.607 { 00:12:02.607 "code": -32602, 00:12:02.607 "message": "Invalid parameters" 00:12:02.607 }' 00:12:02.607 03:12:36 -- target/invalid.sh@70 -- # [[ request: 00:12:02.607 { 00:12:02.607 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:02.607 "listen_address": { 00:12:02.607 "trtype": "tcp", 00:12:02.607 "traddr": "", 00:12:02.607 "trsvcid": "4421" 00:12:02.607 }, 00:12:02.607 "method": "nvmf_subsystem_remove_listener", 00:12:02.607 "req_id": 1 00:12:02.607 } 00:12:02.607 Got JSON-RPC error response 00:12:02.607 response: 00:12:02.607 { 00:12:02.607 "code": -32602, 00:12:02.607 "message": "Invalid parameters" 00:12:02.607 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:12:02.607 03:12:36 -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode22523 -i 0 00:12:02.865 [2024-04-25 03:12:37.145700] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22523: invalid cntlid range [0-65519] 00:12:02.865 03:12:37 -- target/invalid.sh@73 -- # out='request: 00:12:02.865 { 00:12:02.865 "nqn": "nqn.2016-06.io.spdk:cnode22523", 00:12:02.865 "min_cntlid": 0, 00:12:02.865 "method": "nvmf_create_subsystem", 00:12:02.865 "req_id": 1 00:12:02.865 } 00:12:02.865 Got JSON-RPC error response 00:12:02.865 response: 00:12:02.865 { 00:12:02.865 "code": -32602, 00:12:02.865 "message": "Invalid cntlid range [0-65519]" 00:12:02.865 }' 00:12:02.865 03:12:37 -- target/invalid.sh@74 -- # [[ request: 00:12:02.865 { 00:12:02.865 "nqn": "nqn.2016-06.io.spdk:cnode22523", 00:12:02.865 "min_cntlid": 0, 
00:12:02.865 "method": "nvmf_create_subsystem", 00:12:02.865 "req_id": 1 00:12:02.865 } 00:12:02.865 Got JSON-RPC error response 00:12:02.865 response: 00:12:02.866 { 00:12:02.866 "code": -32602, 00:12:02.866 "message": "Invalid cntlid range [0-65519]" 00:12:02.866 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:02.866 03:12:37 -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5822 -i 65520 00:12:03.123 [2024-04-25 03:12:37.390437] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5822: invalid cntlid range [65520-65519] 00:12:03.123 03:12:37 -- target/invalid.sh@75 -- # out='request: 00:12:03.123 { 00:12:03.123 "nqn": "nqn.2016-06.io.spdk:cnode5822", 00:12:03.123 "min_cntlid": 65520, 00:12:03.123 "method": "nvmf_create_subsystem", 00:12:03.123 "req_id": 1 00:12:03.123 } 00:12:03.123 Got JSON-RPC error response 00:12:03.123 response: 00:12:03.123 { 00:12:03.123 "code": -32602, 00:12:03.123 "message": "Invalid cntlid range [65520-65519]" 00:12:03.123 }' 00:12:03.123 03:12:37 -- target/invalid.sh@76 -- # [[ request: 00:12:03.123 { 00:12:03.123 "nqn": "nqn.2016-06.io.spdk:cnode5822", 00:12:03.123 "min_cntlid": 65520, 00:12:03.123 "method": "nvmf_create_subsystem", 00:12:03.123 "req_id": 1 00:12:03.123 } 00:12:03.123 Got JSON-RPC error response 00:12:03.123 response: 00:12:03.123 { 00:12:03.123 "code": -32602, 00:12:03.123 "message": "Invalid cntlid range [65520-65519]" 00:12:03.123 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:03.123 03:12:37 -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13012 -I 0 00:12:03.382 [2024-04-25 03:12:37.635240] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13012: invalid cntlid range [1-0] 00:12:03.382 03:12:37 -- target/invalid.sh@77 -- # out='request: 
00:12:03.382 { 00:12:03.382 "nqn": "nqn.2016-06.io.spdk:cnode13012", 00:12:03.382 "max_cntlid": 0, 00:12:03.382 "method": "nvmf_create_subsystem", 00:12:03.382 "req_id": 1 00:12:03.382 } 00:12:03.382 Got JSON-RPC error response 00:12:03.382 response: 00:12:03.382 { 00:12:03.382 "code": -32602, 00:12:03.382 "message": "Invalid cntlid range [1-0]" 00:12:03.382 }' 00:12:03.382 03:12:37 -- target/invalid.sh@78 -- # [[ request: 00:12:03.382 { 00:12:03.382 "nqn": "nqn.2016-06.io.spdk:cnode13012", 00:12:03.382 "max_cntlid": 0, 00:12:03.382 "method": "nvmf_create_subsystem", 00:12:03.382 "req_id": 1 00:12:03.382 } 00:12:03.382 Got JSON-RPC error response 00:12:03.382 response: 00:12:03.382 { 00:12:03.382 "code": -32602, 00:12:03.382 "message": "Invalid cntlid range [1-0]" 00:12:03.382 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:03.382 03:12:37 -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode28239 -I 65520 00:12:03.382 [2024-04-25 03:12:37.872038] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28239: invalid cntlid range [1-65520] 00:12:03.641 03:12:37 -- target/invalid.sh@79 -- # out='request: 00:12:03.641 { 00:12:03.641 "nqn": "nqn.2016-06.io.spdk:cnode28239", 00:12:03.641 "max_cntlid": 65520, 00:12:03.641 "method": "nvmf_create_subsystem", 00:12:03.641 "req_id": 1 00:12:03.641 } 00:12:03.641 Got JSON-RPC error response 00:12:03.641 response: 00:12:03.641 { 00:12:03.641 "code": -32602, 00:12:03.641 "message": "Invalid cntlid range [1-65520]" 00:12:03.641 }' 00:12:03.641 03:12:37 -- target/invalid.sh@80 -- # [[ request: 00:12:03.641 { 00:12:03.641 "nqn": "nqn.2016-06.io.spdk:cnode28239", 00:12:03.641 "max_cntlid": 65520, 00:12:03.641 "method": "nvmf_create_subsystem", 00:12:03.641 "req_id": 1 00:12:03.641 } 00:12:03.641 Got JSON-RPC error response 00:12:03.641 response: 00:12:03.641 { 00:12:03.641 "code": -32602, 00:12:03.641 
"message": "Invalid cntlid range [1-65520]" 00:12:03.641 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:03.641 03:12:37 -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2135 -i 6 -I 5 00:12:03.641 [2024-04-25 03:12:38.132899] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2135: invalid cntlid range [6-5] 00:12:03.900 03:12:38 -- target/invalid.sh@83 -- # out='request: 00:12:03.900 { 00:12:03.900 "nqn": "nqn.2016-06.io.spdk:cnode2135", 00:12:03.900 "min_cntlid": 6, 00:12:03.900 "max_cntlid": 5, 00:12:03.900 "method": "nvmf_create_subsystem", 00:12:03.900 "req_id": 1 00:12:03.900 } 00:12:03.900 Got JSON-RPC error response 00:12:03.900 response: 00:12:03.900 { 00:12:03.900 "code": -32602, 00:12:03.900 "message": "Invalid cntlid range [6-5]" 00:12:03.900 }' 00:12:03.900 03:12:38 -- target/invalid.sh@84 -- # [[ request: 00:12:03.900 { 00:12:03.900 "nqn": "nqn.2016-06.io.spdk:cnode2135", 00:12:03.900 "min_cntlid": 6, 00:12:03.900 "max_cntlid": 5, 00:12:03.900 "method": "nvmf_create_subsystem", 00:12:03.900 "req_id": 1 00:12:03.900 } 00:12:03.900 Got JSON-RPC error response 00:12:03.900 response: 00:12:03.900 { 00:12:03.900 "code": -32602, 00:12:03.900 "message": "Invalid cntlid range [6-5]" 00:12:03.900 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:03.900 03:12:38 -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:12:03.900 03:12:38 -- target/invalid.sh@87 -- # out='request: 00:12:03.900 { 00:12:03.900 "name": "foobar", 00:12:03.900 "method": "nvmf_delete_target", 00:12:03.900 "req_id": 1 00:12:03.900 } 00:12:03.900 Got JSON-RPC error response 00:12:03.900 response: 00:12:03.900 { 00:12:03.900 "code": -32602, 00:12:03.900 "message": "The specified target doesn'\''t exist, cannot delete it." 
00:12:03.900 }' 00:12:03.900 03:12:38 -- target/invalid.sh@88 -- # [[ request: 00:12:03.900 { 00:12:03.900 "name": "foobar", 00:12:03.900 "method": "nvmf_delete_target", 00:12:03.900 "req_id": 1 00:12:03.900 } 00:12:03.900 Got JSON-RPC error response 00:12:03.900 response: 00:12:03.900 { 00:12:03.900 "code": -32602, 00:12:03.900 "message": "The specified target doesn't exist, cannot delete it." 00:12:03.900 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:12:03.900 03:12:38 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:12:03.900 03:12:38 -- target/invalid.sh@91 -- # nvmftestfini 00:12:03.900 03:12:38 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:03.900 03:12:38 -- nvmf/common.sh@117 -- # sync 00:12:03.900 03:12:38 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:03.900 03:12:38 -- nvmf/common.sh@120 -- # set +e 00:12:03.900 03:12:38 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:03.900 03:12:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:03.900 rmmod nvme_tcp 00:12:03.900 rmmod nvme_fabrics 00:12:03.900 rmmod nvme_keyring 00:12:03.900 03:12:38 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:03.900 03:12:38 -- nvmf/common.sh@124 -- # set -e 00:12:03.900 03:12:38 -- nvmf/common.sh@125 -- # return 0 00:12:03.900 03:12:38 -- nvmf/common.sh@478 -- # '[' -n 1442750 ']' 00:12:03.900 03:12:38 -- nvmf/common.sh@479 -- # killprocess 1442750 00:12:03.900 03:12:38 -- common/autotest_common.sh@936 -- # '[' -z 1442750 ']' 00:12:03.900 03:12:38 -- common/autotest_common.sh@940 -- # kill -0 1442750 00:12:03.900 03:12:38 -- common/autotest_common.sh@941 -- # uname 00:12:03.900 03:12:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:03.900 03:12:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1442750 00:12:03.900 03:12:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:03.900 03:12:38 -- 
common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:03.900 03:12:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1442750' 00:12:03.900 killing process with pid 1442750 00:12:03.900 03:12:38 -- common/autotest_common.sh@955 -- # kill 1442750 00:12:03.900 03:12:38 -- common/autotest_common.sh@960 -- # wait 1442750 00:12:04.159 03:12:38 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:04.159 03:12:38 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:04.159 03:12:38 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:04.159 03:12:38 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:04.159 03:12:38 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:04.159 03:12:38 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:04.159 03:12:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:04.159 03:12:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:06.703 03:12:40 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:06.703 00:12:06.703 real 0m9.106s 00:12:06.703 user 0m21.955s 00:12:06.703 sys 0m2.454s 00:12:06.703 03:12:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:06.703 03:12:40 -- common/autotest_common.sh@10 -- # set +x 00:12:06.704 ************************************ 00:12:06.704 END TEST nvmf_invalid 00:12:06.704 ************************************ 00:12:06.704 03:12:40 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:06.704 03:12:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:06.704 03:12:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:06.704 03:12:40 -- common/autotest_common.sh@10 -- # set +x 00:12:06.704 ************************************ 00:12:06.704 START TEST nvmf_abort 00:12:06.704 ************************************ 00:12:06.704 03:12:40 -- common/autotest_common.sh@1111 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:06.704 * Looking for test storage... 00:12:06.704 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:06.704 03:12:40 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:06.704 03:12:40 -- nvmf/common.sh@7 -- # uname -s 00:12:06.704 03:12:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:06.704 03:12:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:06.704 03:12:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:06.704 03:12:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:06.704 03:12:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:06.704 03:12:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:06.704 03:12:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:06.704 03:12:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:06.704 03:12:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:06.704 03:12:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:06.704 03:12:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:06.704 03:12:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:06.704 03:12:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:06.704 03:12:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:06.704 03:12:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:06.704 03:12:40 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:06.704 03:12:40 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:06.704 03:12:40 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:06.704 03:12:40 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
00:12:06.704 03:12:40 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:06.704 03:12:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.704 03:12:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.704 03:12:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.704 03:12:40 -- paths/export.sh@5 -- # export PATH 00:12:06.704 03:12:40 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.704 03:12:40 -- nvmf/common.sh@47 -- # : 0 00:12:06.704 03:12:40 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:06.704 03:12:40 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:06.704 03:12:40 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:06.704 03:12:40 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:06.704 03:12:40 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:06.704 03:12:40 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:06.704 03:12:40 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:06.704 03:12:40 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:06.704 03:12:40 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:06.704 03:12:40 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:12:06.704 03:12:40 -- target/abort.sh@14 -- # nvmftestinit 00:12:06.704 03:12:40 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:12:06.704 03:12:40 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:06.704 03:12:40 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:06.704 03:12:40 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:06.704 03:12:40 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:06.704 03:12:40 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:06.704 03:12:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:06.704 03:12:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:06.704 03:12:40 -- 
nvmf/common.sh@403 -- # [[ phy != virt ]] 00:12:06.704 03:12:40 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:12:06.704 03:12:40 -- nvmf/common.sh@285 -- # xtrace_disable 00:12:06.704 03:12:40 -- common/autotest_common.sh@10 -- # set +x 00:12:08.612 03:12:42 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:08.612 03:12:42 -- nvmf/common.sh@291 -- # pci_devs=() 00:12:08.612 03:12:42 -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:08.612 03:12:42 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:08.612 03:12:42 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:08.612 03:12:42 -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:08.612 03:12:42 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:08.612 03:12:42 -- nvmf/common.sh@295 -- # net_devs=() 00:12:08.612 03:12:42 -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:08.612 03:12:42 -- nvmf/common.sh@296 -- # e810=() 00:12:08.612 03:12:42 -- nvmf/common.sh@296 -- # local -ga e810 00:12:08.612 03:12:42 -- nvmf/common.sh@297 -- # x722=() 00:12:08.612 03:12:42 -- nvmf/common.sh@297 -- # local -ga x722 00:12:08.612 03:12:42 -- nvmf/common.sh@298 -- # mlx=() 00:12:08.612 03:12:42 -- nvmf/common.sh@298 -- # local -ga mlx 00:12:08.612 03:12:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:08.612 03:12:42 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:08.612 03:12:42 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:08.612 03:12:42 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:08.612 03:12:42 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:08.612 03:12:42 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:08.612 03:12:42 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:08.612 03:12:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:08.612 03:12:42 -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:08.612 03:12:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:08.612 03:12:42 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:08.612 03:12:42 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:08.612 03:12:42 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:08.613 03:12:42 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:08.613 03:12:42 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:08.613 03:12:42 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:08.613 03:12:42 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:08.613 03:12:42 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:08.613 03:12:42 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:08.613 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:08.613 03:12:42 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:08.613 03:12:42 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:08.613 03:12:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:08.613 03:12:42 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:08.613 03:12:42 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:08.613 03:12:42 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:08.613 03:12:42 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:08.613 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:08.613 03:12:42 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:08.613 03:12:42 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:08.613 03:12:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:08.613 03:12:42 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:08.613 03:12:42 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:08.613 03:12:42 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:08.613 03:12:42 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:08.613 03:12:42 -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:08.613 03:12:42 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:08.613 03:12:42 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:08.613 03:12:42 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:08.613 03:12:42 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:08.613 03:12:42 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:08.613 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:08.613 03:12:42 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:08.613 03:12:42 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:08.613 03:12:42 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:08.613 03:12:42 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:08.613 03:12:42 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:08.613 03:12:42 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:08.613 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:08.613 03:12:42 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:08.613 03:12:42 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:12:08.613 03:12:42 -- nvmf/common.sh@403 -- # is_hw=yes 00:12:08.613 03:12:42 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:12:08.613 03:12:42 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:12:08.613 03:12:42 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:12:08.613 03:12:42 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:08.613 03:12:42 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:08.613 03:12:42 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:08.613 03:12:42 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:08.613 03:12:42 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:08.613 03:12:42 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:08.613 03:12:42 -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:08.613 03:12:42 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:08.613 03:12:42 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:08.613 03:12:42 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:08.613 03:12:42 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:08.613 03:12:42 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:08.613 03:12:42 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:08.613 03:12:42 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:08.613 03:12:42 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:08.613 03:12:42 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:08.613 03:12:42 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:08.613 03:12:42 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:08.613 03:12:42 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:08.613 03:12:42 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:08.613 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:08.613 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:12:08.613 00:12:08.613 --- 10.0.0.2 ping statistics --- 00:12:08.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:08.613 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:12:08.613 03:12:42 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:08.613 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:08.613 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:12:08.613 00:12:08.613 --- 10.0.0.1 ping statistics --- 00:12:08.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:08.613 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:12:08.613 03:12:42 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:08.613 03:12:42 -- nvmf/common.sh@411 -- # return 0 00:12:08.613 03:12:42 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:08.613 03:12:42 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:08.613 03:12:42 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:12:08.613 03:12:42 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:12:08.613 03:12:42 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:08.613 03:12:42 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:12:08.613 03:12:42 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:12:08.613 03:12:42 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:12:08.613 03:12:42 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:08.613 03:12:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:08.613 03:12:42 -- common/autotest_common.sh@10 -- # set +x 00:12:08.613 03:12:42 -- nvmf/common.sh@470 -- # nvmfpid=1445400 00:12:08.613 03:12:42 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:08.613 03:12:42 -- nvmf/common.sh@471 -- # waitforlisten 1445400 00:12:08.613 03:12:42 -- common/autotest_common.sh@817 -- # '[' -z 1445400 ']' 00:12:08.613 03:12:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:08.613 03:12:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:08.613 03:12:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:08.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:08.613 03:12:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:08.613 03:12:42 -- common/autotest_common.sh@10 -- # set +x 00:12:08.613 [2024-04-25 03:12:42.897107] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:12:08.613 [2024-04-25 03:12:42.897195] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:08.613 EAL: No free 2048 kB hugepages reported on node 1 00:12:08.613 [2024-04-25 03:12:42.961489] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:08.613 [2024-04-25 03:12:43.074812] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:08.613 [2024-04-25 03:12:43.074883] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:08.613 [2024-04-25 03:12:43.074908] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:08.613 [2024-04-25 03:12:43.074921] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:08.614 [2024-04-25 03:12:43.074933] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
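The `waitforlisten` step above blocks the test script until `nvmf_tgt` is up and reachable on the UNIX domain socket `/var/tmp/spdk.sock`. SPDK's real helper retries an actual RPC against the socket; the reduced sketch below (hypothetical function name, polling only for the socket path rather than issuing RPCs) shows the same retry-with-backoff skeleton:

```shell
# Reduced sketch of a waitforlisten-style retry loop: poll until the RPC
# socket path appears, up to max_retries attempts, then give up.
# NOTE: SPDK's actual waitforlisten issues an RPC and also checks the target
# PID; testing that the path exists is a simplification for illustration.
waitforsocket() {
    path=$1
    max_retries=${2:-100}
    i=0
    while [ "$i" -lt "$max_retries" ]; do
        if [ -e "$path" ]; then
            return 0            # target is (presumably) listening
        fi
        i=$((i + 1))
        sleep 0.1               # brief pause between attempts
    done
    echo "timed out waiting for $path" >&2
    return 1
}
```

In the log above the helper is invoked as `waitforlisten 1445400`, keyed on the target PID as well; the sketch keeps only the polling skeleton.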
00:12:08.614 [2024-04-25 03:12:43.075065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:08.614 [2024-04-25 03:12:43.075149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:08.614 [2024-04-25 03:12:43.075152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:09.551 03:12:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:09.552 03:12:43 -- common/autotest_common.sh@850 -- # return 0 00:12:09.552 03:12:43 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:09.552 03:12:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:09.552 03:12:43 -- common/autotest_common.sh@10 -- # set +x 00:12:09.552 03:12:43 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:09.552 03:12:43 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:12:09.552 03:12:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:09.552 03:12:43 -- common/autotest_common.sh@10 -- # set +x 00:12:09.552 [2024-04-25 03:12:43.881974] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:09.552 03:12:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:09.552 03:12:43 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:12:09.552 03:12:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:09.552 03:12:43 -- common/autotest_common.sh@10 -- # set +x 00:12:09.552 Malloc0 00:12:09.552 03:12:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:09.552 03:12:43 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:09.552 03:12:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:09.552 03:12:43 -- common/autotest_common.sh@10 -- # set +x 00:12:09.552 Delay0 00:12:09.552 03:12:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:09.552 03:12:43 -- target/abort.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:09.552 03:12:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:09.552 03:12:43 -- common/autotest_common.sh@10 -- # set +x 00:12:09.552 03:12:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:09.552 03:12:43 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:12:09.552 03:12:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:09.552 03:12:43 -- common/autotest_common.sh@10 -- # set +x 00:12:09.552 03:12:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:09.552 03:12:43 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:09.552 03:12:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:09.552 03:12:43 -- common/autotest_common.sh@10 -- # set +x 00:12:09.552 [2024-04-25 03:12:43.948835] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:09.552 03:12:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:09.552 03:12:43 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:09.552 03:12:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:09.552 03:12:43 -- common/autotest_common.sh@10 -- # set +x 00:12:09.552 03:12:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:09.552 03:12:43 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:12:09.552 EAL: No free 2048 kB hugepages reported on node 1 00:12:09.811 [2024-04-25 03:12:44.056149] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:12:11.717 Initializing NVMe Controllers 00:12:11.717 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: 
nqn.2016-06.io.spdk:cnode0 00:12:11.717 controller IO queue size 128 less than required 00:12:11.717 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:12:11.717 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:12:11.717 Initialization complete. Launching workers. 00:12:11.717 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 32242 00:12:11.717 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 32307, failed to submit 62 00:12:11.717 success 32246, unsuccess 61, failed 0 00:12:11.717 03:12:46 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:11.717 03:12:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:11.717 03:12:46 -- common/autotest_common.sh@10 -- # set +x 00:12:11.717 03:12:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:11.717 03:12:46 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:12:11.717 03:12:46 -- target/abort.sh@38 -- # nvmftestfini 00:12:11.717 03:12:46 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:11.717 03:12:46 -- nvmf/common.sh@117 -- # sync 00:12:11.717 03:12:46 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:11.717 03:12:46 -- nvmf/common.sh@120 -- # set +e 00:12:11.717 03:12:46 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:11.717 03:12:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:11.717 rmmod nvme_tcp 00:12:11.717 rmmod nvme_fabrics 00:12:11.717 rmmod nvme_keyring 00:12:11.717 03:12:46 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:11.717 03:12:46 -- nvmf/common.sh@124 -- # set -e 00:12:11.717 03:12:46 -- nvmf/common.sh@125 -- # return 0 00:12:11.717 03:12:46 -- nvmf/common.sh@478 -- # '[' -n 1445400 ']' 00:12:11.717 03:12:46 -- nvmf/common.sh@479 -- # killprocess 1445400 00:12:11.717 03:12:46 -- common/autotest_common.sh@936 -- # '[' -z 1445400 ']' 00:12:11.717 03:12:46 
-- common/autotest_common.sh@940 -- # kill -0 1445400 00:12:11.717 03:12:46 -- common/autotest_common.sh@941 -- # uname 00:12:11.717 03:12:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:11.717 03:12:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1445400 00:12:11.977 03:12:46 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:11.977 03:12:46 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:11.977 03:12:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1445400' 00:12:11.977 killing process with pid 1445400 00:12:11.977 03:12:46 -- common/autotest_common.sh@955 -- # kill 1445400 00:12:11.977 03:12:46 -- common/autotest_common.sh@960 -- # wait 1445400 00:12:12.238 03:12:46 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:12.238 03:12:46 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:12.238 03:12:46 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:12.238 03:12:46 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:12.238 03:12:46 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:12.238 03:12:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:12.238 03:12:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:12.238 03:12:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:14.166 03:12:48 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:14.166 00:12:14.166 real 0m7.819s 00:12:14.166 user 0m12.671s 00:12:14.166 sys 0m2.484s 00:12:14.166 03:12:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:14.166 03:12:48 -- common/autotest_common.sh@10 -- # set +x 00:12:14.166 ************************************ 00:12:14.166 END TEST nvmf_abort 00:12:14.166 ************************************ 00:12:14.166 03:12:48 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 
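The abort example's closing counters above happen to satisfy two identities (inferred from the numbers in this particular run, not from SPDK documentation): every submitted abort is accounted for as success, unsuccess, or failed, and completed-plus-failed I/O matches submitted-plus-unsubmitted aborts. A quick shell check of that arithmetic:

```shell
# Counters copied from the abort summary in the log above.
io_completed=127
io_failed=32242
aborts_submitted=32307
aborts_not_submitted=62
success=32246
unsuccess=61
failed=0

# Every submitted abort ended as success, unsuccess, or failed ...
[ $((success + unsuccess + failed)) -eq "$aborts_submitted" ] \
    && echo "abort counts consistent"
# ... and, in this run, completed + failed I/O equals submitted +
# failed-to-submit aborts (32369 on both sides).
[ $((io_completed + io_failed)) -eq $((aborts_submitted + aborts_not_submitted)) ] \
    && echo "I/O counts consistent"
```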
00:12:14.166 03:12:48 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:14.166 03:12:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:14.166 03:12:48 -- common/autotest_common.sh@10 -- # set +x 00:12:14.425 ************************************ 00:12:14.425 START TEST nvmf_ns_hotplug_stress 00:12:14.425 ************************************ 00:12:14.425 03:12:48 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:12:14.425 * Looking for test storage... 00:12:14.425 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:14.425 03:12:48 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:14.425 03:12:48 -- nvmf/common.sh@7 -- # uname -s 00:12:14.425 03:12:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:14.425 03:12:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:14.425 03:12:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:14.425 03:12:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:14.425 03:12:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:14.425 03:12:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:14.425 03:12:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:14.425 03:12:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:14.425 03:12:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:14.425 03:12:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:14.425 03:12:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:14.425 03:12:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:14.425 03:12:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:14.425 03:12:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 
00:12:14.425 03:12:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:14.425 03:12:48 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:14.425 03:12:48 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:14.425 03:12:48 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:14.425 03:12:48 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:14.425 03:12:48 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:14.425 03:12:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.425 03:12:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.425 03:12:48 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.425 03:12:48 -- paths/export.sh@5 -- # export PATH 00:12:14.425 03:12:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.425 03:12:48 -- nvmf/common.sh@47 -- # : 0 00:12:14.425 03:12:48 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:14.425 03:12:48 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:14.425 03:12:48 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:14.425 03:12:48 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:14.425 03:12:48 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:14.425 03:12:48 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:14.425 03:12:48 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:14.425 03:12:48 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:14.425 03:12:48 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:14.425 03:12:48 -- target/ns_hotplug_stress.sh@13 -- # 
nvmftestinit 00:12:14.425 03:12:48 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:12:14.425 03:12:48 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:14.425 03:12:48 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:14.425 03:12:48 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:14.425 03:12:48 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:14.425 03:12:48 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:14.425 03:12:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:14.425 03:12:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:14.425 03:12:48 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:12:14.425 03:12:48 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:12:14.425 03:12:48 -- nvmf/common.sh@285 -- # xtrace_disable 00:12:14.425 03:12:48 -- common/autotest_common.sh@10 -- # set +x 00:12:16.330 03:12:50 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:16.330 03:12:50 -- nvmf/common.sh@291 -- # pci_devs=() 00:12:16.330 03:12:50 -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:16.330 03:12:50 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:16.330 03:12:50 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:16.330 03:12:50 -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:16.330 03:12:50 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:16.330 03:12:50 -- nvmf/common.sh@295 -- # net_devs=() 00:12:16.330 03:12:50 -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:16.330 03:12:50 -- nvmf/common.sh@296 -- # e810=() 00:12:16.330 03:12:50 -- nvmf/common.sh@296 -- # local -ga e810 00:12:16.330 03:12:50 -- nvmf/common.sh@297 -- # x722=() 00:12:16.330 03:12:50 -- nvmf/common.sh@297 -- # local -ga x722 00:12:16.330 03:12:50 -- nvmf/common.sh@298 -- # mlx=() 00:12:16.330 03:12:50 -- nvmf/common.sh@298 -- # local -ga mlx 00:12:16.330 03:12:50 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:16.330 03:12:50 -- 
nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:16.330 03:12:50 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:16.330 03:12:50 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:16.330 03:12:50 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:16.330 03:12:50 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:16.330 03:12:50 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:16.330 03:12:50 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:16.330 03:12:50 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:16.330 03:12:50 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:16.330 03:12:50 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:16.330 03:12:50 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:16.330 03:12:50 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:16.330 03:12:50 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:16.330 03:12:50 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:16.330 03:12:50 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:16.330 03:12:50 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:16.330 03:12:50 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:16.330 03:12:50 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:16.330 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:16.330 03:12:50 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:16.330 03:12:50 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:16.330 03:12:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:16.330 03:12:50 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:16.330 03:12:50 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:16.330 03:12:50 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:16.330 03:12:50 -- 
nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:16.330 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:16.330 03:12:50 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:16.330 03:12:50 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:16.330 03:12:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:16.330 03:12:50 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:16.330 03:12:50 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:16.330 03:12:50 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:16.330 03:12:50 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:16.330 03:12:50 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:16.330 03:12:50 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:16.330 03:12:50 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:16.330 03:12:50 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:16.330 03:12:50 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:16.330 03:12:50 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:16.330 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:16.330 03:12:50 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:16.330 03:12:50 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:16.330 03:12:50 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:16.330 03:12:50 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:16.330 03:12:50 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:16.330 03:12:50 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:16.330 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:16.330 03:12:50 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:16.330 03:12:50 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:12:16.330 03:12:50 -- nvmf/common.sh@403 -- # is_hw=yes 00:12:16.330 03:12:50 -- nvmf/common.sh@405 -- # [[ yes == yes 
]] 00:12:16.330 03:12:50 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:12:16.330 03:12:50 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:12:16.330 03:12:50 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:16.330 03:12:50 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:16.330 03:12:50 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:16.330 03:12:50 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:16.330 03:12:50 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:16.330 03:12:50 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:16.330 03:12:50 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:16.330 03:12:50 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:16.330 03:12:50 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:16.330 03:12:50 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:16.330 03:12:50 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:16.330 03:12:50 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:16.330 03:12:50 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:16.330 03:12:50 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:16.330 03:12:50 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:16.330 03:12:50 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:16.330 03:12:50 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:16.588 03:12:50 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:16.588 03:12:50 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:16.588 03:12:50 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:16.588 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:16.588 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:12:16.588 00:12:16.588 --- 10.0.0.2 ping statistics --- 00:12:16.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:16.588 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:12:16.588 03:12:50 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:16.588 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:16.588 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:12:16.588 00:12:16.588 --- 10.0.0.1 ping statistics --- 00:12:16.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:16.588 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:12:16.588 03:12:50 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:16.588 03:12:50 -- nvmf/common.sh@411 -- # return 0 00:12:16.588 03:12:50 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:16.588 03:12:50 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:16.588 03:12:50 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:12:16.588 03:12:50 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:12:16.588 03:12:50 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:16.588 03:12:50 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:12:16.588 03:12:50 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:12:16.588 03:12:50 -- target/ns_hotplug_stress.sh@14 -- # nvmfappstart -m 0xE 00:12:16.588 03:12:50 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:16.588 03:12:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:16.588 03:12:50 -- common/autotest_common.sh@10 -- # set +x 00:12:16.588 03:12:50 -- nvmf/common.sh@470 -- # nvmfpid=1447757 00:12:16.589 03:12:50 -- nvmf/common.sh@471 -- # waitforlisten 1447757 00:12:16.589 03:12:50 -- common/autotest_common.sh@817 -- # '[' -z 1447757 ']' 00:12:16.589 03:12:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:16.589 03:12:50 -- 
common/autotest_common.sh@822 -- # local max_retries=100 00:12:16.589 03:12:50 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:16.589 03:12:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:16.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:16.589 03:12:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:16.589 03:12:50 -- common/autotest_common.sh@10 -- # set +x 00:12:16.589 [2024-04-25 03:12:50.952080] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:12:16.589 [2024-04-25 03:12:50.952178] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:16.589 EAL: No free 2048 kB hugepages reported on node 1 00:12:16.589 [2024-04-25 03:12:51.021746] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:16.846 [2024-04-25 03:12:51.140587] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:16.846 [2024-04-25 03:12:51.140668] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:16.846 [2024-04-25 03:12:51.140695] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:16.846 [2024-04-25 03:12:51.140717] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:16.846 [2024-04-25 03:12:51.140729] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
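The `nvmf_tcp_init` sequence traced above builds a loopback-free test topology on one host: one port of the NIC pair (`cvl_0_0`) is moved into a network namespace as the target side, the other (`cvl_0_1`) stays in the root namespace as the initiator, addresses 10.0.0.2/10.0.0.1 are assigned, TCP port 4420 is opened, and both directions are ping-verified. A minimal sketch of that flow, assuming the interface, namespace, and address names shown in this log; the helper only builds the command vectors (running them needs root, e.g. `subprocess.run(cmd, check=True)` per entry):

```python
# Sketch of the nvmf_tcp_init flow from the trace: the target-side port is
# moved into a network namespace so initiator and target talk over real TCP
# on a single host. Names/addresses are the ones this log uses.
def nvmf_tcp_init_cmds(target_if="cvl_0_0", init_if="cvl_0_1",
                       ns="cvl_0_0_ns_spdk",
                       init_ip="10.0.0.1", target_ip="10.0.0.2"):
    in_ns = ["ip", "netns", "exec", ns]          # prefix for target-side cmds
    return [
        ["ip", "-4", "addr", "flush", target_if],
        ["ip", "-4", "addr", "flush", init_if],
        ["ip", "netns", "add", ns],
        ["ip", "link", "set", target_if, "netns", ns],
        ["ip", "addr", "add", f"{init_ip}/24", "dev", init_if],
        in_ns + ["ip", "addr", "add", f"{target_ip}/24", "dev", target_if],
        ["ip", "link", "set", init_if, "up"],
        in_ns + ["ip", "link", "set", target_if, "up"],
        in_ns + ["ip", "link", "set", "lo", "up"],
        ["iptables", "-I", "INPUT", "1", "-i", init_if,
         "-p", "tcp", "--dport", "4420", "-j", "ACCEPT"],
        ["ping", "-c", "1", target_ip],           # initiator -> target
        in_ns + ["ping", "-c", "1", init_ip],     # target -> initiator
    ]

cmds = nvmf_tcp_init_cmds()
```

The `nvmf_tgt` process is then launched with the same `ip netns exec cvl_0_0_ns_spdk` prefix, which is why it listens on 10.0.0.2 while the perf initiator connects from the root namespace.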
00:12:16.846 [2024-04-25 03:12:51.140787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:16.846 [2024-04-25 03:12:51.140821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:16.846 [2024-04-25 03:12:51.140825] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:17.414 03:12:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:17.414 03:12:51 -- common/autotest_common.sh@850 -- # return 0 00:12:17.414 03:12:51 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:17.414 03:12:51 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:17.414 03:12:51 -- common/autotest_common.sh@10 -- # set +x 00:12:17.699 03:12:51 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:17.699 03:12:51 -- target/ns_hotplug_stress.sh@16 -- # null_size=1000 00:12:17.699 03:12:51 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:17.699 [2024-04-25 03:12:52.138057] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:17.699 03:12:52 -- target/ns_hotplug_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:17.956 03:12:52 -- target/ns_hotplug_stress.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:18.213 [2024-04-25 03:12:52.628726] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:18.213 03:12:52 -- target/ns_hotplug_stress.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:18.470 03:12:52 -- target/ns_hotplug_stress.sh@23 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:12:18.727 Malloc0 00:12:18.727 03:12:53 -- target/ns_hotplug_stress.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:18.984 Delay0 00:12:18.984 03:12:53 -- target/ns_hotplug_stress.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:19.549 03:12:53 -- target/ns_hotplug_stress.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:12:19.549 NULL1 00:12:19.549 03:12:54 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:19.806 03:12:54 -- target/ns_hotplug_stress.sh@33 -- # PERF_PID=1448189 00:12:19.806 03:12:54 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:12:19.806 03:12:54 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1448189 00:12:19.806 03:12:54 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:19.806 EAL: No free 2048 kB hugepages reported on node 1 00:12:20.064 03:12:54 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:20.321 03:12:54 -- target/ns_hotplug_stress.sh@40 -- # null_size=1001 00:12:20.321 03:12:54 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:12:20.577 true 00:12:20.577 03:12:54 -- 
target/ns_hotplug_stress.sh@35 -- # kill -0 1448189 00:12:20.577 03:12:54 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:20.835 03:12:55 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:21.093 03:12:55 -- target/ns_hotplug_stress.sh@40 -- # null_size=1002 00:12:21.093 03:12:55 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:12:21.351 true 00:12:21.351 03:12:55 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1448189 00:12:21.351 03:12:55 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:22.290 Read completed with error (sct=0, sc=11) 00:12:22.290 03:12:56 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:22.290 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:22.290 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:22.548 03:12:56 -- target/ns_hotplug_stress.sh@40 -- # null_size=1003 00:12:22.548 03:12:56 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:12:22.807 true 00:12:22.807 03:12:57 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1448189 00:12:22.807 03:12:57 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:23.066 03:12:57 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:12:23.325 03:12:57 -- target/ns_hotplug_stress.sh@40 -- # null_size=1004 00:12:23.325 03:12:57 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:12:23.325 true 00:12:23.325 03:12:57 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1448189 00:12:23.325 03:12:57 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:24.705 03:12:58 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:24.705 03:12:59 -- target/ns_hotplug_stress.sh@40 -- # null_size=1005 00:12:24.705 03:12:59 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:12:24.964 true 00:12:24.964 03:12:59 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1448189 00:12:24.964 03:12:59 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:25.224 03:12:59 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:25.497 03:12:59 -- target/ns_hotplug_stress.sh@40 -- # null_size=1006 00:12:25.497 03:12:59 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:12:25.497 true 00:12:25.497 03:12:59 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1448189 00:12:25.497 03:12:59 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:26.438 03:13:00 -- target/ns_hotplug_stress.sh@37 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:26.696 03:13:01 -- target/ns_hotplug_stress.sh@40 -- # null_size=1007 00:12:26.696 03:13:01 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:12:26.954 true 00:12:26.954 03:13:01 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1448189 00:12:26.954 03:13:01 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:27.212 03:13:01 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:27.470 03:13:01 -- target/ns_hotplug_stress.sh@40 -- # null_size=1008 00:12:27.470 03:13:01 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:12:27.729 true 00:12:27.729 03:13:02 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1448189 00:12:27.729 03:13:02 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:28.671 03:13:02 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:28.671 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:28.671 03:13:03 -- target/ns_hotplug_stress.sh@40 -- # null_size=1009 00:12:28.671 03:13:03 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:12:28.929 true 00:12:28.929 03:13:03 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1448189 00:12:28.929 03:13:03 -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:29.187 03:13:03 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:29.445 03:13:03 -- target/ns_hotplug_stress.sh@40 -- # null_size=1010 00:12:29.445 03:13:03 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:12:29.703 true 00:12:29.703 03:13:04 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1448189 00:12:29.704 03:13:04 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:30.643 03:13:04 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:30.643 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:30.643 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:30.901 03:13:05 -- target/ns_hotplug_stress.sh@40 -- # null_size=1011 00:12:30.901 03:13:05 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:12:31.160 true 00:12:31.160 03:13:05 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1448189 00:12:31.160 03:13:05 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:31.418 03:13:05 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:31.676 03:13:05 -- target/ns_hotplug_stress.sh@40 -- # null_size=1012 00:12:31.676 03:13:05 -- target/ns_hotplug_stress.sh@41 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:12:31.935 true 00:12:31.935 03:13:06 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1448189 00:12:31.935 03:13:06 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:32.877 03:13:07 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:33.135 03:13:07 -- target/ns_hotplug_stress.sh@40 -- # null_size=1013 00:12:33.135 03:13:07 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:12:33.393 true 00:12:33.393 03:13:07 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1448189 00:12:33.393 03:13:07 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:33.651 03:13:07 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:33.909 03:13:08 -- target/ns_hotplug_stress.sh@40 -- # null_size=1014 00:12:33.909 03:13:08 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:12:34.167 true 00:12:34.167 03:13:08 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1448189 00:12:34.167 03:13:08 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:34.425 03:13:08 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:34.682 03:13:09 -- target/ns_hotplug_stress.sh@40 -- # null_size=1015 
00:12:34.682 03:13:09 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:12:34.940 true 00:12:34.940 03:13:09 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1448189 00:12:34.940 03:13:09 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:35.873 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:35.873 03:13:10 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:36.135 03:13:10 -- target/ns_hotplug_stress.sh@40 -- # null_size=1016 00:12:36.438 03:13:10 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:12:36.438 true 00:12:36.438 03:13:10 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1448189 00:12:36.438 03:13:10 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:36.697 03:13:11 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:36.956 03:13:11 -- target/ns_hotplug_stress.sh@40 -- # null_size=1017 00:12:36.956 03:13:11 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:12:37.214 true 00:12:37.214 03:13:11 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1448189 00:12:37.214 03:13:11 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:37.471 03:13:11 -- target/ns_hotplug_stress.sh@37 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:37.729 03:13:12 -- target/ns_hotplug_stress.sh@40 -- # null_size=1018 00:12:37.729 03:13:12 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:12:37.988 true 00:12:37.988 03:13:12 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1448189 00:12:37.988 03:13:12 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:39.363 03:13:13 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:39.363 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:39.363 03:13:13 -- target/ns_hotplug_stress.sh@40 -- # null_size=1019 00:12:39.363 03:13:13 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:12:39.621 true 00:12:39.621 03:13:13 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1448189 00:12:39.621 03:13:13 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:39.881 03:13:14 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:40.142 03:13:14 -- target/ns_hotplug_stress.sh@40 -- # null_size=1020 00:12:40.142 03:13:14 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:12:40.142 true 00:12:40.401 03:13:14 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1448189 00:12:40.401 03:13:14 -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:41.335 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:41.335 03:13:15 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:41.335 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:41.335 03:13:15 -- target/ns_hotplug_stress.sh@40 -- # null_size=1021 00:12:41.335 03:13:15 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:12:41.592 true 00:12:41.592 03:13:16 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1448189 00:12:41.592 03:13:16 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:41.849 03:13:16 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:42.107 03:13:16 -- target/ns_hotplug_stress.sh@40 -- # null_size=1022 00:12:42.107 03:13:16 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:12:42.365 true 00:12:42.365 03:13:16 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1448189 00:12:42.365 03:13:16 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:43.300 03:13:17 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:43.557 03:13:17 -- target/ns_hotplug_stress.sh@40 -- # null_size=1023 00:12:43.558 03:13:17 -- target/ns_hotplug_stress.sh@41 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:12:43.816 true 00:12:43.816 03:13:18 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1448189 00:12:43.816 03:13:18 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:43.816 03:13:18 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:44.074 03:13:18 -- target/ns_hotplug_stress.sh@40 -- # null_size=1024 00:12:44.074 03:13:18 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:12:44.333 true 00:12:44.333 03:13:18 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1448189 00:12:44.333 03:13:18 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:45.266 03:13:19 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:45.266 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:45.266 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:45.524 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:45.524 03:13:19 -- target/ns_hotplug_stress.sh@40 -- # null_size=1025 00:12:45.524 03:13:19 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:12:45.782 true 00:12:46.039 03:13:20 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1448189 00:12:46.039 03:13:20 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
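The pattern repeating through the trace above is the hotplug stress loop of `ns_hotplug_stress.sh`: while `spdk_nvme_perf` hammers the target, namespace 1 is removed, the `Delay0` bdev is re-attached, and the `NULL1` bdev is resized one unit larger each pass (`null_size` running 1001 through 1025 in this run). A hedged sketch of that driver loop, using the `rpc.py` path and subsystem NQN from the log; the helper only builds the RPC invocations rather than shelling out:

```python
# Skeleton of the ns_hotplug_stress driver loop seen in this trace.
RPC = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
NQN = "nqn.2016-06.io.spdk:cnode1"

def hotplug_iteration(null_size):
    """One pass of the stress loop: yank namespace 1 out from under live
    perf I/O, re-attach Delay0, then grow the NULL1 bdev by one unit."""
    return [
        [RPC, "nvmf_subsystem_remove_ns", NQN, "1"],
        [RPC, "nvmf_subsystem_add_ns", NQN, "Delay0"],
        [RPC, "bdev_null_resize", "NULL1", str(null_size)],
    ]

# This run walks null_size from 1001 to 1025 while perf runs for 30s.
schedule = [hotplug_iteration(size) for size in range(1001, 1026)]
```

The `kill -0 $PERF_PID` checks interleaved in the log are the loop's liveness guard: each iteration only proceeds while the perf initiator is still running, which is why the loop ends once the 30-second perf run exits.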
00:12:46.297 03:13:20 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:46.297 03:13:20 -- target/ns_hotplug_stress.sh@40 -- # null_size=1026 00:12:46.297 03:13:20 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:12:46.554 true 00:12:46.554 03:13:21 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1448189 00:12:46.554 03:13:21 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:47.488 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:47.488 03:13:21 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:47.747 03:13:22 -- target/ns_hotplug_stress.sh@40 -- # null_size=1027 00:12:47.747 03:13:22 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:12:48.005 true 00:12:48.005 03:13:22 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1448189 00:12:48.005 03:13:22 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:48.263 03:13:22 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:48.521 03:13:22 -- target/ns_hotplug_stress.sh@40 -- # null_size=1028 00:12:48.521 03:13:22 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:12:48.777 true 00:12:48.777 03:13:23 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1448189 00:12:48.777 03:13:23 -- 
target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:49.711 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:49.711 03:13:24 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:49.969 03:13:24 -- target/ns_hotplug_stress.sh@40 -- # null_size=1029 00:12:49.969 03:13:24 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:12:50.227 Initializing NVMe Controllers 00:12:50.227 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:50.227 Controller IO queue size 128, less than required. 00:12:50.227 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:50.227 Controller IO queue size 128, less than required. 00:12:50.227 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:50.227 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:50.227 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:12:50.227 Initialization complete. Launching workers. 
00:12:50.227 ======================================================== 00:12:50.227 Latency(us) 00:12:50.227 Device Information : IOPS MiB/s Average min max 00:12:50.227 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 545.73 0.27 112736.13 3293.83 1012376.06 00:12:50.227 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 9812.34 4.79 13007.66 2625.91 368532.60 00:12:50.227 ======================================================== 00:12:50.227 Total : 10358.07 5.06 18262.01 2625.91 1012376.06 00:12:50.227 00:12:50.227 true 00:12:50.227 03:13:24 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1448189 00:12:50.227 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 35: kill: (1448189) - No such process 00:12:50.227 03:13:24 -- target/ns_hotplug_stress.sh@44 -- # wait 1448189 00:12:50.227 03:13:24 -- target/ns_hotplug_stress.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:12:50.227 03:13:24 -- target/ns_hotplug_stress.sh@48 -- # nvmftestfini 00:12:50.227 03:13:24 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:50.227 03:13:24 -- nvmf/common.sh@117 -- # sync 00:12:50.227 03:13:24 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:50.227 03:13:24 -- nvmf/common.sh@120 -- # set +e 00:12:50.227 03:13:24 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:50.227 03:13:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:50.227 rmmod nvme_tcp 00:12:50.227 rmmod nvme_fabrics 00:12:50.227 rmmod nvme_keyring 00:12:50.486 03:13:24 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:50.486 03:13:24 -- nvmf/common.sh@124 -- # set -e 00:12:50.486 03:13:24 -- nvmf/common.sh@125 -- # return 0 00:12:50.486 03:13:24 -- nvmf/common.sh@478 -- # '[' -n 1447757 ']' 00:12:50.486 03:13:24 -- nvmf/common.sh@479 -- # killprocess 1447757 00:12:50.486 03:13:24 -- common/autotest_common.sh@936 -- # '[' -z 1447757 ']' 00:12:50.486 03:13:24 -- common/autotest_common.sh@940 -- # kill -0 1447757 00:12:50.486 
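The `Total` row in the perf summary above is the IOPS-weighted combination of the two per-namespace rows (the Delay0 namespace contributes few, slow I/Os; the NULL1 namespace many fast ones). A quick check of that arithmetic, with the numbers copied from the table:

```python
# Per-namespace results from the summary table: (IOPS, average latency in us).
nsid1 = (545.73, 112736.13)    # NSID 1 (Delay0-backed, high latency)
nsid2 = (9812.34, 13007.66)    # NSID 2 (NULL1-backed, low latency)

total_iops = nsid1[0] + nsid2[0]
# Overall average latency: each stream's latency weighted by its IOPS share.
avg_us = (nsid1[0] * nsid1[1] + nsid2[0] * nsid2[1]) / total_iops
print(round(total_iops, 2), round(avg_us, 2))
```

The result matches the reported totals (10358.07 IOPS, ~18262 us average) to within rounding of the printed inputs.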
03:13:24 -- common/autotest_common.sh@941 -- # uname 00:12:50.486 03:13:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:50.486 03:13:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1447757 00:12:50.486 03:13:24 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:50.486 03:13:24 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:50.486 03:13:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1447757' 00:12:50.486 killing process with pid 1447757 00:12:50.486 03:13:24 -- common/autotest_common.sh@955 -- # kill 1447757 00:12:50.486 03:13:24 -- common/autotest_common.sh@960 -- # wait 1447757 00:12:50.745 03:13:25 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:50.745 03:13:25 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:50.745 03:13:25 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:50.745 03:13:25 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:50.745 03:13:25 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:50.745 03:13:25 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:50.745 03:13:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:50.745 03:13:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:52.648 03:13:27 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:52.648 00:12:52.648 real 0m38.369s 00:12:52.648 user 2m28.984s 00:12:52.648 sys 0m9.919s 00:12:52.648 03:13:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:52.648 03:13:27 -- common/autotest_common.sh@10 -- # set +x 00:12:52.648 ************************************ 00:12:52.648 END TEST nvmf_ns_hotplug_stress 00:12:52.648 ************************************ 00:12:52.648 03:13:27 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:52.648 03:13:27 -- common/autotest_common.sh@1087 -- # 
'[' 3 -le 1 ']' 00:12:52.648 03:13:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:52.648 03:13:27 -- common/autotest_common.sh@10 -- # set +x 00:12:52.907 ************************************ 00:12:52.908 START TEST nvmf_connect_stress 00:12:52.908 ************************************ 00:12:52.908 03:13:27 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:52.908 * Looking for test storage... 00:12:52.908 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:52.908 03:13:27 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:52.908 03:13:27 -- nvmf/common.sh@7 -- # uname -s 00:12:52.908 03:13:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:52.908 03:13:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:52.908 03:13:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:52.908 03:13:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:52.908 03:13:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:52.908 03:13:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:52.908 03:13:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:52.908 03:13:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:52.908 03:13:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:52.908 03:13:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:52.908 03:13:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:52.908 03:13:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:52.908 03:13:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:52.908 03:13:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:52.908 03:13:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:52.908 
03:13:27 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:52.908 03:13:27 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:52.908 03:13:27 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:52.908 03:13:27 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:52.908 03:13:27 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:52.908 03:13:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.908 03:13:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.908 03:13:27 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.908 03:13:27 -- paths/export.sh@5 -- # export PATH 00:12:52.908 03:13:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.908 03:13:27 -- nvmf/common.sh@47 -- # : 0 00:12:52.908 03:13:27 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:52.908 03:13:27 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:52.908 03:13:27 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:52.908 03:13:27 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:52.908 03:13:27 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:52.908 03:13:27 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:52.908 03:13:27 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:52.908 03:13:27 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:52.908 03:13:27 -- target/connect_stress.sh@12 -- # nvmftestinit 00:12:52.908 03:13:27 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:12:52.908 03:13:27 -- nvmf/common.sh@435 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:12:52.908 03:13:27 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:52.908 03:13:27 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:52.908 03:13:27 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:52.908 03:13:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:52.908 03:13:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:52.908 03:13:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:52.908 03:13:27 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:12:52.908 03:13:27 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:12:52.908 03:13:27 -- nvmf/common.sh@285 -- # xtrace_disable 00:12:52.908 03:13:27 -- common/autotest_common.sh@10 -- # set +x 00:12:54.811 03:13:29 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:54.811 03:13:29 -- nvmf/common.sh@291 -- # pci_devs=() 00:12:54.811 03:13:29 -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:54.811 03:13:29 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:54.811 03:13:29 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:54.811 03:13:29 -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:54.811 03:13:29 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:54.811 03:13:29 -- nvmf/common.sh@295 -- # net_devs=() 00:12:54.811 03:13:29 -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:54.811 03:13:29 -- nvmf/common.sh@296 -- # e810=() 00:12:54.811 03:13:29 -- nvmf/common.sh@296 -- # local -ga e810 00:12:54.811 03:13:29 -- nvmf/common.sh@297 -- # x722=() 00:12:54.811 03:13:29 -- nvmf/common.sh@297 -- # local -ga x722 00:12:54.811 03:13:29 -- nvmf/common.sh@298 -- # mlx=() 00:12:54.811 03:13:29 -- nvmf/common.sh@298 -- # local -ga mlx 00:12:54.811 03:13:29 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:54.811 03:13:29 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:54.811 03:13:29 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:54.811 03:13:29 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:54.811 03:13:29 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:54.811 03:13:29 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:54.811 03:13:29 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:54.811 03:13:29 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:54.811 03:13:29 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:54.811 03:13:29 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:54.811 03:13:29 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:54.811 03:13:29 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:54.811 03:13:29 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:54.811 03:13:29 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:54.811 03:13:29 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:54.811 03:13:29 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:54.811 03:13:29 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:54.811 03:13:29 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:54.811 03:13:29 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:54.811 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:54.811 03:13:29 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:54.811 03:13:29 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:54.811 03:13:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:54.811 03:13:29 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:54.811 03:13:29 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:54.811 03:13:29 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:54.811 03:13:29 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:54.811 Found 0000:0a:00.1 (0x8086 - 
0x159b) 00:12:54.811 03:13:29 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:54.811 03:13:29 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:54.811 03:13:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:54.811 03:13:29 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:54.811 03:13:29 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:54.811 03:13:29 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:54.811 03:13:29 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:54.811 03:13:29 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:54.811 03:13:29 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:54.811 03:13:29 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:54.811 03:13:29 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:54.811 03:13:29 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:54.811 03:13:29 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:54.811 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:54.811 03:13:29 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:54.811 03:13:29 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:54.811 03:13:29 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:54.811 03:13:29 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:54.811 03:13:29 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:54.811 03:13:29 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:54.811 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:54.811 03:13:29 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:54.811 03:13:29 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:12:54.811 03:13:29 -- nvmf/common.sh@403 -- # is_hw=yes 00:12:54.811 03:13:29 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:12:54.811 03:13:29 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:12:54.811 03:13:29 -- 
nvmf/common.sh@407 -- # nvmf_tcp_init 00:12:54.811 03:13:29 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:54.811 03:13:29 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:54.811 03:13:29 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:54.811 03:13:29 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:54.811 03:13:29 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:54.811 03:13:29 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:54.811 03:13:29 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:54.811 03:13:29 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:54.811 03:13:29 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:54.811 03:13:29 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:54.811 03:13:29 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:54.811 03:13:29 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:54.811 03:13:29 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:54.811 03:13:29 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:54.811 03:13:29 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:54.811 03:13:29 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:54.812 03:13:29 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:54.812 03:13:29 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:54.812 03:13:29 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:54.812 03:13:29 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:54.812 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:54.812 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.294 ms 00:12:54.812 00:12:54.812 --- 10.0.0.2 ping statistics --- 00:12:54.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:54.812 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:12:54.812 03:13:29 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:54.812 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:54.812 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:12:54.812 00:12:54.812 --- 10.0.0.1 ping statistics --- 00:12:54.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:54.812 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:12:54.812 03:13:29 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:54.812 03:13:29 -- nvmf/common.sh@411 -- # return 0 00:12:54.812 03:13:29 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:54.812 03:13:29 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:54.812 03:13:29 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:12:54.812 03:13:29 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:12:54.812 03:13:29 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:54.812 03:13:29 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:12:54.812 03:13:29 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:12:54.812 03:13:29 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:12:54.812 03:13:29 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:54.812 03:13:29 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:54.812 03:13:29 -- common/autotest_common.sh@10 -- # set +x 00:12:54.812 03:13:29 -- nvmf/common.sh@470 -- # nvmfpid=1453766 00:12:54.812 03:13:29 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:54.812 03:13:29 -- nvmf/common.sh@471 -- # waitforlisten 1453766 00:12:54.812 03:13:29 -- 
common/autotest_common.sh@817 -- # '[' -z 1453766 ']' 00:12:54.812 03:13:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:54.812 03:13:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:54.812 03:13:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:54.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:54.812 03:13:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:54.812 03:13:29 -- common/autotest_common.sh@10 -- # set +x 00:12:55.070 [2024-04-25 03:13:29.342912] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:12:55.070 [2024-04-25 03:13:29.343005] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:55.071 EAL: No free 2048 kB hugepages reported on node 1 00:12:55.071 [2024-04-25 03:13:29.406757] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:55.071 [2024-04-25 03:13:29.515263] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:55.071 [2024-04-25 03:13:29.515317] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:55.071 [2024-04-25 03:13:29.515330] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:55.071 [2024-04-25 03:13:29.515348] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:55.071 [2024-04-25 03:13:29.515358] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:55.071 [2024-04-25 03:13:29.515447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:55.071 [2024-04-25 03:13:29.515512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:55.071 [2024-04-25 03:13:29.515515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:55.330 03:13:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:55.330 03:13:29 -- common/autotest_common.sh@850 -- # return 0 00:12:55.330 03:13:29 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:55.330 03:13:29 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:55.330 03:13:29 -- common/autotest_common.sh@10 -- # set +x 00:12:55.330 03:13:29 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:55.330 03:13:29 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:55.330 03:13:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:55.330 03:13:29 -- common/autotest_common.sh@10 -- # set +x 00:12:55.330 [2024-04-25 03:13:29.657805] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:55.330 03:13:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:55.330 03:13:29 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:55.330 03:13:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:55.330 03:13:29 -- common/autotest_common.sh@10 -- # set +x 00:12:55.330 03:13:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:55.330 03:13:29 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:55.330 03:13:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:55.330 03:13:29 -- common/autotest_common.sh@10 -- # set +x 00:12:55.330 [2024-04-25 03:13:29.692804] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:12:55.330 03:13:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:55.330 03:13:29 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:55.330 03:13:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:55.330 03:13:29 -- common/autotest_common.sh@10 -- # set +x 00:12:55.330 NULL1 00:12:55.330 03:13:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:55.330 03:13:29 -- target/connect_stress.sh@21 -- # PERF_PID=1453800 00:12:55.330 03:13:29 -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:55.330 03:13:29 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:55.330 03:13:29 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:12:55.330 03:13:29 -- target/connect_stress.sh@27 -- # seq 1 20 00:12:55.330 03:13:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:55.330 03:13:29 -- target/connect_stress.sh@28 -- # cat 00:12:55.330 03:13:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:55.330 03:13:29 -- target/connect_stress.sh@28 -- # cat 00:12:55.330 03:13:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:55.330 03:13:29 -- target/connect_stress.sh@28 -- # cat 00:12:55.330 03:13:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:55.330 03:13:29 -- target/connect_stress.sh@28 -- # cat 00:12:55.330 03:13:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:55.330 03:13:29 -- target/connect_stress.sh@28 -- # cat 00:12:55.330 03:13:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:55.330 03:13:29 -- target/connect_stress.sh@28 -- # cat 00:12:55.330 03:13:29 -- target/connect_stress.sh@27 -- 
# for i in $(seq 1 20) 00:12:55.330 03:13:29 -- target/connect_stress.sh@28 -- # cat 00:12:55.330 03:13:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:55.330 03:13:29 -- target/connect_stress.sh@28 -- # cat 00:12:55.330 03:13:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:55.330 03:13:29 -- target/connect_stress.sh@28 -- # cat 00:12:55.330 03:13:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:55.330 03:13:29 -- target/connect_stress.sh@28 -- # cat 00:12:55.330 03:13:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:55.330 03:13:29 -- target/connect_stress.sh@28 -- # cat 00:12:55.330 03:13:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:55.330 03:13:29 -- target/connect_stress.sh@28 -- # cat 00:12:55.330 EAL: No free 2048 kB hugepages reported on node 1 00:12:55.330 03:13:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:55.330 03:13:29 -- target/connect_stress.sh@28 -- # cat 00:12:55.330 03:13:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:55.330 03:13:29 -- target/connect_stress.sh@28 -- # cat 00:12:55.330 03:13:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:55.330 03:13:29 -- target/connect_stress.sh@28 -- # cat 00:12:55.330 03:13:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:55.330 03:13:29 -- target/connect_stress.sh@28 -- # cat 00:12:55.330 03:13:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:55.330 03:13:29 -- target/connect_stress.sh@28 -- # cat 00:12:55.330 03:13:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:55.330 03:13:29 -- target/connect_stress.sh@28 -- # cat 00:12:55.330 03:13:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:55.330 03:13:29 -- target/connect_stress.sh@28 -- # cat 00:12:55.330 03:13:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:55.330 03:13:29 -- target/connect_stress.sh@28 -- # cat 00:12:55.330 
03:13:29 -- target/connect_stress.sh@34 -- # kill -0 1453800 00:12:55.330 03:13:29 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:55.330 03:13:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:55.330 03:13:29 -- common/autotest_common.sh@10 -- # set +x 00:12:55.588 03:13:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:55.588 03:13:30 -- target/connect_stress.sh@34 -- # kill -0 1453800 00:12:55.588 03:13:30 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:55.588 03:13:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:55.588 03:13:30 -- common/autotest_common.sh@10 -- # set +x 00:12:56.153 03:13:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:56.153 03:13:30 -- target/connect_stress.sh@34 -- # kill -0 1453800 00:12:56.153 03:13:30 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:56.153 03:13:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:56.153 03:13:30 -- common/autotest_common.sh@10 -- # set +x 00:12:56.410 03:13:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:56.410 03:13:30 -- target/connect_stress.sh@34 -- # kill -0 1453800 00:12:56.410 03:13:30 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:56.410 03:13:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:56.410 03:13:30 -- common/autotest_common.sh@10 -- # set +x 00:12:56.668 03:13:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:56.668 03:13:31 -- target/connect_stress.sh@34 -- # kill -0 1453800 00:12:56.668 03:13:31 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:56.668 03:13:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:56.668 03:13:31 -- common/autotest_common.sh@10 -- # set +x 00:12:56.925 03:13:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:56.925 03:13:31 -- target/connect_stress.sh@34 -- # kill -0 1453800 00:12:56.925 03:13:31 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:56.925 03:13:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:56.925 03:13:31 -- 
common/autotest_common.sh@10 -- # set +x 00:12:57.490 03:13:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:57.490 03:13:31 -- target/connect_stress.sh@34 -- # kill -0 1453800 00:12:57.490 03:13:31 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:57.490 03:13:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:57.490 03:13:31 -- common/autotest_common.sh@10 -- # set +x 00:12:57.747 03:13:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:57.747 03:13:32 -- target/connect_stress.sh@34 -- # kill -0 1453800 00:12:57.747 03:13:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:57.748 03:13:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:57.748 03:13:32 -- common/autotest_common.sh@10 -- # set +x 00:12:58.005 03:13:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:58.005 03:13:32 -- target/connect_stress.sh@34 -- # kill -0 1453800 00:12:58.005 03:13:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:58.005 03:13:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:58.005 03:13:32 -- common/autotest_common.sh@10 -- # set +x 00:12:58.285 03:13:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:58.285 03:13:32 -- target/connect_stress.sh@34 -- # kill -0 1453800 00:12:58.285 03:13:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:58.285 03:13:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:58.285 03:13:32 -- common/autotest_common.sh@10 -- # set +x 00:12:58.543 03:13:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:58.543 03:13:32 -- target/connect_stress.sh@34 -- # kill -0 1453800 00:12:58.543 03:13:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:58.543 03:13:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:58.543 03:13:32 -- common/autotest_common.sh@10 -- # set +x 00:12:58.801 03:13:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:58.801 03:13:33 -- target/connect_stress.sh@34 -- # kill -0 1453800 00:12:58.801 03:13:33 -- 
target/connect_stress.sh@35 -- # rpc_cmd 00:12:58.801 03:13:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:58.801 03:13:33 -- common/autotest_common.sh@10 -- # set +x 00:12:59.365 03:13:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:59.365 03:13:33 -- target/connect_stress.sh@34 -- # kill -0 1453800 00:12:59.365 03:13:33 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:59.365 03:13:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:59.365 03:13:33 -- common/autotest_common.sh@10 -- # set +x 00:12:59.626 03:13:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:59.626 03:13:33 -- target/connect_stress.sh@34 -- # kill -0 1453800 00:12:59.626 03:13:33 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:59.626 03:13:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:59.626 03:13:33 -- common/autotest_common.sh@10 -- # set +x 00:12:59.883 03:13:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:59.883 03:13:34 -- target/connect_stress.sh@34 -- # kill -0 1453800 00:12:59.883 03:13:34 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:59.883 03:13:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:59.883 03:13:34 -- common/autotest_common.sh@10 -- # set +x 00:13:00.140 03:13:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:00.141 03:13:34 -- target/connect_stress.sh@34 -- # kill -0 1453800 00:13:00.141 03:13:34 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:00.141 03:13:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:00.141 03:13:34 -- common/autotest_common.sh@10 -- # set +x 00:13:00.398 03:13:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:00.399 03:13:34 -- target/connect_stress.sh@34 -- # kill -0 1453800 00:13:00.399 03:13:34 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:00.399 03:13:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:00.399 03:13:34 -- common/autotest_common.sh@10 -- # set +x 00:13:00.965 03:13:35 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:00.965 03:13:35 -- target/connect_stress.sh@34 -- # kill -0 1453800 00:13:00.965 03:13:35 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:00.965 03:13:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:00.965 03:13:35 -- common/autotest_common.sh@10 -- # set +x 00:13:01.223 03:13:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:01.223 03:13:35 -- target/connect_stress.sh@34 -- # kill -0 1453800 00:13:01.223 03:13:35 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:01.223 03:13:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:01.223 03:13:35 -- common/autotest_common.sh@10 -- # set +x 00:13:01.481 03:13:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:01.481 03:13:35 -- target/connect_stress.sh@34 -- # kill -0 1453800 00:13:01.481 03:13:35 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:01.481 03:13:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:01.481 03:13:35 -- common/autotest_common.sh@10 -- # set +x 00:13:01.739 03:13:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:01.739 03:13:36 -- target/connect_stress.sh@34 -- # kill -0 1453800 00:13:01.739 03:13:36 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:01.739 03:13:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:01.739 03:13:36 -- common/autotest_common.sh@10 -- # set +x 00:13:02.305 03:13:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:02.305 03:13:36 -- target/connect_stress.sh@34 -- # kill -0 1453800 00:13:02.305 03:13:36 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:02.305 03:13:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:02.305 03:13:36 -- common/autotest_common.sh@10 -- # set +x 00:13:02.563 03:13:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:02.563 03:13:36 -- target/connect_stress.sh@34 -- # kill -0 1453800 00:13:02.563 03:13:36 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:02.563 03:13:36 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:13:02.563 03:13:36 -- common/autotest_common.sh@10 -- # set +x 00:13:02.821 03:13:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:02.821 03:13:37 -- target/connect_stress.sh@34 -- # kill -0 1453800 00:13:02.821 03:13:37 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:02.821 03:13:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:02.821 03:13:37 -- common/autotest_common.sh@10 -- # set +x 00:13:03.079 03:13:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:03.079 03:13:37 -- target/connect_stress.sh@34 -- # kill -0 1453800 00:13:03.079 03:13:37 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:03.079 03:13:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:03.079 03:13:37 -- common/autotest_common.sh@10 -- # set +x 00:13:03.337 03:13:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:03.337 03:13:37 -- target/connect_stress.sh@34 -- # kill -0 1453800 00:13:03.337 03:13:37 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:03.337 03:13:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:03.337 03:13:37 -- common/autotest_common.sh@10 -- # set +x 00:13:03.903 03:13:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:03.903 03:13:38 -- target/connect_stress.sh@34 -- # kill -0 1453800 00:13:03.903 03:13:38 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:03.903 03:13:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:03.903 03:13:38 -- common/autotest_common.sh@10 -- # set +x 00:13:04.160 03:13:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:04.160 03:13:38 -- target/connect_stress.sh@34 -- # kill -0 1453800 00:13:04.160 03:13:38 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:04.160 03:13:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:04.160 03:13:38 -- common/autotest_common.sh@10 -- # set +x 00:13:04.418 03:13:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:04.418 03:13:38 -- 
target/connect_stress.sh@34 -- # kill -0 1453800 00:13:04.418 03:13:38 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:04.418 03:13:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:04.418 03:13:38 -- common/autotest_common.sh@10 -- # set +x 00:13:04.676 03:13:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:04.676 03:13:39 -- target/connect_stress.sh@34 -- # kill -0 1453800 00:13:04.676 03:13:39 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:04.676 03:13:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:04.676 03:13:39 -- common/autotest_common.sh@10 -- # set +x 00:13:04.933 03:13:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:04.933 03:13:39 -- target/connect_stress.sh@34 -- # kill -0 1453800 00:13:04.934 03:13:39 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:04.934 03:13:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:04.934 03:13:39 -- common/autotest_common.sh@10 -- # set +x 00:13:05.499 03:13:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:05.499 03:13:39 -- target/connect_stress.sh@34 -- # kill -0 1453800 00:13:05.499 03:13:39 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:05.499 03:13:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:05.499 03:13:39 -- common/autotest_common.sh@10 -- # set +x 00:13:05.499 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:05.757 03:13:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:05.757 03:13:40 -- target/connect_stress.sh@34 -- # kill -0 1453800 00:13:05.758 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1453800) - No such process 00:13:05.758 03:13:40 -- target/connect_stress.sh@38 -- # wait 1453800 00:13:05.758 03:13:40 -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:05.758 03:13:40 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 
00:13:05.758 03:13:40 -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:05.758 03:13:40 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:05.758 03:13:40 -- nvmf/common.sh@117 -- # sync 00:13:05.758 03:13:40 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:05.758 03:13:40 -- nvmf/common.sh@120 -- # set +e 00:13:05.758 03:13:40 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:05.758 03:13:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:05.758 rmmod nvme_tcp 00:13:05.758 rmmod nvme_fabrics 00:13:05.758 rmmod nvme_keyring 00:13:05.758 03:13:40 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:05.758 03:13:40 -- nvmf/common.sh@124 -- # set -e 00:13:05.758 03:13:40 -- nvmf/common.sh@125 -- # return 0 00:13:05.758 03:13:40 -- nvmf/common.sh@478 -- # '[' -n 1453766 ']' 00:13:05.758 03:13:40 -- nvmf/common.sh@479 -- # killprocess 1453766 00:13:05.758 03:13:40 -- common/autotest_common.sh@936 -- # '[' -z 1453766 ']' 00:13:05.758 03:13:40 -- common/autotest_common.sh@940 -- # kill -0 1453766 00:13:05.758 03:13:40 -- common/autotest_common.sh@941 -- # uname 00:13:05.758 03:13:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:05.758 03:13:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1453766 00:13:05.758 03:13:40 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:05.758 03:13:40 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:05.758 03:13:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1453766' 00:13:05.758 killing process with pid 1453766 00:13:05.758 03:13:40 -- common/autotest_common.sh@955 -- # kill 1453766 00:13:05.758 03:13:40 -- common/autotest_common.sh@960 -- # wait 1453766 00:13:06.015 03:13:40 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:06.015 03:13:40 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:06.015 03:13:40 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:06.015 03:13:40 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]] 00:13:06.015 03:13:40 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:06.015 03:13:40 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:06.015 03:13:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:06.015 03:13:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:08.548 03:13:42 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:08.548 00:13:08.548 real 0m15.258s 00:13:08.548 user 0m38.147s 00:13:08.548 sys 0m6.034s 00:13:08.548 03:13:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:08.548 03:13:42 -- common/autotest_common.sh@10 -- # set +x 00:13:08.548 ************************************ 00:13:08.548 END TEST nvmf_connect_stress 00:13:08.548 ************************************ 00:13:08.548 03:13:42 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:08.548 03:13:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:08.548 03:13:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:08.548 03:13:42 -- common/autotest_common.sh@10 -- # set +x 00:13:08.548 ************************************ 00:13:08.548 START TEST nvmf_fused_ordering 00:13:08.548 ************************************ 00:13:08.548 03:13:42 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:08.548 * Looking for test storage... 
00:13:08.548 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:08.548 03:13:42 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:08.548 03:13:42 -- nvmf/common.sh@7 -- # uname -s 00:13:08.548 03:13:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:08.548 03:13:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:08.548 03:13:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:08.548 03:13:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:08.548 03:13:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:08.548 03:13:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:08.548 03:13:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:08.548 03:13:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:08.548 03:13:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:08.548 03:13:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:08.548 03:13:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:08.548 03:13:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:08.548 03:13:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:08.548 03:13:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:08.548 03:13:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:08.548 03:13:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:08.548 03:13:42 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:08.548 03:13:42 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:08.548 03:13:42 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:08.548 03:13:42 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:08.548 03:13:42 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.548 03:13:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.548 03:13:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.548 03:13:42 -- paths/export.sh@5 -- # export PATH 00:13:08.548 03:13:42 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.548 03:13:42 -- nvmf/common.sh@47 -- # : 0 00:13:08.548 03:13:42 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:08.548 03:13:42 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:08.548 03:13:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:08.548 03:13:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:08.548 03:13:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:08.548 03:13:42 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:08.548 03:13:42 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:08.548 03:13:42 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:08.548 03:13:42 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:08.548 03:13:42 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:08.548 03:13:42 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:08.548 03:13:42 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:08.548 03:13:42 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:08.548 03:13:42 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:08.548 03:13:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:08.548 03:13:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:08.548 03:13:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:08.548 03:13:42 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:08.548 03:13:42 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:08.548 03:13:42 -- 
nvmf/common.sh@285 -- # xtrace_disable 00:13:08.548 03:13:42 -- common/autotest_common.sh@10 -- # set +x 00:13:10.447 03:13:44 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:10.447 03:13:44 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:10.447 03:13:44 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:10.447 03:13:44 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:10.447 03:13:44 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:10.447 03:13:44 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:10.447 03:13:44 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:10.447 03:13:44 -- nvmf/common.sh@295 -- # net_devs=() 00:13:10.447 03:13:44 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:10.447 03:13:44 -- nvmf/common.sh@296 -- # e810=() 00:13:10.447 03:13:44 -- nvmf/common.sh@296 -- # local -ga e810 00:13:10.447 03:13:44 -- nvmf/common.sh@297 -- # x722=() 00:13:10.447 03:13:44 -- nvmf/common.sh@297 -- # local -ga x722 00:13:10.447 03:13:44 -- nvmf/common.sh@298 -- # mlx=() 00:13:10.447 03:13:44 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:10.447 03:13:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:10.447 03:13:44 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:10.447 03:13:44 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:10.447 03:13:44 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:10.447 03:13:44 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:10.447 03:13:44 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:10.447 03:13:44 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:10.447 03:13:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:10.447 03:13:44 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:10.447 03:13:44 -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:10.447 03:13:44 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:10.447 03:13:44 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:10.447 03:13:44 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:10.447 03:13:44 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:10.447 03:13:44 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:10.447 03:13:44 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:10.447 03:13:44 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:10.447 03:13:44 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:10.447 03:13:44 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:10.447 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:10.447 03:13:44 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:10.447 03:13:44 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:10.447 03:13:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:10.447 03:13:44 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:10.447 03:13:44 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:10.447 03:13:44 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:10.447 03:13:44 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:10.447 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:10.447 03:13:44 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:10.447 03:13:44 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:10.447 03:13:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:10.447 03:13:44 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:10.447 03:13:44 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:10.447 03:13:44 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:10.447 03:13:44 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:10.447 03:13:44 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:10.447 03:13:44 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:13:10.447 03:13:44 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:10.447 03:13:44 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:10.447 03:13:44 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:10.447 03:13:44 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:10.447 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:10.447 03:13:44 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:10.447 03:13:44 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:10.447 03:13:44 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:10.447 03:13:44 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:10.447 03:13:44 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:10.447 03:13:44 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:10.447 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:10.447 03:13:44 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:10.447 03:13:44 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:10.448 03:13:44 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:10.448 03:13:44 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:10.448 03:13:44 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:10.448 03:13:44 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:10.448 03:13:44 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:10.448 03:13:44 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:10.448 03:13:44 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:10.448 03:13:44 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:10.448 03:13:44 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:10.448 03:13:44 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:10.448 03:13:44 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:10.448 03:13:44 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:13:10.448 03:13:44 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:10.448 03:13:44 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:10.448 03:13:44 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:10.448 03:13:44 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:10.448 03:13:44 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:10.448 03:13:44 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:10.448 03:13:44 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:10.448 03:13:44 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:10.448 03:13:44 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:10.448 03:13:44 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:10.448 03:13:44 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:10.448 03:13:44 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:10.448 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:10.448 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:13:10.448 00:13:10.448 --- 10.0.0.2 ping statistics --- 00:13:10.448 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:10.448 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:13:10.448 03:13:44 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:10.448 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:10.448 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:13:10.448 00:13:10.448 --- 10.0.0.1 ping statistics --- 00:13:10.448 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:10.448 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:13:10.448 03:13:44 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:10.448 03:13:44 -- nvmf/common.sh@411 -- # return 0 00:13:10.448 03:13:44 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:10.448 03:13:44 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:10.448 03:13:44 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:10.448 03:13:44 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:10.448 03:13:44 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:10.448 03:13:44 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:10.448 03:13:44 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:10.448 03:13:44 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:10.448 03:13:44 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:10.448 03:13:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:10.448 03:13:44 -- common/autotest_common.sh@10 -- # set +x 00:13:10.448 03:13:44 -- nvmf/common.sh@470 -- # nvmfpid=1457062 00:13:10.448 03:13:44 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:10.448 03:13:44 -- nvmf/common.sh@471 -- # waitforlisten 1457062 00:13:10.448 03:13:44 -- common/autotest_common.sh@817 -- # '[' -z 1457062 ']' 00:13:10.448 03:13:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:10.448 03:13:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:10.448 03:13:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:10.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:10.448 03:13:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:10.448 03:13:44 -- common/autotest_common.sh@10 -- # set +x 00:13:10.448 [2024-04-25 03:13:44.741526] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:13:10.448 [2024-04-25 03:13:44.741611] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:10.448 EAL: No free 2048 kB hugepages reported on node 1 00:13:10.448 [2024-04-25 03:13:44.821746] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:10.707 [2024-04-25 03:13:44.950128] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:10.707 [2024-04-25 03:13:44.950191] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:10.707 [2024-04-25 03:13:44.950230] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:10.707 [2024-04-25 03:13:44.950252] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:10.707 [2024-04-25 03:13:44.950278] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:10.707 [2024-04-25 03:13:44.950325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:10.707 03:13:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:10.707 03:13:45 -- common/autotest_common.sh@850 -- # return 0 00:13:10.707 03:13:45 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:10.707 03:13:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:10.707 03:13:45 -- common/autotest_common.sh@10 -- # set +x 00:13:10.707 03:13:45 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:10.707 03:13:45 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:10.707 03:13:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:10.707 03:13:45 -- common/autotest_common.sh@10 -- # set +x 00:13:10.707 [2024-04-25 03:13:45.095953] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:10.707 03:13:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:10.707 03:13:45 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:10.707 03:13:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:10.707 03:13:45 -- common/autotest_common.sh@10 -- # set +x 00:13:10.707 03:13:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:10.707 03:13:45 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:10.707 03:13:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:10.707 03:13:45 -- common/autotest_common.sh@10 -- # set +x 00:13:10.707 [2024-04-25 03:13:45.112159] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:10.707 03:13:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:10.707 03:13:45 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:10.707 03:13:45 
-- common/autotest_common.sh@549 -- # xtrace_disable 00:13:10.707 03:13:45 -- common/autotest_common.sh@10 -- # set +x 00:13:10.707 NULL1 00:13:10.707 03:13:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:10.707 03:13:45 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:10.707 03:13:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:10.707 03:13:45 -- common/autotest_common.sh@10 -- # set +x 00:13:10.707 03:13:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:10.707 03:13:45 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:10.707 03:13:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:10.707 03:13:45 -- common/autotest_common.sh@10 -- # set +x 00:13:10.707 03:13:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:10.708 03:13:45 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:10.708 [2024-04-25 03:13:45.157268] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 
00:13:10.708 [2024-04-25 03:13:45.157310] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1457089 ] 00:13:10.708 EAL: No free 2048 kB hugepages reported on node 1 00:13:11.674 Attached to nqn.2016-06.io.spdk:cnode1 00:13:11.674 Namespace ID: 1 size: 1GB 00:13:11.674 fused_ordering(0) 00:13:11.674 fused_ordering(1) 00:13:11.674 fused_ordering(2) 00:13:11.674 fused_ordering(3) 00:13:11.674 fused_ordering(4) 00:13:11.674 fused_ordering(5) 00:13:11.674 fused_ordering(6) 00:13:11.674 fused_ordering(7) 00:13:11.674 fused_ordering(8) 00:13:11.674 fused_ordering(9) 00:13:11.674 fused_ordering(10) 00:13:11.674 fused_ordering(11) 00:13:11.674 fused_ordering(12) 00:13:11.674 fused_ordering(13) 00:13:11.674 fused_ordering(14) 00:13:11.674 fused_ordering(15) 00:13:11.674 fused_ordering(16) 00:13:11.674 fused_ordering(17) 00:13:11.674 fused_ordering(18) 00:13:11.674 fused_ordering(19) 00:13:11.674 fused_ordering(20) 00:13:11.674 fused_ordering(21) 00:13:11.674 fused_ordering(22) 00:13:11.674 fused_ordering(23) 00:13:11.674 fused_ordering(24) 00:13:11.674 fused_ordering(25) 00:13:11.674 fused_ordering(26) 00:13:11.674 fused_ordering(27) 00:13:11.674 fused_ordering(28) 00:13:11.674 fused_ordering(29) 00:13:11.674 fused_ordering(30) 00:13:11.674 fused_ordering(31) 00:13:11.674 fused_ordering(32) 00:13:11.674 fused_ordering(33) 00:13:11.674 fused_ordering(34) 00:13:11.674 fused_ordering(35) 00:13:11.674 fused_ordering(36) 00:13:11.674 fused_ordering(37) 00:13:11.674 fused_ordering(38) 00:13:11.674 fused_ordering(39) 00:13:11.674 fused_ordering(40) 00:13:11.674 fused_ordering(41) 00:13:11.674 fused_ordering(42) 00:13:11.674 fused_ordering(43) 00:13:11.674 fused_ordering(44) 00:13:11.674 fused_ordering(45) 00:13:11.674 fused_ordering(46) 00:13:11.674 fused_ordering(47) 00:13:11.674 fused_ordering(48) 
00:13:11.674 fused_ordering(49) 00:13:11.674 fused_ordering(50) 00:13:11.674 fused_ordering(51) 00:13:11.674 fused_ordering(52) 00:13:11.674 fused_ordering(53) 00:13:11.674 fused_ordering(54) 00:13:11.674 fused_ordering(55) 00:13:11.674 fused_ordering(56) 00:13:11.674 fused_ordering(57) 00:13:11.674 fused_ordering(58) 00:13:11.674 fused_ordering(59) 00:13:11.674 fused_ordering(60) 00:13:11.674 fused_ordering(61) 00:13:11.674 fused_ordering(62) 00:13:11.674 fused_ordering(63) 00:13:11.674 fused_ordering(64) 00:13:11.674 fused_ordering(65) 00:13:11.674 fused_ordering(66) 00:13:11.674 fused_ordering(67) 00:13:11.674 fused_ordering(68) 00:13:11.674 fused_ordering(69) 00:13:11.674 fused_ordering(70) 00:13:11.674 fused_ordering(71) 00:13:11.674 fused_ordering(72) 00:13:11.674 fused_ordering(73) 00:13:11.674 fused_ordering(74) 00:13:11.674 fused_ordering(75) 00:13:11.674 fused_ordering(76) 00:13:11.674 fused_ordering(77) 00:13:11.674 fused_ordering(78) 00:13:11.674 fused_ordering(79) 00:13:11.674 fused_ordering(80) 00:13:11.674 fused_ordering(81) 00:13:11.674 fused_ordering(82) 00:13:11.674 fused_ordering(83) 00:13:11.674 fused_ordering(84) 00:13:11.674 fused_ordering(85) 00:13:11.674 fused_ordering(86) 00:13:11.674 fused_ordering(87) 00:13:11.674 fused_ordering(88) 00:13:11.674 fused_ordering(89) 00:13:11.674 fused_ordering(90) 00:13:11.674 fused_ordering(91) 00:13:11.674 fused_ordering(92) 00:13:11.674 fused_ordering(93) 00:13:11.674 fused_ordering(94) 00:13:11.674 fused_ordering(95) 00:13:11.674 fused_ordering(96) 00:13:11.674 fused_ordering(97) 00:13:11.674 fused_ordering(98) 00:13:11.674 fused_ordering(99) 00:13:11.674 fused_ordering(100) 00:13:11.674 fused_ordering(101) 00:13:11.674 fused_ordering(102) 00:13:11.674 fused_ordering(103) 00:13:11.674 fused_ordering(104) 00:13:11.674 fused_ordering(105) 00:13:11.674 fused_ordering(106) 00:13:11.674 fused_ordering(107) 00:13:11.674 fused_ordering(108) 00:13:11.674 fused_ordering(109) 00:13:11.674 fused_ordering(110) 
00:13:11.674 fused_ordering(111) 00:13:11.674 fused_ordering(112) 00:13:11.674 fused_ordering(113) 00:13:11.674 fused_ordering(114) 00:13:11.674 fused_ordering(115) 00:13:11.674 fused_ordering(116) 00:13:11.674 fused_ordering(117) 00:13:11.674 fused_ordering(118) 00:13:11.674 fused_ordering(119) 00:13:11.674 fused_ordering(120) 00:13:11.674 fused_ordering(121) 00:13:11.674 fused_ordering(122) 00:13:11.674 fused_ordering(123) 00:13:11.674 fused_ordering(124) 00:13:11.674 fused_ordering(125) 00:13:11.674 fused_ordering(126) 00:13:11.674 fused_ordering(127) 00:13:11.674 fused_ordering(128) 00:13:11.674 fused_ordering(129) 00:13:11.674 fused_ordering(130) 00:13:11.674 fused_ordering(131) 00:13:11.674 fused_ordering(132) 00:13:11.674 fused_ordering(133) 00:13:11.674 fused_ordering(134) 00:13:11.674 fused_ordering(135) 00:13:11.674 fused_ordering(136) 00:13:11.674 fused_ordering(137) 00:13:11.674 fused_ordering(138) 00:13:11.674 fused_ordering(139) 00:13:11.674 fused_ordering(140) 00:13:11.674 fused_ordering(141) 00:13:11.674 fused_ordering(142) 00:13:11.674 fused_ordering(143) 00:13:11.674 fused_ordering(144) 00:13:11.674 fused_ordering(145) 00:13:11.674 fused_ordering(146) 00:13:11.674 fused_ordering(147) 00:13:11.674 fused_ordering(148) 00:13:11.674 fused_ordering(149) 00:13:11.674 fused_ordering(150) 00:13:11.674 fused_ordering(151) 00:13:11.674 fused_ordering(152) 00:13:11.674 fused_ordering(153) 00:13:11.674 fused_ordering(154) 00:13:11.674 fused_ordering(155) 00:13:11.674 fused_ordering(156) 00:13:11.674 fused_ordering(157) 00:13:11.674 fused_ordering(158) 00:13:11.674 fused_ordering(159) 00:13:11.674 fused_ordering(160) 00:13:11.674 fused_ordering(161) 00:13:11.674 fused_ordering(162) 00:13:11.674 fused_ordering(163) 00:13:11.674 fused_ordering(164) 00:13:11.674 fused_ordering(165) 00:13:11.674 fused_ordering(166) 00:13:11.674 fused_ordering(167) 00:13:11.674 fused_ordering(168) 00:13:11.674 fused_ordering(169) 00:13:11.674 fused_ordering(170) 00:13:11.674 
fused_ordering(171) … fused_ordering(204) 00:13:11.674
fused_ordering(205) … fused_ordering(409) 00:13:12.241
fused_ordering(410) … fused_ordering(614) 00:13:13.175
fused_ordering(615) … fused_ordering(819) 00:13:14.110
fused_ordering(820) … fused_ordering(1017) 00:13:15.045
00:13:15.046 fused_ordering(1018) 00:13:15.046 fused_ordering(1019) 00:13:15.046 fused_ordering(1020) 00:13:15.046 fused_ordering(1021) 00:13:15.046 fused_ordering(1022) 00:13:15.046 fused_ordering(1023) 00:13:15.046 03:13:49 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:15.046 03:13:49 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:15.046 03:13:49 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:15.046 03:13:49 -- nvmf/common.sh@117 -- # sync 00:13:15.046 03:13:49 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:15.046 03:13:49 -- nvmf/common.sh@120 -- # set +e 00:13:15.046 03:13:49 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:15.046 03:13:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:15.046 rmmod nvme_tcp 00:13:15.046 rmmod nvme_fabrics 00:13:15.046 rmmod nvme_keyring 00:13:15.046 03:13:49 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:15.046 03:13:49 -- nvmf/common.sh@124 -- # set -e 00:13:15.046 03:13:49 -- nvmf/common.sh@125 -- # return 0 00:13:15.046 03:13:49 -- nvmf/common.sh@478 -- # '[' -n 1457062 ']' 00:13:15.046 03:13:49 -- nvmf/common.sh@479 -- # killprocess 1457062 00:13:15.046 03:13:49 -- common/autotest_common.sh@936 -- # '[' -z 1457062 ']' 00:13:15.046 03:13:49 -- common/autotest_common.sh@940 -- # kill -0 1457062 00:13:15.046 03:13:49 -- common/autotest_common.sh@941 -- # uname 00:13:15.046 03:13:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:15.046 03:13:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1457062 00:13:15.046 03:13:49 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:15.046 03:13:49 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:15.046 03:13:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1457062' 00:13:15.046 killing process with pid 1457062 00:13:15.046 03:13:49 -- common/autotest_common.sh@955 -- # kill 1457062 00:13:15.046 03:13:49 -- common/autotest_common.sh@960 -- 
# wait 1457062 00:13:15.305 03:13:49 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:15.305 03:13:49 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:15.305 03:13:49 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:15.305 03:13:49 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:15.305 03:13:49 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:15.305 03:13:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:15.305 03:13:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:15.305 03:13:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:17.230 03:13:51 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:17.230 00:13:17.230 real 0m9.119s 00:13:17.230 user 0m5.264s 00:13:17.230 sys 0m5.077s 00:13:17.230 03:13:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:17.230 03:13:51 -- common/autotest_common.sh@10 -- # set +x 00:13:17.230 ************************************ 00:13:17.230 END TEST nvmf_fused_ordering 00:13:17.230 ************************************ 00:13:17.488 03:13:51 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:13:17.488 03:13:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:17.488 03:13:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:17.488 03:13:51 -- common/autotest_common.sh@10 -- # set +x 00:13:17.488 ************************************ 00:13:17.488 START TEST nvmf_delete_subsystem 00:13:17.488 ************************************ 00:13:17.488 03:13:51 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:13:17.488 * Looking for test storage... 
00:13:17.488 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:17.488 03:13:51 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:17.488 03:13:51 -- nvmf/common.sh@7 -- # uname -s 00:13:17.488 03:13:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:17.488 03:13:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:17.488 03:13:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:17.488 03:13:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:17.488 03:13:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:17.488 03:13:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:17.488 03:13:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:17.488 03:13:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:17.488 03:13:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:17.488 03:13:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:17.488 03:13:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:17.488 03:13:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:17.489 03:13:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:17.489 03:13:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:17.489 03:13:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:17.489 03:13:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:17.489 03:13:51 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:17.489 03:13:51 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:17.489 03:13:51 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:17.489 03:13:51 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:17.489 03:13:51 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.489 03:13:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.489 03:13:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.489 03:13:51 -- paths/export.sh@5 -- # export PATH 00:13:17.489 03:13:51 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.489 03:13:51 -- nvmf/common.sh@47 -- # : 0 00:13:17.489 03:13:51 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:17.489 03:13:51 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:17.489 03:13:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:17.489 03:13:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:17.489 03:13:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:17.489 03:13:51 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:17.489 03:13:51 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:17.489 03:13:51 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:17.489 03:13:51 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:13:17.489 03:13:51 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:17.489 03:13:51 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:17.489 03:13:51 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:17.489 03:13:51 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:17.489 03:13:51 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:17.489 03:13:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:17.489 03:13:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:17.489 03:13:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:17.489 03:13:51 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:17.489 03:13:51 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:17.489 03:13:51 
-- nvmf/common.sh@285 -- # xtrace_disable 00:13:17.489 03:13:51 -- common/autotest_common.sh@10 -- # set +x 00:13:19.391 03:13:53 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:19.391 03:13:53 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:19.391 03:13:53 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:19.391 03:13:53 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:19.391 03:13:53 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:19.391 03:13:53 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:19.391 03:13:53 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:19.391 03:13:53 -- nvmf/common.sh@295 -- # net_devs=() 00:13:19.391 03:13:53 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:19.391 03:13:53 -- nvmf/common.sh@296 -- # e810=() 00:13:19.391 03:13:53 -- nvmf/common.sh@296 -- # local -ga e810 00:13:19.391 03:13:53 -- nvmf/common.sh@297 -- # x722=() 00:13:19.391 03:13:53 -- nvmf/common.sh@297 -- # local -ga x722 00:13:19.391 03:13:53 -- nvmf/common.sh@298 -- # mlx=() 00:13:19.391 03:13:53 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:19.391 03:13:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:19.391 03:13:53 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:19.391 03:13:53 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:19.391 03:13:53 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:19.391 03:13:53 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:19.391 03:13:53 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:19.391 03:13:53 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:19.391 03:13:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:19.391 03:13:53 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:19.391 03:13:53 -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:19.391 03:13:53 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:19.391 03:13:53 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:19.391 03:13:53 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:19.391 03:13:53 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:19.391 03:13:53 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:19.391 03:13:53 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:19.391 03:13:53 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:19.391 03:13:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:19.391 03:13:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:19.391 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:19.391 03:13:53 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:19.391 03:13:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:19.391 03:13:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:19.391 03:13:53 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:19.391 03:13:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:19.391 03:13:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:19.391 03:13:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:19.391 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:19.391 03:13:53 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:19.391 03:13:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:19.391 03:13:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:19.391 03:13:53 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:19.391 03:13:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:19.391 03:13:53 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:19.391 03:13:53 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:19.391 03:13:53 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:19.391 03:13:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:13:19.391 03:13:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:19.391 03:13:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:19.391 03:13:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:19.391 03:13:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:19.391 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:19.391 03:13:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:19.391 03:13:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:19.391 03:13:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:19.391 03:13:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:19.391 03:13:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:19.391 03:13:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:19.391 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:19.391 03:13:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:19.391 03:13:53 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:19.391 03:13:53 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:19.391 03:13:53 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:19.391 03:13:53 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:19.391 03:13:53 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:19.391 03:13:53 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:19.391 03:13:53 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:19.391 03:13:53 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:19.391 03:13:53 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:19.391 03:13:53 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:19.391 03:13:53 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:19.391 03:13:53 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:19.391 03:13:53 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
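The discovery loop above (nvmf/common.sh@382-390) resolves each NIC PCI function to its kernel net devices through sysfs. A minimal stand-alone sketch of that lookup, using the PCI addresses from this log (they will differ on another machine, and the function prints a fallback when the device is absent):

```shell
#!/usr/bin/env bash
# Sketch of the pci -> net-device lookup done by nvmf/common.sh@383-389.
# The PCI addresses below come from this log run; on other hosts they differ.
pci_net_devs() {
    local pci=$1
    # sysfs exposes the netdevs bound to a PCI function under .../net/
    local devs=("/sys/bus/pci/devices/$pci/net/"*)
    if [[ ! -e ${devs[0]} ]]; then
        echo "No net devices under $pci"
        return 1
    fi
    # strip the leading path, keeping only the interface names
    echo "Found net devices under $pci: ${devs[*]##*/}"
}

for pci in 0000:0a:00.0 0000:0a:00.1; do
    pci_net_devs "$pci" || true
done
```

On the logged machine this prints the `cvl_0_0`/`cvl_0_1` interfaces of the two E810 ports; elsewhere it reports the PCI function as absent.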
00:13:19.391 03:13:53 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:19.391 03:13:53 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:19.391 03:13:53 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:19.391 03:13:53 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:19.391 03:13:53 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:19.650 03:13:53 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:19.650 03:13:53 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:19.650 03:13:53 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:19.650 03:13:53 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:19.650 03:13:53 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:19.650 03:13:53 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:19.650 03:13:53 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:19.650 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:19.650 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms 00:13:19.650 00:13:19.650 --- 10.0.0.2 ping statistics --- 00:13:19.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:19.650 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:13:19.650 03:13:53 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:19.650 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:19.650 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:13:19.650 00:13:19.650 --- 10.0.0.1 ping statistics --- 00:13:19.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:19.650 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:13:19.650 03:13:53 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:19.650 03:13:53 -- nvmf/common.sh@411 -- # return 0 00:13:19.650 03:13:53 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:19.650 03:13:53 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:19.650 03:13:53 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:19.650 03:13:53 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:19.650 03:13:53 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:19.650 03:13:53 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:19.650 03:13:53 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:19.650 03:13:53 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:13:19.650 03:13:53 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:19.650 03:13:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:19.650 03:13:53 -- common/autotest_common.sh@10 -- # set +x 00:13:19.650 03:13:54 -- nvmf/common.sh@470 -- # nvmfpid=1459553 00:13:19.650 03:13:54 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:13:19.650 03:13:54 -- nvmf/common.sh@471 -- # waitforlisten 1459553 00:13:19.650 03:13:54 -- common/autotest_common.sh@817 -- # '[' -z 1459553 ']' 00:13:19.650 03:13:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:19.650 03:13:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:19.650 03:13:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:19.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:19.650 03:13:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:19.650 03:13:54 -- common/autotest_common.sh@10 -- # set +x 00:13:19.650 [2024-04-25 03:13:54.048883] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:13:19.651 [2024-04-25 03:13:54.048966] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:19.651 EAL: No free 2048 kB hugepages reported on node 1 00:13:19.651 [2024-04-25 03:13:54.119831] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:19.909 [2024-04-25 03:13:54.237581] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:19.909 [2024-04-25 03:13:54.237670] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:19.909 [2024-04-25 03:13:54.237696] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:19.909 [2024-04-25 03:13:54.237710] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:19.909 [2024-04-25 03:13:54.237722] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
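The interface plumbing logged a few entries back (nvmf/common.sh@242-268) moves the target port into its own network namespace so initiator and target can share one host. A dry-run sketch of that topology, assuming the interface and namespace names from this log; `run()` only echoes, so replace `echo` with `sudo` (root required) to actually apply it:

```shell
#!/usr/bin/env bash
# Dry-run of the nvmf_tcp_init topology: target port cvl_0_0 goes into a
# netns at 10.0.0.2, initiator port cvl_0_1 stays in the root ns at 10.0.0.1.
# run() only prints the commands; swap 'echo' for 'sudo' to execute them.
NS=cvl_0_0_ns_spdk
run() { echo "+ $*"; }

run ip -4 addr flush cvl_0_0
run ip -4 addr flush cvl_0_1
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"
run ip addr add 10.0.0.1/24 dev cvl_0_1
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
# allow NVMe/TCP traffic to the target's listener port
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# the harness then verifies connectivity in both directions
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```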
00:13:19.909 [2024-04-25 03:13:54.237808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:19.909 [2024-04-25 03:13:54.237815] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:20.843 03:13:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:20.843 03:13:55 -- common/autotest_common.sh@850 -- # return 0 00:13:20.843 03:13:55 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:20.843 03:13:55 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:20.843 03:13:55 -- common/autotest_common.sh@10 -- # set +x 00:13:20.843 03:13:55 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:20.843 03:13:55 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:20.843 03:13:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:20.843 03:13:55 -- common/autotest_common.sh@10 -- # set +x 00:13:20.843 [2024-04-25 03:13:55.039313] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:20.843 03:13:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:20.843 03:13:55 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:20.843 03:13:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:20.843 03:13:55 -- common/autotest_common.sh@10 -- # set +x 00:13:20.843 03:13:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:20.843 03:13:55 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:20.843 03:13:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:20.843 03:13:55 -- common/autotest_common.sh@10 -- # set +x 00:13:20.843 [2024-04-25 03:13:55.055528] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:20.843 03:13:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
00:13:20.843 03:13:55 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:20.843 03:13:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:20.843 03:13:55 -- common/autotest_common.sh@10 -- # set +x 00:13:20.843 NULL1 00:13:20.843 03:13:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:20.843 03:13:55 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:20.843 03:13:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:20.843 03:13:55 -- common/autotest_common.sh@10 -- # set +x 00:13:20.843 Delay0 00:13:20.843 03:13:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:20.843 03:13:55 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:20.843 03:13:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:20.843 03:13:55 -- common/autotest_common.sh@10 -- # set +x 00:13:20.843 03:13:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:20.843 03:13:55 -- target/delete_subsystem.sh@28 -- # perf_pid=1459708 00:13:20.843 03:13:55 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:13:20.843 03:13:55 -- target/delete_subsystem.sh@30 -- # sleep 2 00:13:20.843 EAL: No free 2048 kB hugepages reported on node 1 00:13:20.843 [2024-04-25 03:13:55.130290] subsystem.c:1435:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
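The `rpc_cmd` calls above can be reproduced against a running target with SPDK's `scripts/rpc.py`; the methods and arguments below are taken directly from the log. A dry-run sketch where `rpc()` only prints each command (the wrapper itself and the parameter comments are mine; point it at a real `rpc.py` to execute):

```shell
#!/usr/bin/env bash
# Dry-run of the RPC sequence issued by delete_subsystem.sh@15-24.
# rpc() only echoes; replace with "$SPDK_DIR/scripts/rpc.py" to run for real.
rpc() { echo "rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# null backing bdev: 1000 MiB, 512-byte blocks
rpc bdev_null_create NULL1 1000 512
# wrap it in a delay bdev that holds every I/O for ~1 s (values in usec),
# so commands are still in flight when the subsystem is deleted later
rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
```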
00:13:22.742 03:13:57 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:22.742 03:13:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:22.742 03:13:57 -- common/autotest_common.sh@10 -- # set +x
00:13:22.742-00:13:24.117 Read/Write completed with error (sct=0, sc=8) / starting I/O failed: -6 [one completion per outstanding command; the long runs of identical completion lines are condensed here]
00:13:22.743 [2024-04-25 03:13:57.221550] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd32c00c250 is same with the state(5) to be set
00:13:22.743 [2024-04-25 03:13:57.222707] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd32c000c00 is same with the state(5) to be set
00:13:22.744 [2024-04-25 03:13:57.223723] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111a880 is same with the state(5) to be set
00:13:24.117 [2024-04-25 03:13:58.188909] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1139120 is same with the state(5) to be set
00:13:24.117 [2024-04-25 03:13:58.224735] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111ad30 is same with the state(5) to be set
00:13:24.117 [2024-04-25 03:13:58.225973] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111aa10 is same with the state(5) to be set
00:13:24.117 [2024-04-25 03:13:58.226140] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd32c00bf90 is same with the state(5) to be set
00:13:24.117 Read completed with error
(sct=0, sc=8) 00:13:24.117 Write completed with error (sct=0, sc=8) 00:13:24.117 Read completed with error (sct=0, sc=8) 00:13:24.117 Write completed with error (sct=0, sc=8) 00:13:24.117 Read completed with error (sct=0, sc=8) 00:13:24.117 Read completed with error (sct=0, sc=8) 00:13:24.117 Write completed with error (sct=0, sc=8) 00:13:24.117 Read completed with error (sct=0, sc=8) 00:13:24.117 Read completed with error (sct=0, sc=8) 00:13:24.117 Read completed with error (sct=0, sc=8) 00:13:24.117 Read completed with error (sct=0, sc=8) 00:13:24.117 [2024-04-25 03:13:58.226933] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd32c00c510 is same with the state(5) to be set 00:13:24.117 03:13:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:24.117 [2024-04-25 03:13:58.227378] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1139120 (9): Bad file descriptor 00:13:24.117 03:13:58 -- target/delete_subsystem.sh@34 -- # delay=0 00:13:24.117 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:13:24.117 03:13:58 -- target/delete_subsystem.sh@35 -- # kill -0 1459708 00:13:24.117 03:13:58 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:13:24.117 Initializing NVMe Controllers 00:13:24.117 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:24.117 Controller IO queue size 128, less than required. 00:13:24.118 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:24.118 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:13:24.118 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:13:24.118 Initialization complete. Launching workers. 
00:13:24.118 ========================================================
00:13:24.118 Latency(us)
00:13:24.118 Device Information : IOPS MiB/s Average min max
00:13:24.118 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 187.00 0.09 899316.74 696.31 1013977.29
00:13:24.118 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 162.69 0.08 911581.43 606.49 1011332.29
00:13:24.118 ========================================================
00:13:24.118 Total : 349.69 0.17 905022.87 606.49 1013977.29
00:13:24.118
00:13:24.375 03:13:58 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:13:24.375 03:13:58 -- target/delete_subsystem.sh@35 -- # kill -0 1459708
00:13:24.375 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1459708) - No such process
00:13:24.375 03:13:58 -- target/delete_subsystem.sh@45 -- # NOT wait 1459708
00:13:24.375 03:13:58 -- common/autotest_common.sh@638 -- # local es=0
00:13:24.375 03:13:58 -- common/autotest_common.sh@640 -- # valid_exec_arg wait 1459708
00:13:24.375 03:13:58 -- common/autotest_common.sh@626 -- # local arg=wait
00:13:24.375 03:13:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:13:24.375 03:13:58 -- common/autotest_common.sh@630 -- # type -t wait
00:13:24.375 03:13:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:13:24.375 03:13:58 -- common/autotest_common.sh@641 -- # wait 1459708
00:13:24.375 03:13:58 -- common/autotest_common.sh@641 -- # es=1
00:13:24.375 03:13:58 -- common/autotest_common.sh@649 -- # (( es > 128 ))
00:13:24.375 03:13:58 -- common/autotest_common.sh@660 -- # [[ -n '' ]]
00:13:24.375 03:13:58 -- common/autotest_common.sh@665 -- # (( !es == 0 ))
00:13:24.375 03:13:58 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:13:24.375 03:13:58 -- common/autotest_common.sh@549 -- # xtrace_disable
00:13:24.375 03:13:58 -- common/autotest_common.sh@10 -- # set +x 00:13:24.375 03:13:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:24.375 03:13:58 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:24.375 03:13:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:24.375 03:13:58 -- common/autotest_common.sh@10 -- # set +x 00:13:24.375 [2024-04-25 03:13:58.748703] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:24.375 03:13:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:24.375 03:13:58 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:24.375 03:13:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:24.375 03:13:58 -- common/autotest_common.sh@10 -- # set +x 00:13:24.375 03:13:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:24.375 03:13:58 -- target/delete_subsystem.sh@54 -- # perf_pid=1460106 00:13:24.375 03:13:58 -- target/delete_subsystem.sh@56 -- # delay=0 00:13:24.375 03:13:58 -- target/delete_subsystem.sh@57 -- # kill -0 1460106 00:13:24.375 03:13:58 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:24.375 03:13:58 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:13:24.375 EAL: No free 2048 kB hugepages reported on node 1 00:13:24.375 [2024-04-25 03:13:58.812194] subsystem.c:1435:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:13:24.942 03:13:59 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:24.942 03:13:59 -- target/delete_subsystem.sh@57 -- # kill -0 1460106 00:13:24.942 03:13:59 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:25.507 03:13:59 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:25.507 03:13:59 -- target/delete_subsystem.sh@57 -- # kill -0 1460106 00:13:25.507 03:13:59 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:26.073 03:14:00 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:26.073 03:14:00 -- target/delete_subsystem.sh@57 -- # kill -0 1460106 00:13:26.073 03:14:00 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:26.331 03:14:00 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:26.331 03:14:00 -- target/delete_subsystem.sh@57 -- # kill -0 1460106 00:13:26.331 03:14:00 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:26.896 03:14:01 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:26.896 03:14:01 -- target/delete_subsystem.sh@57 -- # kill -0 1460106 00:13:26.896 03:14:01 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:27.462 03:14:01 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:27.462 03:14:01 -- target/delete_subsystem.sh@57 -- # kill -0 1460106 00:13:27.462 03:14:01 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:27.462 Initializing NVMe Controllers 00:13:27.462 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:27.462 Controller IO queue size 128, less than required. 00:13:27.462 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:27.462 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:13:27.462 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:13:27.462 Initialization complete. Launching workers. 
00:13:27.462 ========================================================
00:13:27.462 Latency(us)
00:13:27.462 Device Information : IOPS MiB/s Average min max
00:13:27.462 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1005169.78 1000285.93 1014710.06
00:13:27.462 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005072.87 1000318.87 1014791.18
00:13:27.462 ========================================================
00:13:27.462 Total : 256.00 0.12 1005121.33 1000285.93 1014791.18
00:13:27.462
00:13:28.028 03:14:02 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:13:28.028 03:14:02 -- target/delete_subsystem.sh@57 -- # kill -0 1460106
00:13:28.028 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1460106) - No such process
00:13:28.028 03:14:02 -- target/delete_subsystem.sh@67 -- # wait 1460106
00:13:28.028 03:14:02 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:13:28.028 03:14:02 -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:13:28.028 03:14:02 -- nvmf/common.sh@477 -- # nvmfcleanup
00:13:28.028 03:14:02 -- nvmf/common.sh@117 -- # sync
00:13:28.028 03:14:02 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:13:28.028 03:14:02 -- nvmf/common.sh@120 -- # set +e
00:13:28.028 03:14:02 -- nvmf/common.sh@121 -- # for i in {1..20}
00:13:28.028 03:14:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:13:28.028 rmmod nvme_tcp
00:13:28.028 rmmod nvme_fabrics
00:13:28.028 rmmod nvme_keyring
00:13:28.028 03:14:02 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:13:28.028 03:14:02 -- nvmf/common.sh@124 -- # set -e
00:13:28.028 03:14:02 -- nvmf/common.sh@125 -- # return 0
00:13:28.028 03:14:02 -- nvmf/common.sh@478 -- # '[' -n 1459553 ']'
00:13:28.028 03:14:02 -- nvmf/common.sh@479 -- # killprocess 1459553
00:13:28.028 03:14:02 -- common/autotest_common.sh@936 -- # '[' -z 1459553 ']'
00:13:28.028 03:14:02
-- common/autotest_common.sh@940 -- # kill -0 1459553 00:13:28.028 03:14:02 -- common/autotest_common.sh@941 -- # uname 00:13:28.028 03:14:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:28.028 03:14:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1459553 00:13:28.028 03:14:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:28.028 03:14:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:28.028 03:14:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1459553' 00:13:28.028 killing process with pid 1459553 00:13:28.029 03:14:02 -- common/autotest_common.sh@955 -- # kill 1459553 00:13:28.029 03:14:02 -- common/autotest_common.sh@960 -- # wait 1459553 00:13:28.288 03:14:02 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:28.288 03:14:02 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:28.288 03:14:02 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:28.288 03:14:02 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:28.288 03:14:02 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:28.288 03:14:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:28.288 03:14:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:28.288 03:14:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:30.826 03:14:04 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:30.826 00:13:30.826 real 0m12.874s 00:13:30.826 user 0m29.105s 00:13:30.826 sys 0m2.888s 00:13:30.826 03:14:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:30.826 03:14:04 -- common/autotest_common.sh@10 -- # set +x 00:13:30.826 ************************************ 00:13:30.826 END TEST nvmf_delete_subsystem 00:13:30.826 ************************************ 00:13:30.826 03:14:04 -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:13:30.826 03:14:04 -- 
common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:30.826 03:14:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:30.826 03:14:04 -- common/autotest_common.sh@10 -- # set +x 00:13:30.826 ************************************ 00:13:30.826 START TEST nvmf_ns_masking 00:13:30.826 ************************************ 00:13:30.826 03:14:04 -- common/autotest_common.sh@1111 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:13:30.826 * Looking for test storage... 00:13:30.826 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:30.826 03:14:04 -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:30.826 03:14:04 -- nvmf/common.sh@7 -- # uname -s 00:13:30.826 03:14:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:30.826 03:14:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:30.826 03:14:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:30.826 03:14:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:30.826 03:14:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:30.826 03:14:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:30.826 03:14:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:30.826 03:14:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:30.826 03:14:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:30.826 03:14:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:30.826 03:14:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:30.826 03:14:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:30.826 03:14:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:30.826 03:14:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:30.826 03:14:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:30.826 03:14:04 -- nvmf/common.sh@22 
-- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:30.826 03:14:04 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:30.826 03:14:04 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:30.826 03:14:04 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:30.826 03:14:04 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:30.826 03:14:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.826 03:14:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.826 03:14:04 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.826 03:14:04 -- paths/export.sh@5 -- # export PATH 00:13:30.827 03:14:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.827 03:14:04 -- nvmf/common.sh@47 -- # : 0 00:13:30.827 03:14:04 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:30.827 03:14:04 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:30.827 03:14:04 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:30.827 03:14:04 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:30.827 03:14:04 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:30.827 03:14:04 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:30.827 03:14:04 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:30.827 03:14:04 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:30.827 03:14:04 -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:30.827 03:14:04 -- target/ns_masking.sh@11 -- # loops=5 
00:13:30.827 03:14:04 -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:13:30.827 03:14:04 -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:13:30.827 03:14:04 -- target/ns_masking.sh@15 -- # uuidgen 00:13:30.827 03:14:04 -- target/ns_masking.sh@15 -- # HOSTID=3075ea13-9576-4d40-b760-844af507d731 00:13:30.827 03:14:04 -- target/ns_masking.sh@44 -- # nvmftestinit 00:13:30.827 03:14:04 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:30.827 03:14:04 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:30.827 03:14:04 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:30.827 03:14:04 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:30.827 03:14:04 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:30.827 03:14:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:30.827 03:14:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:30.827 03:14:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:30.827 03:14:04 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:30.827 03:14:04 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:30.827 03:14:04 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:30.827 03:14:04 -- common/autotest_common.sh@10 -- # set +x 00:13:32.743 03:14:06 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:32.743 03:14:06 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:32.743 03:14:06 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:32.743 03:14:06 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:32.743 03:14:06 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:32.743 03:14:06 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:32.743 03:14:06 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:32.743 03:14:06 -- nvmf/common.sh@295 -- # net_devs=() 00:13:32.743 03:14:06 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:32.743 03:14:06 -- nvmf/common.sh@296 -- # e810=() 00:13:32.743 03:14:06 -- 
nvmf/common.sh@296 -- # local -ga e810 00:13:32.743 03:14:06 -- nvmf/common.sh@297 -- # x722=() 00:13:32.743 03:14:06 -- nvmf/common.sh@297 -- # local -ga x722 00:13:32.743 03:14:06 -- nvmf/common.sh@298 -- # mlx=() 00:13:32.743 03:14:06 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:32.743 03:14:06 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:32.743 03:14:06 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:32.743 03:14:06 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:32.743 03:14:06 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:32.743 03:14:06 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:32.743 03:14:06 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:32.743 03:14:06 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:32.743 03:14:06 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:32.743 03:14:06 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:32.743 03:14:06 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:32.743 03:14:06 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:32.743 03:14:06 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:32.743 03:14:06 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:32.743 03:14:06 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:32.743 03:14:06 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:32.743 03:14:06 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:32.743 03:14:06 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:32.743 03:14:06 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:32.743 03:14:06 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:32.743 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:32.743 03:14:06 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:13:32.743 03:14:06 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:32.743 03:14:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:32.743 03:14:06 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:32.743 03:14:06 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:32.743 03:14:06 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:32.743 03:14:06 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:32.743 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:32.743 03:14:06 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:32.743 03:14:06 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:32.743 03:14:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:32.743 03:14:06 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:32.743 03:14:06 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:32.743 03:14:06 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:32.743 03:14:06 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:32.743 03:14:06 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:32.743 03:14:06 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:32.743 03:14:06 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:32.743 03:14:06 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:32.743 03:14:06 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:32.743 03:14:06 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:32.743 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:32.743 03:14:06 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:32.743 03:14:06 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:32.743 03:14:06 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:32.743 03:14:06 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:32.743 03:14:06 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:32.743 03:14:06 -- 
nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:32.743 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:32.743 03:14:06 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:32.743 03:14:06 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:32.743 03:14:06 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:32.743 03:14:06 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:32.743 03:14:06 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:32.743 03:14:06 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:32.743 03:14:06 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:32.743 03:14:06 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:32.743 03:14:06 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:32.743 03:14:06 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:32.743 03:14:06 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:32.743 03:14:06 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:32.743 03:14:06 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:32.743 03:14:06 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:32.743 03:14:06 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:32.743 03:14:06 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:32.743 03:14:06 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:32.743 03:14:06 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:32.743 03:14:06 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:32.743 03:14:06 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:32.743 03:14:06 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:32.743 03:14:06 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:32.743 03:14:06 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:32.743 03:14:06 -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:32.743 03:14:06 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:32.743 03:14:06 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:32.743 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:32.743 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:13:32.743 00:13:32.743 --- 10.0.0.2 ping statistics --- 00:13:32.743 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:32.743 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:13:32.743 03:14:06 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:32.743 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:32.743 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:13:32.743 00:13:32.743 --- 10.0.0.1 ping statistics --- 00:13:32.743 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:32.743 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:13:32.743 03:14:06 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:32.743 03:14:06 -- nvmf/common.sh@411 -- # return 0 00:13:32.743 03:14:06 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:32.743 03:14:06 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:32.743 03:14:06 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:32.743 03:14:06 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:32.743 03:14:06 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:32.743 03:14:06 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:32.743 03:14:06 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:32.743 03:14:07 -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:13:32.743 03:14:07 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:32.743 03:14:07 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:32.743 03:14:07 -- common/autotest_common.sh@10 -- # set +x 00:13:32.743 03:14:07 -- nvmf/common.sh@470 -- # 
nvmfpid=1462461 00:13:32.743 03:14:07 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:32.743 03:14:07 -- nvmf/common.sh@471 -- # waitforlisten 1462461 00:13:32.743 03:14:07 -- common/autotest_common.sh@817 -- # '[' -z 1462461 ']' 00:13:32.743 03:14:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:32.743 03:14:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:32.743 03:14:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:32.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:32.743 03:14:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:32.743 03:14:07 -- common/autotest_common.sh@10 -- # set +x 00:13:32.743 [2024-04-25 03:14:07.050426] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:13:32.743 [2024-04-25 03:14:07.050492] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:32.743 EAL: No free 2048 kB hugepages reported on node 1 00:13:32.743 [2024-04-25 03:14:07.116345] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:32.743 [2024-04-25 03:14:07.235964] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:32.743 [2024-04-25 03:14:07.236035] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:32.743 [2024-04-25 03:14:07.236052] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:32.743 [2024-04-25 03:14:07.236073] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:13:32.744 [2024-04-25 03:14:07.236085] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:32.744 [2024-04-25 03:14:07.236175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:32.744 [2024-04-25 03:14:07.236229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:32.744 [2024-04-25 03:14:07.236285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:32.744 [2024-04-25 03:14:07.236289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:33.676 03:14:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:33.676 03:14:07 -- common/autotest_common.sh@850 -- # return 0 00:13:33.676 03:14:07 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:33.676 03:14:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:33.676 03:14:07 -- common/autotest_common.sh@10 -- # set +x 00:13:33.676 03:14:08 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:33.676 03:14:08 -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:33.934 [2024-04-25 03:14:08.276478] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:33.934 03:14:08 -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:13:33.934 03:14:08 -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:13:33.934 03:14:08 -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:34.192 Malloc1 00:13:34.192 03:14:08 -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:34.450 Malloc2 00:13:34.450 03:14:08 -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDKISFASTANDAWESOME 00:13:34.708 03:14:09 -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:13:34.966 03:14:09 -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:35.224 [2024-04-25 03:14:09.532809] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:35.224 03:14:09 -- target/ns_masking.sh@61 -- # connect 00:13:35.224 03:14:09 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 3075ea13-9576-4d40-b760-844af507d731 -a 10.0.0.2 -s 4420 -i 4 00:13:35.482 03:14:09 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:13:35.482 03:14:09 -- common/autotest_common.sh@1184 -- # local i=0 00:13:35.482 03:14:09 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:13:35.482 03:14:09 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:13:35.482 03:14:09 -- common/autotest_common.sh@1191 -- # sleep 2 00:13:37.379 03:14:11 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:13:37.379 03:14:11 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:13:37.379 03:14:11 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:13:37.379 03:14:11 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:13:37.379 03:14:11 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:13:37.379 03:14:11 -- common/autotest_common.sh@1194 -- # return 0 00:13:37.379 03:14:11 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:13:37.379 03:14:11 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:37.379 03:14:11 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 
00:13:37.379 03:14:11 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:13:37.379 03:14:11 -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:13:37.379 03:14:11 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:37.379 03:14:11 -- target/ns_masking.sh@39 -- # grep 0x1 00:13:37.379 [ 0]:0x1 00:13:37.379 03:14:11 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:37.379 03:14:11 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:37.379 03:14:11 -- target/ns_masking.sh@40 -- # nguid=5ab94410a0444830870933c52a53b420 00:13:37.379 03:14:11 -- target/ns_masking.sh@41 -- # [[ 5ab94410a0444830870933c52a53b420 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:37.379 03:14:11 -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:13:37.637 03:14:12 -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:13:37.637 03:14:12 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:37.637 03:14:12 -- target/ns_masking.sh@39 -- # grep 0x1 00:13:37.637 [ 0]:0x1 00:13:37.637 03:14:12 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:37.637 03:14:12 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:37.637 03:14:12 -- target/ns_masking.sh@40 -- # nguid=5ab94410a0444830870933c52a53b420 00:13:37.637 03:14:12 -- target/ns_masking.sh@41 -- # [[ 5ab94410a0444830870933c52a53b420 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:37.637 03:14:12 -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:13:37.637 03:14:12 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:37.637 03:14:12 -- target/ns_masking.sh@39 -- # grep 0x2 00:13:37.897 [ 1]:0x2 00:13:37.897 03:14:12 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:37.897 03:14:12 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:37.897 03:14:12 -- target/ns_masking.sh@40 -- # 
nguid=e9c0fae7febe41ec873302d96230279d 00:13:37.897 03:14:12 -- target/ns_masking.sh@41 -- # [[ e9c0fae7febe41ec873302d96230279d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:37.897 03:14:12 -- target/ns_masking.sh@69 -- # disconnect 00:13:37.897 03:14:12 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:38.154 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:38.154 03:14:12 -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:38.411 03:14:12 -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:13:38.669 03:14:12 -- target/ns_masking.sh@77 -- # connect 1 00:13:38.669 03:14:12 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 3075ea13-9576-4d40-b760-844af507d731 -a 10.0.0.2 -s 4420 -i 4 00:13:38.669 03:14:13 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:13:38.669 03:14:13 -- common/autotest_common.sh@1184 -- # local i=0 00:13:38.669 03:14:13 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:13:38.669 03:14:13 -- common/autotest_common.sh@1186 -- # [[ -n 1 ]] 00:13:38.669 03:14:13 -- common/autotest_common.sh@1187 -- # nvme_device_counter=1 00:13:38.669 03:14:13 -- common/autotest_common.sh@1191 -- # sleep 2 00:13:41.207 03:14:15 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:13:41.207 03:14:15 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:13:41.207 03:14:15 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:13:41.207 03:14:15 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:13:41.207 03:14:15 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 
00:13:41.207 03:14:15 -- common/autotest_common.sh@1194 -- # return 0 00:13:41.207 03:14:15 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:13:41.207 03:14:15 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:41.207 03:14:15 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:13:41.207 03:14:15 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:13:41.207 03:14:15 -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:13:41.207 03:14:15 -- common/autotest_common.sh@638 -- # local es=0 00:13:41.207 03:14:15 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:13:41.207 03:14:15 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:13:41.207 03:14:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:41.207 03:14:15 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:13:41.207 03:14:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:41.207 03:14:15 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:13:41.207 03:14:15 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:41.207 03:14:15 -- target/ns_masking.sh@39 -- # grep 0x1 00:13:41.207 03:14:15 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:41.207 03:14:15 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:41.207 03:14:15 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:13:41.207 03:14:15 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:41.207 03:14:15 -- common/autotest_common.sh@641 -- # es=1 00:13:41.207 03:14:15 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:41.207 03:14:15 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:13:41.207 03:14:15 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:41.207 03:14:15 -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 
00:13:41.207 03:14:15 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:41.207 03:14:15 -- target/ns_masking.sh@39 -- # grep 0x2 00:13:41.207 [ 0]:0x2 00:13:41.207 03:14:15 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:41.207 03:14:15 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:41.207 03:14:15 -- target/ns_masking.sh@40 -- # nguid=e9c0fae7febe41ec873302d96230279d 00:13:41.207 03:14:15 -- target/ns_masking.sh@41 -- # [[ e9c0fae7febe41ec873302d96230279d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:41.207 03:14:15 -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:41.207 03:14:15 -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:13:41.207 03:14:15 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:41.207 03:14:15 -- target/ns_masking.sh@39 -- # grep 0x1 00:13:41.207 [ 0]:0x1 00:13:41.207 03:14:15 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:41.207 03:14:15 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:41.207 03:14:15 -- target/ns_masking.sh@40 -- # nguid=5ab94410a0444830870933c52a53b420 00:13:41.207 03:14:15 -- target/ns_masking.sh@41 -- # [[ 5ab94410a0444830870933c52a53b420 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:41.207 03:14:15 -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:13:41.207 03:14:15 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:41.207 03:14:15 -- target/ns_masking.sh@39 -- # grep 0x2 00:13:41.207 [ 1]:0x2 00:13:41.207 03:14:15 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:41.207 03:14:15 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:41.465 03:14:15 -- target/ns_masking.sh@40 -- # nguid=e9c0fae7febe41ec873302d96230279d 00:13:41.465 03:14:15 -- target/ns_masking.sh@41 -- # [[ e9c0fae7febe41ec873302d96230279d != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:41.465 03:14:15 -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:41.722 03:14:15 -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:13:41.722 03:14:15 -- common/autotest_common.sh@638 -- # local es=0 00:13:41.722 03:14:15 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:13:41.722 03:14:15 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:13:41.722 03:14:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:41.722 03:14:15 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:13:41.722 03:14:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:41.722 03:14:15 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:13:41.722 03:14:15 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:41.722 03:14:15 -- target/ns_masking.sh@39 -- # grep 0x1 00:13:41.722 03:14:15 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:41.722 03:14:15 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:41.722 03:14:16 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:13:41.722 03:14:16 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:41.722 03:14:16 -- common/autotest_common.sh@641 -- # es=1 00:13:41.722 03:14:16 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:41.722 03:14:16 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:13:41.722 03:14:16 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:41.722 03:14:16 -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:13:41.722 03:14:16 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:41.722 03:14:16 -- target/ns_masking.sh@39 -- # grep 0x2 00:13:41.722 [ 0]:0x2 
00:13:41.722 03:14:16 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:41.722 03:14:16 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:41.722 03:14:16 -- target/ns_masking.sh@40 -- # nguid=e9c0fae7febe41ec873302d96230279d 00:13:41.722 03:14:16 -- target/ns_masking.sh@41 -- # [[ e9c0fae7febe41ec873302d96230279d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:41.722 03:14:16 -- target/ns_masking.sh@91 -- # disconnect 00:13:41.722 03:14:16 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:41.722 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:41.722 03:14:16 -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:41.980 03:14:16 -- target/ns_masking.sh@95 -- # connect 2 00:13:41.980 03:14:16 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 3075ea13-9576-4d40-b760-844af507d731 -a 10.0.0.2 -s 4420 -i 4 00:13:42.238 03:14:16 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:42.238 03:14:16 -- common/autotest_common.sh@1184 -- # local i=0 00:13:42.238 03:14:16 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:13:42.238 03:14:16 -- common/autotest_common.sh@1186 -- # [[ -n 2 ]] 00:13:42.238 03:14:16 -- common/autotest_common.sh@1187 -- # nvme_device_counter=2 00:13:42.238 03:14:16 -- common/autotest_common.sh@1191 -- # sleep 2 00:13:44.137 03:14:18 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:13:44.137 03:14:18 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:13:44.137 03:14:18 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:13:44.137 03:14:18 -- common/autotest_common.sh@1193 -- # nvme_devices=2 00:13:44.137 03:14:18 -- common/autotest_common.sh@1194 -- # (( nvme_devices 
== nvme_device_counter )) 00:13:44.137 03:14:18 -- common/autotest_common.sh@1194 -- # return 0 00:13:44.137 03:14:18 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:13:44.137 03:14:18 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:44.395 03:14:18 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:13:44.395 03:14:18 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:13:44.395 03:14:18 -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:13:44.395 03:14:18 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:44.395 03:14:18 -- target/ns_masking.sh@39 -- # grep 0x1 00:13:44.395 [ 0]:0x1 00:13:44.395 03:14:18 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:44.395 03:14:18 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:44.395 03:14:18 -- target/ns_masking.sh@40 -- # nguid=5ab94410a0444830870933c52a53b420 00:13:44.395 03:14:18 -- target/ns_masking.sh@41 -- # [[ 5ab94410a0444830870933c52a53b420 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:44.395 03:14:18 -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:13:44.395 03:14:18 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:44.395 03:14:18 -- target/ns_masking.sh@39 -- # grep 0x2 00:13:44.395 [ 1]:0x2 00:13:44.395 03:14:18 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:44.395 03:14:18 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:44.395 03:14:18 -- target/ns_masking.sh@40 -- # nguid=e9c0fae7febe41ec873302d96230279d 00:13:44.395 03:14:18 -- target/ns_masking.sh@41 -- # [[ e9c0fae7febe41ec873302d96230279d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:44.395 03:14:18 -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:44.961 03:14:19 -- target/ns_masking.sh@101 -- # 
NOT ns_is_visible 0x1 00:13:44.961 03:14:19 -- common/autotest_common.sh@638 -- # local es=0 00:13:44.961 03:14:19 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:13:44.961 03:14:19 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:13:44.961 03:14:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:44.961 03:14:19 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:13:44.961 03:14:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:44.961 03:14:19 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:13:44.961 03:14:19 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:44.961 03:14:19 -- target/ns_masking.sh@39 -- # grep 0x1 00:13:44.961 03:14:19 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:44.961 03:14:19 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:44.961 03:14:19 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:13:44.961 03:14:19 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:44.961 03:14:19 -- common/autotest_common.sh@641 -- # es=1 00:13:44.961 03:14:19 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:44.961 03:14:19 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:13:44.961 03:14:19 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:44.961 03:14:19 -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:13:44.961 03:14:19 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:44.961 03:14:19 -- target/ns_masking.sh@39 -- # grep 0x2 00:13:44.961 [ 0]:0x2 00:13:44.961 03:14:19 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:44.961 03:14:19 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:44.961 03:14:19 -- target/ns_masking.sh@40 -- # nguid=e9c0fae7febe41ec873302d96230279d 00:13:44.961 03:14:19 -- target/ns_masking.sh@41 -- # [[ 
e9c0fae7febe41ec873302d96230279d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:44.961 03:14:19 -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:44.961 03:14:19 -- common/autotest_common.sh@638 -- # local es=0 00:13:44.961 03:14:19 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:44.961 03:14:19 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:44.961 03:14:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:44.961 03:14:19 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:44.961 03:14:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:44.961 03:14:19 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:44.961 03:14:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:44.961 03:14:19 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:44.961 03:14:19 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:44.961 03:14:19 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:45.219 [2024-04-25 03:14:19.464900] nvmf_rpc.c:1779:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:13:45.219 request: 00:13:45.219 { 00:13:45.219 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:45.219 "nsid": 2, 
00:13:45.219 "host": "nqn.2016-06.io.spdk:host1", 00:13:45.219 "method": "nvmf_ns_remove_host", 00:13:45.219 "req_id": 1 00:13:45.219 } 00:13:45.219 Got JSON-RPC error response 00:13:45.219 response: 00:13:45.219 { 00:13:45.219 "code": -32602, 00:13:45.219 "message": "Invalid parameters" 00:13:45.219 } 00:13:45.219 03:14:19 -- common/autotest_common.sh@641 -- # es=1 00:13:45.219 03:14:19 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:45.219 03:14:19 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:13:45.219 03:14:19 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:45.219 03:14:19 -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:13:45.219 03:14:19 -- common/autotest_common.sh@638 -- # local es=0 00:13:45.219 03:14:19 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:13:45.219 03:14:19 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:13:45.219 03:14:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:45.219 03:14:19 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:13:45.219 03:14:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:45.219 03:14:19 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:13:45.219 03:14:19 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:45.219 03:14:19 -- target/ns_masking.sh@39 -- # grep 0x1 00:13:45.219 03:14:19 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:45.219 03:14:19 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:45.219 03:14:19 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:13:45.219 03:14:19 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:45.219 03:14:19 -- common/autotest_common.sh@641 -- # es=1 00:13:45.219 03:14:19 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:45.219 03:14:19 -- common/autotest_common.sh@660 
-- # [[ -n '' ]] 00:13:45.220 03:14:19 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:45.220 03:14:19 -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:13:45.220 03:14:19 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:45.220 03:14:19 -- target/ns_masking.sh@39 -- # grep 0x2 00:13:45.220 [ 0]:0x2 00:13:45.220 03:14:19 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:45.220 03:14:19 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:45.220 03:14:19 -- target/ns_masking.sh@40 -- # nguid=e9c0fae7febe41ec873302d96230279d 00:13:45.220 03:14:19 -- target/ns_masking.sh@41 -- # [[ e9c0fae7febe41ec873302d96230279d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:45.220 03:14:19 -- target/ns_masking.sh@108 -- # disconnect 00:13:45.220 03:14:19 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:45.220 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:45.220 03:14:19 -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:45.478 03:14:19 -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:13:45.478 03:14:19 -- target/ns_masking.sh@114 -- # nvmftestfini 00:13:45.478 03:14:19 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:45.478 03:14:19 -- nvmf/common.sh@117 -- # sync 00:13:45.478 03:14:19 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:45.478 03:14:19 -- nvmf/common.sh@120 -- # set +e 00:13:45.478 03:14:19 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:45.478 03:14:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:45.478 rmmod nvme_tcp 00:13:45.478 rmmod nvme_fabrics 00:13:45.478 rmmod nvme_keyring 00:13:45.478 03:14:19 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:45.478 03:14:19 -- nvmf/common.sh@124 -- # set -e 00:13:45.478 03:14:19 -- nvmf/common.sh@125 -- # return 0 00:13:45.478 03:14:19 -- 
nvmf/common.sh@478 -- # '[' -n 1462461 ']' 00:13:45.478 03:14:19 -- nvmf/common.sh@479 -- # killprocess 1462461 00:13:45.478 03:14:19 -- common/autotest_common.sh@936 -- # '[' -z 1462461 ']' 00:13:45.478 03:14:19 -- common/autotest_common.sh@940 -- # kill -0 1462461 00:13:45.478 03:14:19 -- common/autotest_common.sh@941 -- # uname 00:13:45.478 03:14:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:45.478 03:14:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1462461 00:13:45.478 03:14:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:45.478 03:14:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:45.478 03:14:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1462461' 00:13:45.478 killing process with pid 1462461 00:13:45.478 03:14:19 -- common/autotest_common.sh@955 -- # kill 1462461 00:13:45.478 03:14:19 -- common/autotest_common.sh@960 -- # wait 1462461 00:13:46.045 03:14:20 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:46.045 03:14:20 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:46.045 03:14:20 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:46.045 03:14:20 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:46.045 03:14:20 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:46.045 03:14:20 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:46.045 03:14:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:46.045 03:14:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:47.949 03:14:22 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:47.949 00:13:47.949 real 0m17.454s 00:13:47.949 user 0m55.228s 00:13:47.949 sys 0m3.876s 00:13:47.949 03:14:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:47.949 03:14:22 -- common/autotest_common.sh@10 -- # set +x 00:13:47.949 ************************************ 00:13:47.949 END TEST nvmf_ns_masking 00:13:47.949 
************************************ 00:13:47.949 03:14:22 -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:13:47.949 03:14:22 -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:47.949 03:14:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:47.949 03:14:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:47.949 03:14:22 -- common/autotest_common.sh@10 -- # set +x 00:13:47.949 ************************************ 00:13:47.949 START TEST nvmf_nvme_cli 00:13:47.949 ************************************ 00:13:47.950 03:14:22 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:48.208 * Looking for test storage... 00:13:48.208 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:48.208 03:14:22 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:48.208 03:14:22 -- nvmf/common.sh@7 -- # uname -s 00:13:48.208 03:14:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:48.208 03:14:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:48.208 03:14:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:48.208 03:14:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:48.208 03:14:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:48.208 03:14:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:48.208 03:14:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:48.208 03:14:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:48.208 03:14:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:48.208 03:14:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:48.208 03:14:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:48.208 03:14:22 -- nvmf/common.sh@18 -- # 
NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:48.208 03:14:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:48.208 03:14:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:48.208 03:14:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:48.208 03:14:22 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:48.209 03:14:22 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:48.209 03:14:22 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:48.209 03:14:22 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:48.209 03:14:22 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:48.209 03:14:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.209 03:14:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.209 03:14:22 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.209 03:14:22 -- paths/export.sh@5 -- # export PATH 00:13:48.209 03:14:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.209 03:14:22 -- nvmf/common.sh@47 -- # : 0 00:13:48.209 03:14:22 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:48.209 03:14:22 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:48.209 03:14:22 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:48.209 03:14:22 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:48.209 03:14:22 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:48.209 03:14:22 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:48.209 03:14:22 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:48.209 03:14:22 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:48.209 03:14:22 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:48.209 03:14:22 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:48.209 03:14:22 -- target/nvme_cli.sh@14 
-- # devs=() 00:13:48.209 03:14:22 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:13:48.209 03:14:22 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:48.209 03:14:22 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:48.209 03:14:22 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:48.209 03:14:22 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:48.209 03:14:22 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:48.209 03:14:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:48.209 03:14:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:48.209 03:14:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:48.209 03:14:22 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:48.209 03:14:22 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:48.209 03:14:22 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:48.209 03:14:22 -- common/autotest_common.sh@10 -- # set +x 00:13:50.110 03:14:24 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:50.110 03:14:24 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:50.110 03:14:24 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:50.110 03:14:24 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:50.110 03:14:24 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:50.110 03:14:24 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:50.111 03:14:24 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:50.111 03:14:24 -- nvmf/common.sh@295 -- # net_devs=() 00:13:50.111 03:14:24 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:50.111 03:14:24 -- nvmf/common.sh@296 -- # e810=() 00:13:50.111 03:14:24 -- nvmf/common.sh@296 -- # local -ga e810 00:13:50.111 03:14:24 -- nvmf/common.sh@297 -- # x722=() 00:13:50.111 03:14:24 -- nvmf/common.sh@297 -- # local -ga x722 00:13:50.111 03:14:24 -- nvmf/common.sh@298 -- # mlx=() 00:13:50.111 03:14:24 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:50.111 03:14:24 -- nvmf/common.sh@301 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:50.111 03:14:24 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:50.111 03:14:24 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:50.111 03:14:24 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:50.111 03:14:24 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:50.111 03:14:24 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:50.111 03:14:24 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:50.111 03:14:24 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:50.111 03:14:24 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:50.111 03:14:24 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:50.111 03:14:24 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:50.111 03:14:24 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:50.111 03:14:24 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:50.111 03:14:24 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:50.111 03:14:24 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:50.111 03:14:24 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:50.111 03:14:24 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:50.111 03:14:24 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:50.111 03:14:24 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:50.111 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:50.111 03:14:24 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:50.111 03:14:24 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:50.111 03:14:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:50.111 03:14:24 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:50.111 03:14:24 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:50.111 03:14:24 -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:50.111 03:14:24 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:50.111 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:50.111 03:14:24 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:50.111 03:14:24 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:50.111 03:14:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:50.111 03:14:24 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:50.111 03:14:24 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:50.111 03:14:24 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:50.111 03:14:24 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:50.111 03:14:24 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:50.111 03:14:24 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:50.111 03:14:24 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:50.111 03:14:24 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:50.111 03:14:24 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:50.111 03:14:24 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:50.111 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:50.111 03:14:24 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:50.111 03:14:24 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:50.111 03:14:24 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:50.111 03:14:24 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:50.111 03:14:24 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:50.111 03:14:24 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:50.111 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:50.111 03:14:24 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:50.111 03:14:24 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:50.111 03:14:24 -- nvmf/common.sh@403 
-- # is_hw=yes 00:13:50.111 03:14:24 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:50.111 03:14:24 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:50.111 03:14:24 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:50.111 03:14:24 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:50.111 03:14:24 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:50.111 03:14:24 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:50.111 03:14:24 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:50.111 03:14:24 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:50.111 03:14:24 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:50.111 03:14:24 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:50.111 03:14:24 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:50.111 03:14:24 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:50.111 03:14:24 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:50.111 03:14:24 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:50.111 03:14:24 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:50.111 03:14:24 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:50.111 03:14:24 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:50.111 03:14:24 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:50.111 03:14:24 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:50.111 03:14:24 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:50.111 03:14:24 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:50.111 03:14:24 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:50.111 03:14:24 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:50.111 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:50.111 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:13:50.111 00:13:50.111 --- 10.0.0.2 ping statistics --- 00:13:50.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:50.111 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:13:50.111 03:14:24 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:50.111 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:50.111 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:13:50.111 00:13:50.111 --- 10.0.0.1 ping statistics --- 00:13:50.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:50.111 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:13:50.111 03:14:24 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:50.111 03:14:24 -- nvmf/common.sh@411 -- # return 0 00:13:50.111 03:14:24 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:50.111 03:14:24 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:50.111 03:14:24 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:50.111 03:14:24 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:50.111 03:14:24 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:50.111 03:14:24 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:50.111 03:14:24 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:50.111 03:14:24 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:13:50.111 03:14:24 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:50.111 03:14:24 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:50.111 03:14:24 -- common/autotest_common.sh@10 -- # set +x 00:13:50.111 03:14:24 -- nvmf/common.sh@470 -- # nvmfpid=1466135 00:13:50.111 03:14:24 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:50.111 03:14:24 -- nvmf/common.sh@471 -- # waitforlisten 1466135 00:13:50.111 03:14:24 -- common/autotest_common.sh@817 
-- # '[' -z 1466135 ']' 00:13:50.111 03:14:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:50.111 03:14:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:50.111 03:14:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:50.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:50.111 03:14:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:50.111 03:14:24 -- common/autotest_common.sh@10 -- # set +x 00:13:50.370 [2024-04-25 03:14:24.637657] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:13:50.370 [2024-04-25 03:14:24.637726] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:50.370 EAL: No free 2048 kB hugepages reported on node 1 00:13:50.370 [2024-04-25 03:14:24.703001] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:50.370 [2024-04-25 03:14:24.822127] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:50.370 [2024-04-25 03:14:24.822192] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:50.370 [2024-04-25 03:14:24.822217] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:50.370 [2024-04-25 03:14:24.822231] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:50.370 [2024-04-25 03:14:24.822242] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:50.370 [2024-04-25 03:14:24.822336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:50.370 [2024-04-25 03:14:24.822400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:50.370 [2024-04-25 03:14:24.822453] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:50.370 [2024-04-25 03:14:24.822456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:51.314 03:14:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:51.314 03:14:25 -- common/autotest_common.sh@850 -- # return 0 00:13:51.314 03:14:25 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:51.314 03:14:25 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:51.314 03:14:25 -- common/autotest_common.sh@10 -- # set +x 00:13:51.314 03:14:25 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:51.314 03:14:25 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:51.314 03:14:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:51.314 03:14:25 -- common/autotest_common.sh@10 -- # set +x 00:13:51.314 [2024-04-25 03:14:25.644657] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:51.314 03:14:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:51.314 03:14:25 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:51.314 03:14:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:51.314 03:14:25 -- common/autotest_common.sh@10 -- # set +x 00:13:51.314 Malloc0 00:13:51.314 03:14:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:51.314 03:14:25 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:51.314 03:14:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:51.314 03:14:25 -- common/autotest_common.sh@10 -- # set +x 00:13:51.314 Malloc1 00:13:51.314 03:14:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
00:13:51.314 03:14:25 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:13:51.314 03:14:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:51.314 03:14:25 -- common/autotest_common.sh@10 -- # set +x 00:13:51.314 03:14:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:51.314 03:14:25 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:51.314 03:14:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:51.314 03:14:25 -- common/autotest_common.sh@10 -- # set +x 00:13:51.314 03:14:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:51.314 03:14:25 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:51.314 03:14:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:51.314 03:14:25 -- common/autotest_common.sh@10 -- # set +x 00:13:51.314 03:14:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:51.314 03:14:25 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:51.314 03:14:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:51.314 03:14:25 -- common/autotest_common.sh@10 -- # set +x 00:13:51.314 [2024-04-25 03:14:25.730384] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:51.314 03:14:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:51.314 03:14:25 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:51.314 03:14:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:51.314 03:14:25 -- common/autotest_common.sh@10 -- # set +x 00:13:51.314 03:14:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:51.314 03:14:25 -- target/nvme_cli.sh@30 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:13:51.572 00:13:51.572 Discovery Log Number of Records 2, Generation counter 2 00:13:51.572 =====Discovery Log Entry 0====== 00:13:51.572 trtype: tcp 00:13:51.572 adrfam: ipv4 00:13:51.572 subtype: current discovery subsystem 00:13:51.572 treq: not required 00:13:51.572 portid: 0 00:13:51.572 trsvcid: 4420 00:13:51.572 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:51.572 traddr: 10.0.0.2 00:13:51.572 eflags: explicit discovery connections, duplicate discovery information 00:13:51.572 sectype: none 00:13:51.572 =====Discovery Log Entry 1====== 00:13:51.572 trtype: tcp 00:13:51.572 adrfam: ipv4 00:13:51.572 subtype: nvme subsystem 00:13:51.572 treq: not required 00:13:51.572 portid: 0 00:13:51.572 trsvcid: 4420 00:13:51.572 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:51.572 traddr: 10.0.0.2 00:13:51.572 eflags: none 00:13:51.572 sectype: none 00:13:51.572 03:14:25 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:13:51.572 03:14:25 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:13:51.572 03:14:25 -- nvmf/common.sh@511 -- # local dev _ 00:13:51.572 03:14:25 -- nvmf/common.sh@513 -- # read -r dev _ 00:13:51.572 03:14:25 -- nvmf/common.sh@510 -- # nvme list 00:13:51.572 03:14:25 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:13:51.572 03:14:25 -- nvmf/common.sh@513 -- # read -r dev _ 00:13:51.572 03:14:25 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:13:51.572 03:14:25 -- nvmf/common.sh@513 -- # read -r dev _ 00:13:51.572 03:14:25 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:13:51.572 03:14:25 -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:52.138 03:14:26 -- 
target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:52.138 03:14:26 -- common/autotest_common.sh@1184 -- # local i=0 00:13:52.138 03:14:26 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:13:52.138 03:14:26 -- common/autotest_common.sh@1186 -- # [[ -n 2 ]] 00:13:52.138 03:14:26 -- common/autotest_common.sh@1187 -- # nvme_device_counter=2 00:13:52.138 03:14:26 -- common/autotest_common.sh@1191 -- # sleep 2 00:13:54.035 03:14:28 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:13:54.035 03:14:28 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:13:54.035 03:14:28 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:13:54.035 03:14:28 -- common/autotest_common.sh@1193 -- # nvme_devices=2 00:13:54.035 03:14:28 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:13:54.035 03:14:28 -- common/autotest_common.sh@1194 -- # return 0 00:13:54.035 03:14:28 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:13:54.035 03:14:28 -- nvmf/common.sh@511 -- # local dev _ 00:13:54.035 03:14:28 -- nvmf/common.sh@513 -- # read -r dev _ 00:13:54.035 03:14:28 -- nvmf/common.sh@510 -- # nvme list 00:13:54.293 03:14:28 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:13:54.293 03:14:28 -- nvmf/common.sh@513 -- # read -r dev _ 00:13:54.293 03:14:28 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:13:54.293 03:14:28 -- nvmf/common.sh@513 -- # read -r dev _ 00:13:54.293 03:14:28 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:54.293 03:14:28 -- nvmf/common.sh@515 -- # echo /dev/nvme0n2 00:13:54.293 03:14:28 -- nvmf/common.sh@513 -- # read -r dev _ 00:13:54.293 03:14:28 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:54.293 03:14:28 -- nvmf/common.sh@515 -- # echo /dev/nvme0n1 00:13:54.293 03:14:28 -- nvmf/common.sh@513 -- # read -r dev _ 00:13:54.293 03:14:28 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 
00:13:54.293 /dev/nvme0n1 ]] 00:13:54.293 03:14:28 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:13:54.293 03:14:28 -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:13:54.293 03:14:28 -- nvmf/common.sh@511 -- # local dev _ 00:13:54.293 03:14:28 -- nvmf/common.sh@513 -- # read -r dev _ 00:13:54.293 03:14:28 -- nvmf/common.sh@510 -- # nvme list 00:13:54.293 03:14:28 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:13:54.293 03:14:28 -- nvmf/common.sh@513 -- # read -r dev _ 00:13:54.293 03:14:28 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:13:54.293 03:14:28 -- nvmf/common.sh@513 -- # read -r dev _ 00:13:54.293 03:14:28 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:54.293 03:14:28 -- nvmf/common.sh@515 -- # echo /dev/nvme0n2 00:13:54.293 03:14:28 -- nvmf/common.sh@513 -- # read -r dev _ 00:13:54.293 03:14:28 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:54.293 03:14:28 -- nvmf/common.sh@515 -- # echo /dev/nvme0n1 00:13:54.293 03:14:28 -- nvmf/common.sh@513 -- # read -r dev _ 00:13:54.293 03:14:28 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:13:54.293 03:14:28 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:54.551 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:54.551 03:14:29 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:54.551 03:14:29 -- common/autotest_common.sh@1205 -- # local i=0 00:13:54.551 03:14:29 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:13:54.551 03:14:29 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:54.551 03:14:29 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:13:54.551 03:14:29 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:54.552 03:14:29 -- common/autotest_common.sh@1217 -- # return 0 00:13:54.552 03:14:29 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 
00:13:54.552 03:14:29 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:54.552 03:14:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:54.552 03:14:29 -- common/autotest_common.sh@10 -- # set +x 00:13:54.552 03:14:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:54.552 03:14:29 -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:54.552 03:14:29 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:13:54.552 03:14:29 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:54.552 03:14:29 -- nvmf/common.sh@117 -- # sync 00:13:54.552 03:14:29 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:54.552 03:14:29 -- nvmf/common.sh@120 -- # set +e 00:13:54.552 03:14:29 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:54.552 03:14:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:54.552 rmmod nvme_tcp 00:13:54.810 rmmod nvme_fabrics 00:13:54.810 rmmod nvme_keyring 00:13:54.810 03:14:29 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:54.810 03:14:29 -- nvmf/common.sh@124 -- # set -e 00:13:54.810 03:14:29 -- nvmf/common.sh@125 -- # return 0 00:13:54.810 03:14:29 -- nvmf/common.sh@478 -- # '[' -n 1466135 ']' 00:13:54.810 03:14:29 -- nvmf/common.sh@479 -- # killprocess 1466135 00:13:54.810 03:14:29 -- common/autotest_common.sh@936 -- # '[' -z 1466135 ']' 00:13:54.810 03:14:29 -- common/autotest_common.sh@940 -- # kill -0 1466135 00:13:54.810 03:14:29 -- common/autotest_common.sh@941 -- # uname 00:13:54.810 03:14:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:54.810 03:14:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1466135 00:13:54.810 03:14:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:54.810 03:14:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:54.810 03:14:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1466135' 00:13:54.810 killing process with pid 1466135 00:13:54.810 03:14:29 -- 
common/autotest_common.sh@955 -- # kill 1466135 00:13:54.810 03:14:29 -- common/autotest_common.sh@960 -- # wait 1466135 00:13:55.069 03:14:29 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:55.069 03:14:29 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:55.069 03:14:29 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:55.069 03:14:29 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:55.069 03:14:29 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:55.069 03:14:29 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:55.069 03:14:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:55.069 03:14:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:57.603 03:14:31 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:57.604 00:13:57.604 real 0m9.081s 00:13:57.604 user 0m18.947s 00:13:57.604 sys 0m2.223s 00:13:57.604 03:14:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:57.604 03:14:31 -- common/autotest_common.sh@10 -- # set +x 00:13:57.604 ************************************ 00:13:57.604 END TEST nvmf_nvme_cli 00:13:57.604 ************************************ 00:13:57.604 03:14:31 -- nvmf/nvmf.sh@40 -- # [[ 0 -eq 1 ]] 00:13:57.604 03:14:31 -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:57.604 03:14:31 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:57.604 03:14:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:57.604 03:14:31 -- common/autotest_common.sh@10 -- # set +x 00:13:57.604 ************************************ 00:13:57.604 START TEST nvmf_host_management 00:13:57.604 ************************************ 00:13:57.604 03:14:31 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:57.604 * Looking for test storage... 
00:13:57.604 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:57.604 03:14:31 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:57.604 03:14:31 -- nvmf/common.sh@7 -- # uname -s 00:13:57.604 03:14:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:57.604 03:14:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:57.604 03:14:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:57.604 03:14:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:57.604 03:14:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:57.604 03:14:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:57.604 03:14:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:57.604 03:14:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:57.604 03:14:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:57.604 03:14:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:57.604 03:14:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:57.604 03:14:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:57.604 03:14:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:57.604 03:14:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:57.604 03:14:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:57.604 03:14:31 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:57.604 03:14:31 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:57.604 03:14:31 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:57.604 03:14:31 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:57.604 03:14:31 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:57.604 03:14:31 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.604 03:14:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.604 03:14:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.604 03:14:31 -- paths/export.sh@5 -- # export PATH 00:13:57.604 03:14:31 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.604 03:14:31 -- nvmf/common.sh@47 -- # : 0 00:13:57.604 03:14:31 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:57.604 03:14:31 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:57.604 03:14:31 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:57.604 03:14:31 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:57.604 03:14:31 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:57.604 03:14:31 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:57.604 03:14:31 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:57.604 03:14:31 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:57.604 03:14:31 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:57.604 03:14:31 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:57.604 03:14:31 -- target/host_management.sh@105 -- # nvmftestinit 00:13:57.604 03:14:31 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:57.604 03:14:31 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:57.604 03:14:31 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:57.604 03:14:31 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:57.604 03:14:31 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:57.604 03:14:31 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:57.604 03:14:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:57.604 03:14:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:13:57.604 03:14:31 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:57.604 03:14:31 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:57.604 03:14:31 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:57.604 03:14:31 -- common/autotest_common.sh@10 -- # set +x 00:13:59.506 03:14:33 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:59.506 03:14:33 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:59.506 03:14:33 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:59.506 03:14:33 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:59.506 03:14:33 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:59.506 03:14:33 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:59.506 03:14:33 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:59.506 03:14:33 -- nvmf/common.sh@295 -- # net_devs=() 00:13:59.506 03:14:33 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:59.506 03:14:33 -- nvmf/common.sh@296 -- # e810=() 00:13:59.506 03:14:33 -- nvmf/common.sh@296 -- # local -ga e810 00:13:59.506 03:14:33 -- nvmf/common.sh@297 -- # x722=() 00:13:59.506 03:14:33 -- nvmf/common.sh@297 -- # local -ga x722 00:13:59.506 03:14:33 -- nvmf/common.sh@298 -- # mlx=() 00:13:59.506 03:14:33 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:59.506 03:14:33 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:59.506 03:14:33 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:59.506 03:14:33 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:59.506 03:14:33 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:59.506 03:14:33 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:59.506 03:14:33 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:59.506 03:14:33 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:59.506 03:14:33 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
00:13:59.506 03:14:33 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:59.506 03:14:33 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:59.506 03:14:33 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:59.506 03:14:33 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:59.506 03:14:33 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:59.506 03:14:33 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:59.506 03:14:33 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:59.506 03:14:33 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:59.506 03:14:33 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:59.506 03:14:33 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:59.506 03:14:33 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:59.506 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:59.506 03:14:33 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:59.506 03:14:33 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:59.506 03:14:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:59.506 03:14:33 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:59.506 03:14:33 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:59.506 03:14:33 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:59.506 03:14:33 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:59.506 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:59.507 03:14:33 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:59.507 03:14:33 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:59.507 03:14:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:59.507 03:14:33 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:59.507 03:14:33 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:59.507 03:14:33 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:59.507 03:14:33 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:59.507 
03:14:33 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:59.507 03:14:33 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:59.507 03:14:33 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:59.507 03:14:33 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:59.507 03:14:33 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:59.507 03:14:33 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:59.507 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:59.507 03:14:33 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:59.507 03:14:33 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:59.507 03:14:33 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:59.507 03:14:33 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:59.507 03:14:33 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:59.507 03:14:33 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:59.507 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:59.507 03:14:33 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:59.507 03:14:33 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:59.507 03:14:33 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:59.507 03:14:33 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:59.507 03:14:33 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:59.507 03:14:33 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:59.507 03:14:33 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:59.507 03:14:33 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:59.507 03:14:33 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:59.507 03:14:33 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:59.507 03:14:33 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:59.507 03:14:33 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:59.507 03:14:33 -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:59.507 03:14:33 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:59.507 03:14:33 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:59.507 03:14:33 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:59.507 03:14:33 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:59.507 03:14:33 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:59.507 03:14:33 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:59.507 03:14:33 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:59.507 03:14:33 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:59.507 03:14:33 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:59.507 03:14:33 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:59.507 03:14:33 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:59.507 03:14:33 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:59.507 03:14:33 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:59.507 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:59.507 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:13:59.507 00:13:59.507 --- 10.0.0.2 ping statistics --- 00:13:59.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:59.507 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:13:59.507 03:14:33 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:59.507 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:59.507 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:13:59.507 00:13:59.507 --- 10.0.0.1 ping statistics --- 00:13:59.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:59.507 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:13:59.507 03:14:33 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:59.507 03:14:33 -- nvmf/common.sh@411 -- # return 0 00:13:59.507 03:14:33 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:59.507 03:14:33 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:59.507 03:14:33 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:59.507 03:14:33 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:59.507 03:14:33 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:59.507 03:14:33 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:59.507 03:14:33 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:59.507 03:14:33 -- target/host_management.sh@107 -- # run_test nvmf_host_management nvmf_host_management 00:13:59.507 03:14:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:59.507 03:14:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:59.507 03:14:33 -- common/autotest_common.sh@10 -- # set +x 00:13:59.507 ************************************ 00:13:59.507 START TEST nvmf_host_management 00:13:59.507 ************************************ 00:13:59.507 03:14:33 -- common/autotest_common.sh@1111 -- # nvmf_host_management 00:13:59.507 03:14:33 -- target/host_management.sh@69 -- # starttarget 00:13:59.507 03:14:33 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:13:59.507 03:14:33 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:59.507 03:14:33 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:59.507 03:14:33 -- common/autotest_common.sh@10 -- # set +x 00:13:59.507 03:14:33 -- nvmf/common.sh@470 -- # nvmfpid=1468666 00:13:59.507 03:14:33 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:13:59.507 03:14:33 -- nvmf/common.sh@471 -- # waitforlisten 1468666 00:13:59.507 03:14:33 -- common/autotest_common.sh@817 -- # '[' -z 1468666 ']' 00:13:59.507 03:14:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:59.507 03:14:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:59.507 03:14:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:59.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:59.507 03:14:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:59.507 03:14:33 -- common/autotest_common.sh@10 -- # set +x 00:13:59.507 [2024-04-25 03:14:33.959521] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:13:59.507 [2024-04-25 03:14:33.959615] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:59.507 EAL: No free 2048 kB hugepages reported on node 1 00:13:59.766 [2024-04-25 03:14:34.030919] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:59.766 [2024-04-25 03:14:34.150394] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:59.766 [2024-04-25 03:14:34.150463] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:59.766 [2024-04-25 03:14:34.150489] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:59.766 [2024-04-25 03:14:34.150502] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:13:59.766 [2024-04-25 03:14:34.150514] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:59.766 [2024-04-25 03:14:34.150621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:59.766 [2024-04-25 03:14:34.150674] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:59.766 [2024-04-25 03:14:34.150727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:13:59.766 [2024-04-25 03:14:34.150731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:00.700 03:14:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:00.700 03:14:34 -- common/autotest_common.sh@850 -- # return 0 00:14:00.700 03:14:34 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:00.700 03:14:34 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:00.700 03:14:34 -- common/autotest_common.sh@10 -- # set +x 00:14:00.700 03:14:34 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:00.700 03:14:34 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:00.700 03:14:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:00.700 03:14:34 -- common/autotest_common.sh@10 -- # set +x 00:14:00.700 [2024-04-25 03:14:34.927533] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:00.700 03:14:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:00.700 03:14:34 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:14:00.700 03:14:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:00.700 03:14:34 -- common/autotest_common.sh@10 -- # set +x 00:14:00.700 03:14:34 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:14:00.700 03:14:34 -- target/host_management.sh@23 -- # cat 00:14:00.700 03:14:34 -- target/host_management.sh@30 -- # rpc_cmd 00:14:00.700 
03:14:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:00.700 03:14:34 -- common/autotest_common.sh@10 -- # set +x 00:14:00.700 Malloc0 00:14:00.700 [2024-04-25 03:14:34.986587] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:00.700 03:14:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:00.700 03:14:34 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:14:00.700 03:14:34 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:00.700 03:14:34 -- common/autotest_common.sh@10 -- # set +x 00:14:00.700 03:14:35 -- target/host_management.sh@73 -- # perfpid=1468839 00:14:00.700 03:14:35 -- target/host_management.sh@74 -- # waitforlisten 1468839 /var/tmp/bdevperf.sock 00:14:00.700 03:14:35 -- common/autotest_common.sh@817 -- # '[' -z 1468839 ']' 00:14:00.700 03:14:35 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:14:00.700 03:14:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:00.700 03:14:35 -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:14:00.700 03:14:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:00.700 03:14:35 -- nvmf/common.sh@521 -- # config=() 00:14:00.700 03:14:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:00.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:14:00.700 03:14:35 -- nvmf/common.sh@521 -- # local subsystem config 00:14:00.700 03:14:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:00.700 03:14:35 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:14:00.700 03:14:35 -- common/autotest_common.sh@10 -- # set +x 00:14:00.700 03:14:35 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:14:00.700 { 00:14:00.700 "params": { 00:14:00.700 "name": "Nvme$subsystem", 00:14:00.700 "trtype": "$TEST_TRANSPORT", 00:14:00.700 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:00.700 "adrfam": "ipv4", 00:14:00.700 "trsvcid": "$NVMF_PORT", 00:14:00.700 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:00.700 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:00.700 "hdgst": ${hdgst:-false}, 00:14:00.700 "ddgst": ${ddgst:-false} 00:14:00.700 }, 00:14:00.700 "method": "bdev_nvme_attach_controller" 00:14:00.700 } 00:14:00.700 EOF 00:14:00.700 )") 00:14:00.700 03:14:35 -- nvmf/common.sh@543 -- # cat 00:14:00.700 03:14:35 -- nvmf/common.sh@545 -- # jq . 00:14:00.700 03:14:35 -- nvmf/common.sh@546 -- # IFS=, 00:14:00.700 03:14:35 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:14:00.700 "params": { 00:14:00.700 "name": "Nvme0", 00:14:00.700 "trtype": "tcp", 00:14:00.700 "traddr": "10.0.0.2", 00:14:00.700 "adrfam": "ipv4", 00:14:00.700 "trsvcid": "4420", 00:14:00.700 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:00.700 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:00.700 "hdgst": false, 00:14:00.700 "ddgst": false 00:14:00.700 }, 00:14:00.700 "method": "bdev_nvme_attach_controller" 00:14:00.700 }' 00:14:00.700 [2024-04-25 03:14:35.053893] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 
00:14:00.700 [2024-04-25 03:14:35.054007] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1468839 ] 00:14:00.700 EAL: No free 2048 kB hugepages reported on node 1 00:14:00.700 [2024-04-25 03:14:35.115542] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:00.958 [2024-04-25 03:14:35.224148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:01.217 Running I/O for 10 seconds... 00:14:01.217 03:14:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:01.217 03:14:35 -- common/autotest_common.sh@850 -- # return 0 00:14:01.217 03:14:35 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:14:01.217 03:14:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:01.217 03:14:35 -- common/autotest_common.sh@10 -- # set +x 00:14:01.217 03:14:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:01.217 03:14:35 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:01.217 03:14:35 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:14:01.217 03:14:35 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:14:01.217 03:14:35 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:14:01.217 03:14:35 -- target/host_management.sh@52 -- # local ret=1 00:14:01.217 03:14:35 -- target/host_management.sh@53 -- # local i 00:14:01.217 03:14:35 -- target/host_management.sh@54 -- # (( i = 10 )) 00:14:01.217 03:14:35 -- target/host_management.sh@54 -- # (( i != 0 )) 00:14:01.217 03:14:35 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:14:01.217 03:14:35 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 
00:14:01.217 03:14:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:01.217 03:14:35 -- common/autotest_common.sh@10 -- # set +x 00:14:01.217 03:14:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:01.217 03:14:35 -- target/host_management.sh@55 -- # read_io_count=16 00:14:01.217 03:14:35 -- target/host_management.sh@58 -- # '[' 16 -ge 100 ']' 00:14:01.217 03:14:35 -- target/host_management.sh@62 -- # sleep 0.25 00:14:01.477 03:14:35 -- target/host_management.sh@54 -- # (( i-- )) 00:14:01.477 03:14:35 -- target/host_management.sh@54 -- # (( i != 0 )) 00:14:01.477 03:14:35 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:14:01.477 03:14:35 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:14:01.477 03:14:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:01.477 03:14:35 -- common/autotest_common.sh@10 -- # set +x 00:14:01.477 03:14:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:01.477 03:14:35 -- target/host_management.sh@55 -- # read_io_count=323 00:14:01.477 03:14:35 -- target/host_management.sh@58 -- # '[' 323 -ge 100 ']' 00:14:01.477 03:14:35 -- target/host_management.sh@59 -- # ret=0 00:14:01.477 03:14:35 -- target/host_management.sh@60 -- # break 00:14:01.477 03:14:35 -- target/host_management.sh@64 -- # return 0 00:14:01.477 03:14:35 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:01.477 03:14:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:01.477 03:14:35 -- common/autotest_common.sh@10 -- # set +x 00:14:01.477 [2024-04-25 03:14:35.921732] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2510a40 is same with the state(5) to be set 00:14:01.477 [2024-04-25 03:14:35.921826] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2510a40 is same with the state(5) to be set 00:14:01.477 [2024-04-25 
03:14:35.921842] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2510a40 is same with the state(5) to be set 00:14:01.477 [2024-04-25 03:14:35.921855] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2510a40 is same with the state(5) to be set 00:14:01.477 [2024-04-25 03:14:35.921867] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2510a40 is same with the state(5) to be set 00:14:01.477 [2024-04-25 03:14:35.921879] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2510a40 is same with the state(5) to be set 00:14:01.477 [2024-04-25 03:14:35.921891] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2510a40 is same with the state(5) to be set 00:14:01.477 [2024-04-25 03:14:35.921903] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2510a40 is same with the state(5) to be set 00:14:01.477 [2024-04-25 03:14:35.924259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:01.477 [2024-04-25 03:14:35.924312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.477 [2024-04-25 03:14:35.924332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:01.477 [2024-04-25 03:14:35.924346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.477 [2024-04-25 03:14:35.924359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:01.477 [2024-04-25 03:14:35.924373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:14:01.477 [2024-04-25 03:14:35.924386] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:01.477 [2024-04-25 03:14:35.924399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.477 [2024-04-25 03:14:35.924413] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f89a0 is same with the state(5) to be set 00:14:01.477 [2024-04-25 03:14:35.925390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.477 [2024-04-25 03:14:35.925417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.477 [2024-04-25 03:14:35.925441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:49280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.477 [2024-04-25 03:14:35.925456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.477 [2024-04-25 03:14:35.925472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:49408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.477 [2024-04-25 03:14:35.925486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.477 [2024-04-25 03:14:35.925500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:49536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.477 [2024-04-25 03:14:35.925514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.477 [2024-04-25 03:14:35.925530] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:49664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.477 [2024-04-25 03:14:35.925559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.477 [2024-04-25 03:14:35.925575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:49792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.477 [2024-04-25 03:14:35.925588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.477 [2024-04-25 03:14:35.925603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:49920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.477 [2024-04-25 03:14:35.925616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.477 [2024-04-25 03:14:35.925639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:50048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.477 [2024-04-25 03:14:35.925655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.477 [2024-04-25 03:14:35.925669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:50176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.477 [2024-04-25 03:14:35.925683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.477 [2024-04-25 03:14:35.925702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:50304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.477 [2024-04-25 03:14:35.925717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.477 [2024-04-25 03:14:35.925732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:50432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.477 [2024-04-25 03:14:35.925746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.477 [2024-04-25 03:14:35.925760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:50560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.477 [2024-04-25 03:14:35.925773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.477 [2024-04-25 03:14:35.925788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:50688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.477 [2024-04-25 03:14:35.925802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.477 [2024-04-25 03:14:35.925816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:50816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.477 [2024-04-25 03:14:35.925829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.478 [2024-04-25 03:14:35.925844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:50944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.478 [2024-04-25 03:14:35.925857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.478 [2024-04-25 03:14:35.925873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:51072 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:14:01.478 [2024-04-25 03:14:35.925886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.478 [2024-04-25 03:14:35.925902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:51200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.478 [2024-04-25 03:14:35.925916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.478 [2024-04-25 03:14:35.925939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:51328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.478 [2024-04-25 03:14:35.925953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.478 [2024-04-25 03:14:35.925968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:51456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.478 [2024-04-25 03:14:35.925993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.478 [2024-04-25 03:14:35.926009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:51584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.478 [2024-04-25 03:14:35.926022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.478 [2024-04-25 03:14:35.926037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:51712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.478 [2024-04-25 03:14:35.926051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.478 [2024-04-25 
03:14:35.926067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:51840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.478 [2024-04-25 03:14:35.926085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.478 [2024-04-25 03:14:35.926101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:51968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.478 03:14:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:01.478 [2024-04-25 03:14:35.926115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.478 [2024-04-25 03:14:35.926132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:52096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.478 [2024-04-25 03:14:35.926147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.478 [2024-04-25 03:14:35.926161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:52224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.478 [2024-04-25 03:14:35.926175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.478 [2024-04-25 03:14:35.926190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:52352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.478 [2024-04-25 03:14:35.926203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.478 [2024-04-25 03:14:35.926219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:52480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:14:01.478 [2024-04-25 03:14:35.926233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.478 [2024-04-25 03:14:35.926249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:52608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.478 [2024-04-25 03:14:35.926262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.478 03:14:35 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:01.478 [2024-04-25 03:14:35.926277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:52736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.478 [2024-04-25 03:14:35.926291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.478 [2024-04-25 03:14:35.926306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:52864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.478 [2024-04-25 03:14:35.926319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.478 [2024-04-25 03:14:35.926335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:52992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.478 [2024-04-25 03:14:35.926348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.478 [2024-04-25 03:14:35.926363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:53120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.478 [2024-04-25 03:14:35.926377] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.478 [2024-04-25 03:14:35.926392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:53248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.478 03:14:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:01.478 [2024-04-25 03:14:35.926405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.478 [2024-04-25 03:14:35.926425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:53376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.478 [2024-04-25 03:14:35.926439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.478 [2024-04-25 03:14:35.926455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:53504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.478 [2024-04-25 03:14:35.926468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.478 [2024-04-25 03:14:35.926482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:53632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.478 [2024-04-25 03:14:35.926496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.478 03:14:35 -- common/autotest_common.sh@10 -- # set +x 00:14:01.478 [2024-04-25 03:14:35.926511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:53760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.478 [2024-04-25 03:14:35.926526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.478 [2024-04-25 03:14:35.926541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:53888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.478 [2024-04-25 03:14:35.926554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.478 [2024-04-25 03:14:35.926568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:54016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.478 [2024-04-25 03:14:35.926583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.478 [2024-04-25 03:14:35.926599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:54144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.478 [2024-04-25 03:14:35.926612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.478 [2024-04-25 03:14:35.926634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:54272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.478 [2024-04-25 03:14:35.926649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.478 [2024-04-25 03:14:35.926665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:54400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.478 [2024-04-25 03:14:35.926685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.478 [2024-04-25 03:14:35.926700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:54528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.478 
[2024-04-25 03:14:35.926714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.478 [2024-04-25 03:14:35.926729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:54656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.478 [2024-04-25 03:14:35.926742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.478 [2024-04-25 03:14:35.926757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:54784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.478 [2024-04-25 03:14:35.926771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.478 [2024-04-25 03:14:35.926786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:54912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.478 [2024-04-25 03:14:35.926803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.478 [2024-04-25 03:14:35.926819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:55040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.478 [2024-04-25 03:14:35.926833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.478 [2024-04-25 03:14:35.926848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:55168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.478 [2024-04-25 03:14:35.926861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.478 [2024-04-25 03:14:35.926876] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:55296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.479 [2024-04-25 03:14:35.926889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.479 [2024-04-25 03:14:35.926905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:55424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.479 [2024-04-25 03:14:35.926930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.479 [2024-04-25 03:14:35.926944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:55552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.479 [2024-04-25 03:14:35.926958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.479 [2024-04-25 03:14:35.926972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:55680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.479 [2024-04-25 03:14:35.926995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.479 [2024-04-25 03:14:35.927009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:55808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.479 [2024-04-25 03:14:35.927023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.479 [2024-04-25 03:14:35.927038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:55936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.479 [2024-04-25 03:14:35.927051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.479 [2024-04-25 03:14:35.927066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:56064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.479 [2024-04-25 03:14:35.927081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.479 [2024-04-25 03:14:35.927095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:56192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.479 [2024-04-25 03:14:35.927109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.479 [2024-04-25 03:14:35.927123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:56320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.479 [2024-04-25 03:14:35.927137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.479 [2024-04-25 03:14:35.927153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:56448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.479 [2024-04-25 03:14:35.927166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.479 [2024-04-25 03:14:35.927185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:56576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.479 [2024-04-25 03:14:35.927199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.479 [2024-04-25 03:14:35.927214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:56704 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.479 [2024-04-25 03:14:35.927228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.479 [2024-04-25 03:14:35.927243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:56832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.479 [2024-04-25 03:14:35.927256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.479 [2024-04-25 03:14:35.927271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:56960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.479 [2024-04-25 03:14:35.927284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.479 [2024-04-25 03:14:35.927300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:57088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.479 [2024-04-25 03:14:35.927313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.479 [2024-04-25 03:14:35.927328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:57216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.479 [2024-04-25 03:14:35.927342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.479 [2024-04-25 03:14:35.927434] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1c09110 was disconnected and freed. reset controller. 
00:14:01.479 [2024-04-25 03:14:35.928530] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:14:01.479 task offset: 49152 on job bdev=Nvme0n1 fails 00:14:01.479 00:14:01.479 Latency(us) 00:14:01.479 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:01.479 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:01.479 Job: Nvme0n1 ended in about 0.40 seconds with error 00:14:01.479 Verification LBA range: start 0x0 length 0x400 00:14:01.479 Nvme0n1 : 0.40 969.71 60.61 161.62 0.00 55001.15 2427.26 46020.84 00:14:01.479 =================================================================================================================== 00:14:01.479 Total : 969.71 60.61 161.62 0.00 55001.15 2427.26 46020.84 00:14:01.479 [2024-04-25 03:14:35.930451] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:01.479 [2024-04-25 03:14:35.930479] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f89a0 (9): Bad file descriptor 00:14:01.479 03:14:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:01.479 03:14:35 -- target/host_management.sh@87 -- # sleep 1 00:14:01.479 [2024-04-25 03:14:35.950916] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:14:02.854 03:14:36 -- target/host_management.sh@91 -- # kill -9 1468839 00:14:02.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1468839) - No such process 00:14:02.854 03:14:36 -- target/host_management.sh@91 -- # true 00:14:02.854 03:14:36 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:14:02.854 03:14:36 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:14:02.854 03:14:36 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:14:02.854 03:14:36 -- nvmf/common.sh@521 -- # config=() 00:14:02.854 03:14:36 -- nvmf/common.sh@521 -- # local subsystem config 00:14:02.854 03:14:36 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:14:02.854 03:14:36 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:14:02.854 { 00:14:02.854 "params": { 00:14:02.854 "name": "Nvme$subsystem", 00:14:02.854 "trtype": "$TEST_TRANSPORT", 00:14:02.854 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:02.854 "adrfam": "ipv4", 00:14:02.854 "trsvcid": "$NVMF_PORT", 00:14:02.854 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:02.854 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:02.854 "hdgst": ${hdgst:-false}, 00:14:02.854 "ddgst": ${ddgst:-false} 00:14:02.854 }, 00:14:02.854 "method": "bdev_nvme_attach_controller" 00:14:02.854 } 00:14:02.854 EOF 00:14:02.854 )") 00:14:02.854 03:14:36 -- nvmf/common.sh@543 -- # cat 00:14:02.854 03:14:36 -- nvmf/common.sh@545 -- # jq . 
00:14:02.854 03:14:36 -- nvmf/common.sh@546 -- # IFS=, 00:14:02.854 03:14:36 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:14:02.854 "params": { 00:14:02.854 "name": "Nvme0", 00:14:02.854 "trtype": "tcp", 00:14:02.854 "traddr": "10.0.0.2", 00:14:02.854 "adrfam": "ipv4", 00:14:02.854 "trsvcid": "4420", 00:14:02.854 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:02.854 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:02.854 "hdgst": false, 00:14:02.854 "ddgst": false 00:14:02.854 }, 00:14:02.854 "method": "bdev_nvme_attach_controller" 00:14:02.854 }' 00:14:02.854 [2024-04-25 03:14:36.980514] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:14:02.854 [2024-04-25 03:14:36.980590] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1469117 ] 00:14:02.854 EAL: No free 2048 kB hugepages reported on node 1 00:14:02.854 [2024-04-25 03:14:37.042346] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:02.854 [2024-04-25 03:14:37.152093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:02.854 Running I/O for 1 seconds... 
00:14:04.233 00:14:04.233 Latency(us) 00:14:04.233 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:04.233 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:04.233 Verification LBA range: start 0x0 length 0x400 00:14:04.233 Nvme0n1 : 1.01 1143.83 71.49 0.00 0.00 55167.01 15049.01 46020.84 00:14:04.233 =================================================================================================================== 00:14:04.233 Total : 1143.83 71.49 0.00 0.00 55167.01 15049.01 46020.84 00:14:04.233 03:14:38 -- target/host_management.sh@102 -- # stoptarget 00:14:04.233 03:14:38 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:14:04.233 03:14:38 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:14:04.233 03:14:38 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:14:04.233 03:14:38 -- target/host_management.sh@40 -- # nvmftestfini 00:14:04.233 03:14:38 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:04.233 03:14:38 -- nvmf/common.sh@117 -- # sync 00:14:04.233 03:14:38 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:04.233 03:14:38 -- nvmf/common.sh@120 -- # set +e 00:14:04.233 03:14:38 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:04.233 03:14:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:04.233 rmmod nvme_tcp 00:14:04.233 rmmod nvme_fabrics 00:14:04.233 rmmod nvme_keyring 00:14:04.233 03:14:38 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:04.233 03:14:38 -- nvmf/common.sh@124 -- # set -e 00:14:04.233 03:14:38 -- nvmf/common.sh@125 -- # return 0 00:14:04.233 03:14:38 -- nvmf/common.sh@478 -- # '[' -n 1468666 ']' 00:14:04.233 03:14:38 -- nvmf/common.sh@479 -- # killprocess 1468666 00:14:04.233 03:14:38 -- common/autotest_common.sh@936 -- # '[' -z 1468666 ']' 00:14:04.233 03:14:38 -- 
common/autotest_common.sh@940 -- # kill -0 1468666 00:14:04.233 03:14:38 -- common/autotest_common.sh@941 -- # uname 00:14:04.233 03:14:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:04.233 03:14:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1468666 00:14:04.233 03:14:38 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:04.233 03:14:38 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:04.233 03:14:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1468666' 00:14:04.233 killing process with pid 1468666 00:14:04.233 03:14:38 -- common/autotest_common.sh@955 -- # kill 1468666 00:14:04.233 03:14:38 -- common/autotest_common.sh@960 -- # wait 1468666 00:14:04.498 [2024-04-25 03:14:38.976463] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:14:04.776 03:14:39 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:04.776 03:14:39 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:04.776 03:14:39 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:04.776 03:14:39 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:04.776 03:14:39 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:04.776 03:14:39 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:04.776 03:14:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:04.776 03:14:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:06.678 03:14:41 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:06.678 00:14:06.678 real 0m7.136s 00:14:06.678 user 0m21.748s 00:14:06.678 sys 0m1.166s 00:14:06.678 03:14:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:06.678 03:14:41 -- common/autotest_common.sh@10 -- # set +x 00:14:06.678 ************************************ 00:14:06.678 END TEST nvmf_host_management 00:14:06.678 ************************************ 00:14:06.678 03:14:41 -- 
target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:14:06.678 00:14:06.678 real 0m9.430s 00:14:06.678 user 0m22.588s 00:14:06.678 sys 0m2.633s 00:14:06.678 03:14:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:06.678 03:14:41 -- common/autotest_common.sh@10 -- # set +x 00:14:06.678 ************************************ 00:14:06.678 END TEST nvmf_host_management 00:14:06.678 ************************************ 00:14:06.678 03:14:41 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:06.678 03:14:41 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:06.678 03:14:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:06.678 03:14:41 -- common/autotest_common.sh@10 -- # set +x 00:14:06.937 ************************************ 00:14:06.937 START TEST nvmf_lvol 00:14:06.937 ************************************ 00:14:06.938 03:14:41 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:06.938 * Looking for test storage... 
00:14:06.938 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:06.938 03:14:41 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:06.938 03:14:41 -- nvmf/common.sh@7 -- # uname -s 00:14:06.938 03:14:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:06.938 03:14:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:06.938 03:14:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:06.938 03:14:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:06.938 03:14:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:06.938 03:14:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:06.938 03:14:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:06.938 03:14:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:06.938 03:14:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:06.938 03:14:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:06.938 03:14:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:06.938 03:14:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:06.938 03:14:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:06.938 03:14:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:06.938 03:14:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:06.938 03:14:41 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:06.938 03:14:41 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:06.938 03:14:41 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:06.938 03:14:41 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:06.938 03:14:41 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:06.938 03:14:41 -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.938 03:14:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.938 03:14:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.938 03:14:41 -- paths/export.sh@5 -- # export PATH 00:14:06.938 03:14:41 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.938 03:14:41 -- nvmf/common.sh@47 -- # : 0 00:14:06.938 03:14:41 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:06.938 03:14:41 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:06.938 03:14:41 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:06.938 03:14:41 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:06.938 03:14:41 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:06.938 03:14:41 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:06.938 03:14:41 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:06.938 03:14:41 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:06.938 03:14:41 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:06.938 03:14:41 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:06.938 03:14:41 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:14:06.938 03:14:41 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:14:06.938 03:14:41 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:06.938 03:14:41 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:14:06.938 03:14:41 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:06.938 03:14:41 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:06.938 03:14:41 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:06.938 03:14:41 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:06.938 03:14:41 -- nvmf/common.sh@401 -- # remove_spdk_ns 
00:14:06.938 03:14:41 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:06.938 03:14:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:06.938 03:14:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:06.938 03:14:41 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:14:06.938 03:14:41 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:14:06.938 03:14:41 -- nvmf/common.sh@285 -- # xtrace_disable 00:14:06.938 03:14:41 -- common/autotest_common.sh@10 -- # set +x 00:14:08.839 03:14:43 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:08.839 03:14:43 -- nvmf/common.sh@291 -- # pci_devs=() 00:14:08.839 03:14:43 -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:08.839 03:14:43 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:08.839 03:14:43 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:08.839 03:14:43 -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:08.839 03:14:43 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:08.839 03:14:43 -- nvmf/common.sh@295 -- # net_devs=() 00:14:08.839 03:14:43 -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:08.839 03:14:43 -- nvmf/common.sh@296 -- # e810=() 00:14:08.839 03:14:43 -- nvmf/common.sh@296 -- # local -ga e810 00:14:08.839 03:14:43 -- nvmf/common.sh@297 -- # x722=() 00:14:08.839 03:14:43 -- nvmf/common.sh@297 -- # local -ga x722 00:14:08.839 03:14:43 -- nvmf/common.sh@298 -- # mlx=() 00:14:08.839 03:14:43 -- nvmf/common.sh@298 -- # local -ga mlx 00:14:08.839 03:14:43 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:08.839 03:14:43 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:08.839 03:14:43 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:08.839 03:14:43 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:08.839 03:14:43 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:08.839 03:14:43 -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:08.839 03:14:43 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:08.839 03:14:43 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:08.839 03:14:43 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:08.839 03:14:43 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:08.839 03:14:43 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:08.839 03:14:43 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:08.839 03:14:43 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:08.839 03:14:43 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:08.839 03:14:43 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:08.839 03:14:43 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:08.839 03:14:43 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:08.839 03:14:43 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:08.839 03:14:43 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:08.839 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:08.839 03:14:43 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:08.839 03:14:43 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:08.839 03:14:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:08.839 03:14:43 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:08.839 03:14:43 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:08.839 03:14:43 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:08.839 03:14:43 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:08.839 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:08.839 03:14:43 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:08.839 03:14:43 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:08.839 03:14:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:08.839 03:14:43 -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:08.839 03:14:43 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:08.839 03:14:43 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:08.839 03:14:43 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:08.839 03:14:43 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:08.839 03:14:43 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:08.839 03:14:43 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:08.839 03:14:43 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:08.839 03:14:43 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:08.839 03:14:43 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:08.839 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:08.839 03:14:43 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:08.839 03:14:43 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:08.839 03:14:43 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:08.839 03:14:43 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:08.839 03:14:43 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:08.839 03:14:43 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:08.839 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:08.839 03:14:43 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:08.839 03:14:43 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:14:08.839 03:14:43 -- nvmf/common.sh@403 -- # is_hw=yes 00:14:08.839 03:14:43 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:14:08.839 03:14:43 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:14:08.839 03:14:43 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:14:08.839 03:14:43 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:08.839 03:14:43 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:08.839 03:14:43 -- nvmf/common.sh@231 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:08.839 03:14:43 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:08.839 03:14:43 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:08.839 03:14:43 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:08.839 03:14:43 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:08.839 03:14:43 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:08.839 03:14:43 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:08.839 03:14:43 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:08.839 03:14:43 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:08.839 03:14:43 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:08.839 03:14:43 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:08.839 03:14:43 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:08.839 03:14:43 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:08.839 03:14:43 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:08.839 03:14:43 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:08.839 03:14:43 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:08.839 03:14:43 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:08.839 03:14:43 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:09.098 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:09.098 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:14:09.098 00:14:09.098 --- 10.0.0.2 ping statistics --- 00:14:09.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:09.098 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:14:09.098 03:14:43 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:09.098 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:09.098 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:14:09.098 00:14:09.098 --- 10.0.0.1 ping statistics --- 00:14:09.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:09.098 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:14:09.098 03:14:43 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:09.098 03:14:43 -- nvmf/common.sh@411 -- # return 0 00:14:09.098 03:14:43 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:09.098 03:14:43 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:09.098 03:14:43 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:09.098 03:14:43 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:09.098 03:14:43 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:09.098 03:14:43 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:09.098 03:14:43 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:09.098 03:14:43 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:14:09.098 03:14:43 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:09.098 03:14:43 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:09.098 03:14:43 -- common/autotest_common.sh@10 -- # set +x 00:14:09.098 03:14:43 -- nvmf/common.sh@470 -- # nvmfpid=1471216 00:14:09.098 03:14:43 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:09.098 03:14:43 -- nvmf/common.sh@471 -- # waitforlisten 1471216 00:14:09.098 03:14:43 -- common/autotest_common.sh@817 -- # '[' -z 1471216 ']' 00:14:09.098 03:14:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:09.098 03:14:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:09.098 03:14:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:09.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:09.098 03:14:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:09.098 03:14:43 -- common/autotest_common.sh@10 -- # set +x 00:14:09.098 [2024-04-25 03:14:43.417816] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:14:09.098 [2024-04-25 03:14:43.417903] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:09.098 EAL: No free 2048 kB hugepages reported on node 1 00:14:09.098 [2024-04-25 03:14:43.493534] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:09.356 [2024-04-25 03:14:43.613971] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:09.356 [2024-04-25 03:14:43.614022] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:09.356 [2024-04-25 03:14:43.614036] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:09.356 [2024-04-25 03:14:43.614049] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:09.356 [2024-04-25 03:14:43.614059] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:09.356 [2024-04-25 03:14:43.614117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:09.356 [2024-04-25 03:14:43.614156] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:09.356 [2024-04-25 03:14:43.614159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:09.356 03:14:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:09.356 03:14:43 -- common/autotest_common.sh@850 -- # return 0 00:14:09.356 03:14:43 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:09.356 03:14:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:09.356 03:14:43 -- common/autotest_common.sh@10 -- # set +x 00:14:09.356 03:14:43 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:09.356 03:14:43 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:09.614 [2024-04-25 03:14:43.968910] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:09.614 03:14:43 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:09.872 03:14:44 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:14:09.872 03:14:44 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:10.131 03:14:44 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:14:10.131 03:14:44 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:14:10.389 03:14:44 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:14:10.648 03:14:45 -- target/nvmf_lvol.sh@29 -- # lvs=97d6d383-f549-4426-be84-3b9f5022d124 00:14:10.648 03:14:45 -- target/nvmf_lvol.sh@32 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 97d6d383-f549-4426-be84-3b9f5022d124 lvol 20 00:14:10.906 03:14:45 -- target/nvmf_lvol.sh@32 -- # lvol=b216bc0b-f6af-4158-beee-0126a6e0bbc8 00:14:10.906 03:14:45 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:11.163 03:14:45 -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b216bc0b-f6af-4158-beee-0126a6e0bbc8 00:14:11.420 03:14:45 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:11.678 [2024-04-25 03:14:45.992695] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:11.678 03:14:46 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:11.935 03:14:46 -- target/nvmf_lvol.sh@42 -- # perf_pid=1471638 00:14:11.935 03:14:46 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:14:11.935 03:14:46 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:14:11.935 EAL: No free 2048 kB hugepages reported on node 1 00:14:12.869 03:14:47 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot b216bc0b-f6af-4158-beee-0126a6e0bbc8 MY_SNAPSHOT 00:14:13.126 03:14:47 -- target/nvmf_lvol.sh@47 -- # snapshot=4efba74e-3b8a-4959-8d04-08b489e52331 00:14:13.126 03:14:47 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 
b216bc0b-f6af-4158-beee-0126a6e0bbc8 30 00:14:13.384 03:14:47 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 4efba74e-3b8a-4959-8d04-08b489e52331 MY_CLONE 00:14:13.642 03:14:48 -- target/nvmf_lvol.sh@49 -- # clone=f30c35b5-ccf5-43e8-8f19-c13961aa7bee 00:14:13.642 03:14:48 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate f30c35b5-ccf5-43e8-8f19-c13961aa7bee 00:14:14.207 03:14:48 -- target/nvmf_lvol.sh@53 -- # wait 1471638 00:14:22.320 Initializing NVMe Controllers 00:14:22.320 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:22.320 Controller IO queue size 128, less than required. 00:14:22.320 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:22.320 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:14:22.320 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:14:22.320 Initialization complete. Launching workers. 
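The trace above exercises the lvol lifecycle while `spdk_nvme_perf` is writing to the volume: `bdev_lvol_snapshot`, then `bdev_lvol_resize` of the now-writable original, then `bdev_lvol_clone` off the snapshot, then `bdev_lvol_inflate` of the clone. A condensed sketch of that sequence as a standalone script follows; the rpc.py subcommands are the ones visible in the trace, but the `$SPDK_DIR` variable and the `lvs`/`lvol` names are placeholders, and it assumes a running `nvmf_tgt` with a `raid0` bdev already present, not the exact state of this run.

```shell
#!/usr/bin/env bash
# Sketch of the lvol snapshot/clone flow traced in nvmf_lvol.sh above.
# Assumes a running nvmf_tgt reachable on the default RPC socket and an
# existing bdev named "raid0"; names and $SPDK_DIR are placeholders.
set -euo pipefail
rpc="${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}/scripts/rpc.py"

lvs=$("$rpc" bdev_lvol_create_lvstore raid0 lvs)       # prints the lvstore UUID
lvol=$("$rpc" bdev_lvol_create -u "$lvs" lvol 20)      # 20 MiB lvol, prints its UUID
snap=$("$rpc" bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)  # original becomes a clone of the snapshot
"$rpc" bdev_lvol_resize "$lvol" 30                     # grow the writable original to 30 MiB
clone=$("$rpc" bdev_lvol_clone "$snap" MY_CLONE)       # second clone off the same snapshot
"$rpc" bdev_lvol_inflate "$clone"                      # allocate all clusters, detach from snapshot
echo "snapshot=$snap clone=$clone"
```

In the test the snapshot/resize/clone/inflate calls deliberately race against live perf I/O, which is why the script only waits on the perf pid afterwards.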
00:14:22.320 ========================================================
00:14:22.320 Latency(us)
00:14:22.320 Device Information : IOPS MiB/s Average min max
00:14:22.320 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 8259.50 32.26 15508.95 464.22 104205.46
00:14:22.320 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11128.20 43.47 11509.63 2363.47 72120.45
00:14:22.320 ========================================================
00:14:22.320 Total : 19387.70 75.73 13213.41 464.22 104205.46
00:14:22.320 
00:14:22.320 03:14:56 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:14:22.578 03:14:56 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b216bc0b-f6af-4158-beee-0126a6e0bbc8
00:14:22.836 03:14:57 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 97d6d383-f549-4426-be84-3b9f5022d124
00:14:23.094 03:14:57 -- target/nvmf_lvol.sh@60 -- # rm -f
00:14:23.094 03:14:57 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:14:23.094 03:14:57 -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:14:23.094 03:14:57 -- nvmf/common.sh@477 -- # nvmfcleanup
00:14:23.094 03:14:57 -- nvmf/common.sh@117 -- # sync
00:14:23.094 03:14:57 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:14:23.094 03:14:57 -- nvmf/common.sh@120 -- # set +e
00:14:23.094 03:14:57 -- nvmf/common.sh@121 -- # for i in {1..20}
00:14:23.094 03:14:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:14:23.094 rmmod nvme_tcp
00:14:23.094 rmmod nvme_fabrics
00:14:23.094 rmmod nvme_keyring
00:14:23.094 03:14:57 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:14:23.094 03:14:57 -- nvmf/common.sh@124 -- # set -e
00:14:23.094 03:14:57 -- nvmf/common.sh@125 -- # return 0
00:14:23.094 03:14:57 -- nvmf/common.sh@478 -- # '[' -n
1471216 ']' 00:14:23.094 03:14:57 -- nvmf/common.sh@479 -- # killprocess 1471216 00:14:23.094 03:14:57 -- common/autotest_common.sh@936 -- # '[' -z 1471216 ']' 00:14:23.094 03:14:57 -- common/autotest_common.sh@940 -- # kill -0 1471216 00:14:23.094 03:14:57 -- common/autotest_common.sh@941 -- # uname 00:14:23.094 03:14:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:23.094 03:14:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1471216 00:14:23.094 03:14:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:23.094 03:14:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:23.094 03:14:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1471216' 00:14:23.094 killing process with pid 1471216 00:14:23.094 03:14:57 -- common/autotest_common.sh@955 -- # kill 1471216 00:14:23.094 03:14:57 -- common/autotest_common.sh@960 -- # wait 1471216 00:14:23.661 03:14:57 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:23.661 03:14:57 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:23.661 03:14:57 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:23.661 03:14:57 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:23.661 03:14:57 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:23.661 03:14:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:23.661 03:14:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:23.661 03:14:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:25.564 03:14:59 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:25.564 00:14:25.564 real 0m18.723s 00:14:25.564 user 0m57.076s 00:14:25.564 sys 0m8.254s 00:14:25.564 03:14:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:25.564 03:14:59 -- common/autotest_common.sh@10 -- # set +x 00:14:25.564 ************************************ 00:14:25.564 END TEST nvmf_lvol 00:14:25.564 ************************************ 
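Both tests in this log (nvmf_lvol above, nvmf_lvs_grow below) run the same `nvmf_tcp_init` rig before launching `nvmf_tgt`: one port of the dual-port E810 NIC (`cvl_0_0`) is moved into a private network namespace as the target side, its peer port (`cvl_0_1`) stays in the root namespace as the initiator, and an iptables rule opens TCP port 4420. A condensed sketch of that setup, using the interface names and addresses from the trace (run as root; assumes both ports are cabled back-to-back as on this test host):

```shell
#!/usr/bin/env bash
# Condensed nvmf_tcp_init as it runs in this log: target port in a netns,
# initiator port in the root namespace. Interface names match the trace.
set -euo pipefail
NS=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                 # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side (root namespace)
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                              # initiator -> target sanity check
ip netns exec "$NS" ping -c 1 10.0.0.1          # target -> initiator sanity check
```

`nvmf_tgt` is then started inside the namespace (`ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt ...`), which is why every subsequent rpc.py call in the log talks to a target listening on 10.0.0.2 while the perf initiator connects from 10.0.0.1.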
00:14:25.564 03:14:59 -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:25.564 03:14:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:25.564 03:14:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:25.564 03:14:59 -- common/autotest_common.sh@10 -- # set +x 00:14:25.564 ************************************ 00:14:25.564 START TEST nvmf_lvs_grow 00:14:25.564 ************************************ 00:14:25.564 03:15:00 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:25.823 * Looking for test storage... 00:14:25.823 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:25.823 03:15:00 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:25.823 03:15:00 -- nvmf/common.sh@7 -- # uname -s 00:14:25.823 03:15:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:25.823 03:15:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:25.823 03:15:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:25.823 03:15:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:25.823 03:15:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:25.823 03:15:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:25.823 03:15:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:25.823 03:15:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:25.823 03:15:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:25.823 03:15:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:25.823 03:15:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:25.823 03:15:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:25.823 03:15:00 -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:25.823 03:15:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:25.823 03:15:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:25.823 03:15:00 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:25.823 03:15:00 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:25.823 03:15:00 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:25.823 03:15:00 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:25.823 03:15:00 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:25.823 03:15:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.823 03:15:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.823 03:15:00 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.823 03:15:00 -- paths/export.sh@5 -- # export PATH 00:14:25.823 03:15:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.823 03:15:00 -- nvmf/common.sh@47 -- # : 0 00:14:25.823 03:15:00 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:25.823 03:15:00 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:25.823 03:15:00 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:25.823 03:15:00 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:25.823 03:15:00 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:25.823 03:15:00 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:25.823 03:15:00 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:25.823 03:15:00 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:25.823 03:15:00 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:25.823 03:15:00 -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:25.824 03:15:00 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:14:25.824 03:15:00 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:25.824 03:15:00 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:25.824 03:15:00 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:25.824 03:15:00 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:25.824 03:15:00 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:25.824 03:15:00 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:25.824 03:15:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:25.824 03:15:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:25.824 03:15:00 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:14:25.824 03:15:00 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:14:25.824 03:15:00 -- nvmf/common.sh@285 -- # xtrace_disable 00:14:25.824 03:15:00 -- common/autotest_common.sh@10 -- # set +x 00:14:27.728 03:15:01 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:27.728 03:15:01 -- nvmf/common.sh@291 -- # pci_devs=() 00:14:27.728 03:15:01 -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:27.728 03:15:01 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:27.728 03:15:01 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:27.728 03:15:01 -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:27.728 03:15:01 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:27.728 03:15:01 -- nvmf/common.sh@295 -- # net_devs=() 00:14:27.728 03:15:01 -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:27.728 03:15:01 -- nvmf/common.sh@296 -- # e810=() 00:14:27.728 03:15:01 -- nvmf/common.sh@296 -- # local -ga e810 00:14:27.728 03:15:01 -- nvmf/common.sh@297 -- # x722=() 00:14:27.728 03:15:01 -- nvmf/common.sh@297 -- # local -ga x722 00:14:27.728 03:15:01 -- nvmf/common.sh@298 -- # mlx=() 00:14:27.728 03:15:01 -- nvmf/common.sh@298 -- # local -ga mlx 00:14:27.728 03:15:01 -- 
nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:27.728 03:15:01 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:27.728 03:15:01 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:27.728 03:15:01 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:27.728 03:15:01 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:27.728 03:15:01 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:27.728 03:15:01 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:27.728 03:15:01 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:27.728 03:15:01 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:27.728 03:15:01 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:27.728 03:15:01 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:27.728 03:15:01 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:27.728 03:15:01 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:27.728 03:15:01 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:27.728 03:15:01 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:27.728 03:15:01 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:27.728 03:15:01 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:27.728 03:15:01 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:27.728 03:15:01 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:27.728 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:27.728 03:15:01 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:27.728 03:15:01 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:27.728 03:15:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:27.728 03:15:01 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:27.728 03:15:01 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:27.728 
03:15:01 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:27.728 03:15:01 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:27.728 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:27.728 03:15:01 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:27.728 03:15:01 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:27.728 03:15:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:27.728 03:15:01 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:27.728 03:15:01 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:27.728 03:15:01 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:27.728 03:15:01 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:27.728 03:15:01 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:27.728 03:15:01 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:27.728 03:15:01 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:27.728 03:15:01 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:27.728 03:15:01 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:27.728 03:15:01 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:27.728 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:27.728 03:15:01 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:27.728 03:15:01 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:27.728 03:15:01 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:27.728 03:15:01 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:27.728 03:15:01 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:27.728 03:15:01 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:27.728 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:27.728 03:15:01 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:27.728 03:15:01 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:14:27.728 03:15:01 -- 
nvmf/common.sh@403 -- # is_hw=yes 00:14:27.728 03:15:01 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:14:27.728 03:15:01 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:14:27.728 03:15:01 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:14:27.728 03:15:01 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:27.728 03:15:01 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:27.728 03:15:01 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:27.728 03:15:01 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:27.728 03:15:01 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:27.728 03:15:01 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:27.728 03:15:01 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:27.728 03:15:01 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:27.728 03:15:01 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:27.728 03:15:01 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:27.728 03:15:01 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:27.728 03:15:01 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:27.728 03:15:01 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:27.728 03:15:02 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:27.728 03:15:02 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:27.728 03:15:02 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:27.728 03:15:02 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:27.728 03:15:02 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:27.728 03:15:02 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:27.728 03:15:02 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:27.728 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:27.728 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:14:27.728 00:14:27.728 --- 10.0.0.2 ping statistics --- 00:14:27.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:27.728 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:14:27.728 03:15:02 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:27.728 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:27.728 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:14:27.728 00:14:27.728 --- 10.0.0.1 ping statistics --- 00:14:27.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:27.728 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:14:27.728 03:15:02 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:27.728 03:15:02 -- nvmf/common.sh@411 -- # return 0 00:14:27.728 03:15:02 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:27.728 03:15:02 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:27.728 03:15:02 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:27.728 03:15:02 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:27.728 03:15:02 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:27.728 03:15:02 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:27.728 03:15:02 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:27.728 03:15:02 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:14:27.728 03:15:02 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:27.728 03:15:02 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:27.728 03:15:02 -- common/autotest_common.sh@10 -- # set +x 00:14:27.728 03:15:02 -- nvmf/common.sh@470 -- # nvmfpid=1475022 00:14:27.728 03:15:02 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:27.728 03:15:02 -- nvmf/common.sh@471 -- # waitforlisten 1475022 00:14:27.728 03:15:02 -- 
common/autotest_common.sh@817 -- # '[' -z 1475022 ']' 00:14:27.728 03:15:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:27.728 03:15:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:27.728 03:15:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:27.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:27.728 03:15:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:27.728 03:15:02 -- common/autotest_common.sh@10 -- # set +x 00:14:27.728 [2024-04-25 03:15:02.163542] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:14:27.728 [2024-04-25 03:15:02.163643] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:27.728 EAL: No free 2048 kB hugepages reported on node 1 00:14:27.987 [2024-04-25 03:15:02.235795] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:27.987 [2024-04-25 03:15:02.349913] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:27.987 [2024-04-25 03:15:02.349990] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:27.987 [2024-04-25 03:15:02.350003] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:27.987 [2024-04-25 03:15:02.350015] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:27.987 [2024-04-25 03:15:02.350026] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:27.987 [2024-04-25 03:15:02.350053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:27.987 03:15:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:27.987 03:15:02 -- common/autotest_common.sh@850 -- # return 0 00:14:27.987 03:15:02 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:27.987 03:15:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:27.987 03:15:02 -- common/autotest_common.sh@10 -- # set +x 00:14:28.244 03:15:02 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:28.244 03:15:02 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:28.503 [2024-04-25 03:15:02.766238] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:28.503 03:15:02 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:14:28.503 03:15:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:28.503 03:15:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:28.503 03:15:02 -- common/autotest_common.sh@10 -- # set +x 00:14:28.503 ************************************ 00:14:28.503 START TEST lvs_grow_clean 00:14:28.503 ************************************ 00:14:28.503 03:15:02 -- common/autotest_common.sh@1111 -- # lvs_grow 00:14:28.503 03:15:02 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:28.503 03:15:02 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:28.503 03:15:02 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:28.503 03:15:02 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:28.503 03:15:02 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:28.503 03:15:02 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:28.503 03:15:02 -- target/nvmf_lvs_grow.sh@23 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:28.503 03:15:02 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:28.503 03:15:02 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:28.761 03:15:03 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:28.761 03:15:03 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:29.019 03:15:03 -- target/nvmf_lvs_grow.sh@28 -- # lvs=7e37d01b-2749-47e1-aa26-24aa4eac4b0a 00:14:29.019 03:15:03 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7e37d01b-2749-47e1-aa26-24aa4eac4b0a 00:14:29.019 03:15:03 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:29.277 03:15:03 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:29.277 03:15:03 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:29.277 03:15:03 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7e37d01b-2749-47e1-aa26-24aa4eac4b0a lvol 150 00:14:29.534 03:15:03 -- target/nvmf_lvs_grow.sh@33 -- # lvol=75c9e816-6c2a-488d-8971-e80f7babe93c 00:14:29.534 03:15:03 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:29.534 03:15:03 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:29.792 [2024-04-25 03:15:04.191911] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:29.792 [2024-04-25 03:15:04.192004] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:29.792 true 00:14:29.792 03:15:04 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7e37d01b-2749-47e1-aa26-24aa4eac4b0a 00:14:29.792 03:15:04 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:30.050 03:15:04 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:30.050 03:15:04 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:30.308 03:15:04 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 75c9e816-6c2a-488d-8971-e80f7babe93c 00:14:30.566 03:15:04 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:30.824 [2024-04-25 03:15:05.158881] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:30.824 03:15:05 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:31.082 03:15:05 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1475463 00:14:31.082 03:15:05 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:31.082 03:15:05 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:31.082 03:15:05 -- 
target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1475463 /var/tmp/bdevperf.sock 00:14:31.082 03:15:05 -- common/autotest_common.sh@817 -- # '[' -z 1475463 ']' 00:14:31.082 03:15:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:31.082 03:15:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:31.082 03:15:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:31.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:31.082 03:15:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:31.082 03:15:05 -- common/autotest_common.sh@10 -- # set +x 00:14:31.082 [2024-04-25 03:15:05.452077] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:14:31.082 [2024-04-25 03:15:05.452147] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1475463 ] 00:14:31.082 EAL: No free 2048 kB hugepages reported on node 1 00:14:31.082 [2024-04-25 03:15:05.512767] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:31.340 [2024-04-25 03:15:05.627394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:31.340 03:15:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:31.340 03:15:05 -- common/autotest_common.sh@850 -- # return 0 00:14:31.340 03:15:05 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:31.906 Nvme0n1 00:14:31.906 03:15:06 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 
3000 00:14:32.164 [ 00:14:32.164 { 00:14:32.164 "name": "Nvme0n1", 00:14:32.164 "aliases": [ 00:14:32.164 "75c9e816-6c2a-488d-8971-e80f7babe93c" 00:14:32.164 ], 00:14:32.164 "product_name": "NVMe disk", 00:14:32.164 "block_size": 4096, 00:14:32.164 "num_blocks": 38912, 00:14:32.164 "uuid": "75c9e816-6c2a-488d-8971-e80f7babe93c", 00:14:32.164 "assigned_rate_limits": { 00:14:32.164 "rw_ios_per_sec": 0, 00:14:32.164 "rw_mbytes_per_sec": 0, 00:14:32.164 "r_mbytes_per_sec": 0, 00:14:32.164 "w_mbytes_per_sec": 0 00:14:32.164 }, 00:14:32.164 "claimed": false, 00:14:32.164 "zoned": false, 00:14:32.164 "supported_io_types": { 00:14:32.164 "read": true, 00:14:32.164 "write": true, 00:14:32.164 "unmap": true, 00:14:32.164 "write_zeroes": true, 00:14:32.164 "flush": true, 00:14:32.164 "reset": true, 00:14:32.164 "compare": true, 00:14:32.164 "compare_and_write": true, 00:14:32.164 "abort": true, 00:14:32.164 "nvme_admin": true, 00:14:32.164 "nvme_io": true 00:14:32.164 }, 00:14:32.164 "memory_domains": [ 00:14:32.164 { 00:14:32.164 "dma_device_id": "system", 00:14:32.164 "dma_device_type": 1 00:14:32.164 } 00:14:32.164 ], 00:14:32.164 "driver_specific": { 00:14:32.164 "nvme": [ 00:14:32.164 { 00:14:32.164 "trid": { 00:14:32.164 "trtype": "TCP", 00:14:32.164 "adrfam": "IPv4", 00:14:32.164 "traddr": "10.0.0.2", 00:14:32.164 "trsvcid": "4420", 00:14:32.164 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:32.164 }, 00:14:32.165 "ctrlr_data": { 00:14:32.165 "cntlid": 1, 00:14:32.165 "vendor_id": "0x8086", 00:14:32.165 "model_number": "SPDK bdev Controller", 00:14:32.165 "serial_number": "SPDK0", 00:14:32.165 "firmware_revision": "24.05", 00:14:32.165 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:32.165 "oacs": { 00:14:32.165 "security": 0, 00:14:32.165 "format": 0, 00:14:32.165 "firmware": 0, 00:14:32.165 "ns_manage": 0 00:14:32.165 }, 00:14:32.165 "multi_ctrlr": true, 00:14:32.165 "ana_reporting": false 00:14:32.165 }, 00:14:32.165 "vs": { 00:14:32.165 "nvme_version": "1.3" 
00:14:32.165 }, 00:14:32.165 "ns_data": { 00:14:32.165 "id": 1, 00:14:32.165 "can_share": true 00:14:32.165 } 00:14:32.165 } 00:14:32.165 ], 00:14:32.165 "mp_policy": "active_passive" 00:14:32.165 } 00:14:32.165 } 00:14:32.165 ] 00:14:32.165 03:15:06 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1475594 00:14:32.165 03:15:06 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:32.165 03:15:06 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:32.165 Running I/O for 10 seconds... 00:14:33.099 Latency(us) 00:14:33.099 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:33.099 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:33.099 Nvme0n1 : 1.00 12995.00 50.76 0.00 0.00 0.00 0.00 0.00 00:14:33.099 =================================================================================================================== 00:14:33.099 Total : 12995.00 50.76 0.00 0.00 0.00 0.00 0.00 00:14:33.099 00:14:34.048 03:15:08 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 7e37d01b-2749-47e1-aa26-24aa4eac4b0a 00:14:34.048 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:34.048 Nvme0n1 : 2.00 13185.50 51.51 0.00 0.00 0.00 0.00 0.00 00:14:34.048 =================================================================================================================== 00:14:34.048 Total : 13185.50 51.51 0.00 0.00 0.00 0.00 0.00 00:14:34.048 00:14:34.306 true 00:14:34.306 03:15:08 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7e37d01b-2749-47e1-aa26-24aa4eac4b0a 00:14:34.306 03:15:08 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:34.566 03:15:08 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:34.566 
03:15:08 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:34.566 03:15:08 -- target/nvmf_lvs_grow.sh@65 -- # wait 1475594 00:14:35.134 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:35.134 Nvme0n1 : 3.00 13305.00 51.97 0.00 0.00 0.00 0.00 0.00 00:14:35.134 =================================================================================================================== 00:14:35.134 Total : 13305.00 51.97 0.00 0.00 0.00 0.00 0.00 00:14:35.134 00:14:36.071 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:36.071 Nvme0n1 : 4.00 13324.75 52.05 0.00 0.00 0.00 0.00 0.00 00:14:36.071 =================================================================================================================== 00:14:36.071 Total : 13324.75 52.05 0.00 0.00 0.00 0.00 0.00 00:14:36.071 00:14:37.449 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:37.449 Nvme0n1 : 5.00 13362.20 52.20 0.00 0.00 0.00 0.00 0.00 00:14:37.449 =================================================================================================================== 00:14:37.449 Total : 13362.20 52.20 0.00 0.00 0.00 0.00 0.00 00:14:37.449 00:14:38.389 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:38.389 Nvme0n1 : 6.00 13424.50 52.44 0.00 0.00 0.00 0.00 0.00 00:14:38.389 =================================================================================================================== 00:14:38.389 Total : 13424.50 52.44 0.00 0.00 0.00 0.00 0.00 00:14:38.389 00:14:39.330 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:39.330 Nvme0n1 : 7.00 13449.57 52.54 0.00 0.00 0.00 0.00 0.00 00:14:39.330 =================================================================================================================== 00:14:39.330 Total : 13449.57 52.54 0.00 0.00 0.00 0.00 0.00 00:14:39.330 00:14:40.268 Job: Nvme0n1 (Core Mask 0x2, workload: 
randwrite, depth: 128, IO size: 4096) 00:14:40.268 Nvme0n1 : 8.00 13499.38 52.73 0.00 0.00 0.00 0.00 0.00 00:14:40.268 =================================================================================================================== 00:14:40.268 Total : 13499.38 52.73 0.00 0.00 0.00 0.00 0.00 00:14:40.268 00:14:41.209 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:41.209 Nvme0n1 : 9.00 13529.22 52.85 0.00 0.00 0.00 0.00 0.00 00:14:41.209 =================================================================================================================== 00:14:41.209 Total : 13529.22 52.85 0.00 0.00 0.00 0.00 0.00 00:14:41.209 00:14:42.145 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:42.145 Nvme0n1 : 10.00 13549.90 52.93 0.00 0.00 0.00 0.00 0.00 00:14:42.145 =================================================================================================================== 00:14:42.145 Total : 13549.90 52.93 0.00 0.00 0.00 0.00 0.00 00:14:42.145 00:14:42.145 00:14:42.145 Latency(us) 00:14:42.145 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:42.145 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:42.145 Nvme0n1 : 10.01 13550.24 52.93 0.00 0.00 9438.22 3835.07 13883.92 00:14:42.145 =================================================================================================================== 00:14:42.145 Total : 13550.24 52.93 0.00 0.00 9438.22 3835.07 13883.92 00:14:42.145 0 00:14:42.145 03:15:16 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1475463 00:14:42.145 03:15:16 -- common/autotest_common.sh@936 -- # '[' -z 1475463 ']' 00:14:42.145 03:15:16 -- common/autotest_common.sh@940 -- # kill -0 1475463 00:14:42.145 03:15:16 -- common/autotest_common.sh@941 -- # uname 00:14:42.145 03:15:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:42.145 03:15:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o 
comm= 1475463 00:14:42.145 03:15:16 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:42.145 03:15:16 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:42.145 03:15:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1475463' 00:14:42.145 killing process with pid 1475463 00:14:42.145 03:15:16 -- common/autotest_common.sh@955 -- # kill 1475463 00:14:42.145 Received shutdown signal, test time was about 10.000000 seconds 00:14:42.145 00:14:42.145 Latency(us) 00:14:42.145 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:42.145 =================================================================================================================== 00:14:42.145 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:42.145 03:15:16 -- common/autotest_common.sh@960 -- # wait 1475463 00:14:42.403 03:15:16 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:42.661 03:15:17 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7e37d01b-2749-47e1-aa26-24aa4eac4b0a 00:14:42.661 03:15:17 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:14:42.920 03:15:17 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:14:42.920 03:15:17 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:14:42.920 03:15:17 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:43.180 [2024-04-25 03:15:17.621396] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:43.180 03:15:17 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7e37d01b-2749-47e1-aa26-24aa4eac4b0a 00:14:43.180 03:15:17 -- common/autotest_common.sh@638 -- # local 
es=0 00:14:43.180 03:15:17 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7e37d01b-2749-47e1-aa26-24aa4eac4b0a 00:14:43.180 03:15:17 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:43.180 03:15:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:43.180 03:15:17 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:43.180 03:15:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:43.180 03:15:17 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:43.180 03:15:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:43.180 03:15:17 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:43.180 03:15:17 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:43.180 03:15:17 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7e37d01b-2749-47e1-aa26-24aa4eac4b0a 00:14:43.440 request: 00:14:43.440 { 00:14:43.440 "uuid": "7e37d01b-2749-47e1-aa26-24aa4eac4b0a", 00:14:43.440 "method": "bdev_lvol_get_lvstores", 00:14:43.440 "req_id": 1 00:14:43.440 } 00:14:43.440 Got JSON-RPC error response 00:14:43.440 response: 00:14:43.440 { 00:14:43.440 "code": -19, 00:14:43.440 "message": "No such device" 00:14:43.440 } 00:14:43.440 03:15:17 -- common/autotest_common.sh@641 -- # es=1 00:14:43.440 03:15:17 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:43.440 03:15:17 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:43.440 03:15:17 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:43.440 03:15:17 -- 
target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:43.700 aio_bdev 00:14:43.700 03:15:18 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 75c9e816-6c2a-488d-8971-e80f7babe93c 00:14:43.700 03:15:18 -- common/autotest_common.sh@885 -- # local bdev_name=75c9e816-6c2a-488d-8971-e80f7babe93c 00:14:43.700 03:15:18 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:14:43.700 03:15:18 -- common/autotest_common.sh@887 -- # local i 00:14:43.700 03:15:18 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:14:43.700 03:15:18 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:14:43.700 03:15:18 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:43.959 03:15:18 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 75c9e816-6c2a-488d-8971-e80f7babe93c -t 2000 00:14:44.217 [ 00:14:44.217 { 00:14:44.217 "name": "75c9e816-6c2a-488d-8971-e80f7babe93c", 00:14:44.217 "aliases": [ 00:14:44.217 "lvs/lvol" 00:14:44.217 ], 00:14:44.217 "product_name": "Logical Volume", 00:14:44.217 "block_size": 4096, 00:14:44.217 "num_blocks": 38912, 00:14:44.217 "uuid": "75c9e816-6c2a-488d-8971-e80f7babe93c", 00:14:44.217 "assigned_rate_limits": { 00:14:44.217 "rw_ios_per_sec": 0, 00:14:44.217 "rw_mbytes_per_sec": 0, 00:14:44.217 "r_mbytes_per_sec": 0, 00:14:44.218 "w_mbytes_per_sec": 0 00:14:44.218 }, 00:14:44.218 "claimed": false, 00:14:44.218 "zoned": false, 00:14:44.218 "supported_io_types": { 00:14:44.218 "read": true, 00:14:44.218 "write": true, 00:14:44.218 "unmap": true, 00:14:44.218 "write_zeroes": true, 00:14:44.218 "flush": false, 00:14:44.218 "reset": true, 00:14:44.218 "compare": false, 00:14:44.218 "compare_and_write": false, 00:14:44.218 "abort": false, 00:14:44.218 
"nvme_admin": false, 00:14:44.218 "nvme_io": false 00:14:44.218 }, 00:14:44.218 "driver_specific": { 00:14:44.218 "lvol": { 00:14:44.218 "lvol_store_uuid": "7e37d01b-2749-47e1-aa26-24aa4eac4b0a", 00:14:44.218 "base_bdev": "aio_bdev", 00:14:44.218 "thin_provision": false, 00:14:44.218 "snapshot": false, 00:14:44.218 "clone": false, 00:14:44.218 "esnap_clone": false 00:14:44.218 } 00:14:44.218 } 00:14:44.218 } 00:14:44.218 ] 00:14:44.218 03:15:18 -- common/autotest_common.sh@893 -- # return 0 00:14:44.218 03:15:18 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:14:44.218 03:15:18 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7e37d01b-2749-47e1-aa26-24aa4eac4b0a 00:14:44.476 03:15:18 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:14:44.476 03:15:18 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7e37d01b-2749-47e1-aa26-24aa4eac4b0a 00:14:44.476 03:15:18 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:14:44.737 03:15:19 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:14:44.737 03:15:19 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 75c9e816-6c2a-488d-8971-e80f7babe93c 00:14:44.995 03:15:19 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7e37d01b-2749-47e1-aa26-24aa4eac4b0a 00:14:45.256 03:15:19 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:45.523 03:15:19 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:45.523 00:14:45.523 real 0m17.027s 00:14:45.523 user 0m16.526s 00:14:45.523 sys 0m1.894s 00:14:45.523 03:15:19 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:14:45.523 03:15:19 -- common/autotest_common.sh@10 -- # set +x 00:14:45.523 ************************************ 00:14:45.523 END TEST lvs_grow_clean 00:14:45.523 ************************************ 00:14:45.523 03:15:19 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:14:45.523 03:15:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:45.523 03:15:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:45.523 03:15:19 -- common/autotest_common.sh@10 -- # set +x 00:14:45.782 ************************************ 00:14:45.782 START TEST lvs_grow_dirty 00:14:45.782 ************************************ 00:14:45.782 03:15:20 -- common/autotest_common.sh@1111 -- # lvs_grow dirty 00:14:45.782 03:15:20 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:45.782 03:15:20 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:45.782 03:15:20 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:45.782 03:15:20 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:45.782 03:15:20 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:45.782 03:15:20 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:45.782 03:15:20 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:45.782 03:15:20 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:45.782 03:15:20 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:46.041 03:15:20 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:46.041 03:15:20 -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:46.300 03:15:20 -- target/nvmf_lvs_grow.sh@28 -- # lvs=4b1b9416-3d36-4a74-8226-a5983effa4c7 00:14:46.300 03:15:20 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4b1b9416-3d36-4a74-8226-a5983effa4c7 00:14:46.300 03:15:20 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:46.559 03:15:20 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:46.559 03:15:20 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:46.559 03:15:20 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4b1b9416-3d36-4a74-8226-a5983effa4c7 lvol 150 00:14:46.819 03:15:21 -- target/nvmf_lvs_grow.sh@33 -- # lvol=b3911868-e296-427a-9b5f-693c920be35d 00:14:46.819 03:15:21 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:46.819 03:15:21 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:47.078 [2024-04-25 03:15:21.430029] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:47.078 [2024-04-25 03:15:21.430127] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:47.078 true 00:14:47.078 03:15:21 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4b1b9416-3d36-4a74-8226-a5983effa4c7 00:14:47.078 03:15:21 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:47.336 03:15:21 -- target/nvmf_lvs_grow.sh@38 -- # (( 
data_clusters == 49 )) 00:14:47.336 03:15:21 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:47.596 03:15:21 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b3911868-e296-427a-9b5f-693c920be35d 00:14:47.857 03:15:22 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:48.118 03:15:22 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:48.377 03:15:22 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1478019 00:14:48.377 03:15:22 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:48.377 03:15:22 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1478019 /var/tmp/bdevperf.sock 00:14:48.377 03:15:22 -- common/autotest_common.sh@817 -- # '[' -z 1478019 ']' 00:14:48.377 03:15:22 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:48.377 03:15:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:48.377 03:15:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:48.377 03:15:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:48.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:14:48.377 03:15:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:48.377 03:15:22 -- common/autotest_common.sh@10 -- # set +x 00:14:48.377 [2024-04-25 03:15:22.770774] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:14:48.377 [2024-04-25 03:15:22.770861] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1478019 ] 00:14:48.377 EAL: No free 2048 kB hugepages reported on node 1 00:14:48.377 [2024-04-25 03:15:22.832854] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:48.637 [2024-04-25 03:15:22.951278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:48.637 03:15:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:48.637 03:15:23 -- common/autotest_common.sh@850 -- # return 0 00:14:48.637 03:15:23 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:49.204 Nvme0n1 00:14:49.204 03:15:23 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:49.464 [ 00:14:49.464 { 00:14:49.464 "name": "Nvme0n1", 00:14:49.464 "aliases": [ 00:14:49.464 "b3911868-e296-427a-9b5f-693c920be35d" 00:14:49.464 ], 00:14:49.464 "product_name": "NVMe disk", 00:14:49.464 "block_size": 4096, 00:14:49.464 "num_blocks": 38912, 00:14:49.465 "uuid": "b3911868-e296-427a-9b5f-693c920be35d", 00:14:49.465 "assigned_rate_limits": { 00:14:49.465 "rw_ios_per_sec": 0, 00:14:49.465 "rw_mbytes_per_sec": 0, 00:14:49.465 "r_mbytes_per_sec": 0, 00:14:49.465 "w_mbytes_per_sec": 0 00:14:49.465 }, 00:14:49.465 "claimed": false, 00:14:49.465 "zoned": false, 00:14:49.465 
"supported_io_types": { 00:14:49.465 "read": true, 00:14:49.465 "write": true, 00:14:49.465 "unmap": true, 00:14:49.465 "write_zeroes": true, 00:14:49.465 "flush": true, 00:14:49.465 "reset": true, 00:14:49.465 "compare": true, 00:14:49.465 "compare_and_write": true, 00:14:49.465 "abort": true, 00:14:49.465 "nvme_admin": true, 00:14:49.465 "nvme_io": true 00:14:49.465 }, 00:14:49.465 "memory_domains": [ 00:14:49.465 { 00:14:49.465 "dma_device_id": "system", 00:14:49.465 "dma_device_type": 1 00:14:49.465 } 00:14:49.465 ], 00:14:49.465 "driver_specific": { 00:14:49.465 "nvme": [ 00:14:49.465 { 00:14:49.465 "trid": { 00:14:49.465 "trtype": "TCP", 00:14:49.465 "adrfam": "IPv4", 00:14:49.465 "traddr": "10.0.0.2", 00:14:49.465 "trsvcid": "4420", 00:14:49.465 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:49.465 }, 00:14:49.465 "ctrlr_data": { 00:14:49.465 "cntlid": 1, 00:14:49.465 "vendor_id": "0x8086", 00:14:49.465 "model_number": "SPDK bdev Controller", 00:14:49.465 "serial_number": "SPDK0", 00:14:49.465 "firmware_revision": "24.05", 00:14:49.465 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:49.465 "oacs": { 00:14:49.465 "security": 0, 00:14:49.465 "format": 0, 00:14:49.465 "firmware": 0, 00:14:49.465 "ns_manage": 0 00:14:49.465 }, 00:14:49.465 "multi_ctrlr": true, 00:14:49.465 "ana_reporting": false 00:14:49.465 }, 00:14:49.465 "vs": { 00:14:49.465 "nvme_version": "1.3" 00:14:49.465 }, 00:14:49.465 "ns_data": { 00:14:49.465 "id": 1, 00:14:49.465 "can_share": true 00:14:49.465 } 00:14:49.465 } 00:14:49.465 ], 00:14:49.465 "mp_policy": "active_passive" 00:14:49.465 } 00:14:49.465 } 00:14:49.465 ] 00:14:49.465 03:15:23 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1478155 00:14:49.465 03:15:23 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:49.465 03:15:23 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:49.465 Running I/O for 10 seconds... 
00:14:50.841 Latency(us) 00:14:50.841 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:50.841 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:50.841 Nvme0n1 : 1.00 12947.00 50.57 0.00 0.00 0.00 0.00 0.00 00:14:50.841 =================================================================================================================== 00:14:50.841 Total : 12947.00 50.57 0.00 0.00 0.00 0.00 0.00 00:14:50.841 00:14:51.410 03:15:25 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 4b1b9416-3d36-4a74-8226-a5983effa4c7 00:14:51.410 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:51.410 Nvme0n1 : 2.00 13069.50 51.05 0.00 0.00 0.00 0.00 0.00 00:14:51.410 =================================================================================================================== 00:14:51.410 Total : 13069.50 51.05 0.00 0.00 0.00 0.00 0.00 00:14:51.410 00:14:51.669 true 00:14:51.669 03:15:26 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4b1b9416-3d36-4a74-8226-a5983effa4c7 00:14:51.669 03:15:26 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:51.927 03:15:26 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:51.927 03:15:26 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:51.928 03:15:26 -- target/nvmf_lvs_grow.sh@65 -- # wait 1478155 00:14:52.498 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:52.498 Nvme0n1 : 3.00 13139.67 51.33 0.00 0.00 0.00 0.00 0.00 00:14:52.498 =================================================================================================================== 00:14:52.498 Total : 13139.67 51.33 0.00 0.00 0.00 0.00 0.00 00:14:52.498 00:14:53.435 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:53.435 
Nvme0n1 : 4.00 13200.75 51.57 0.00 0.00 0.00 0.00 0.00 00:14:53.435 =================================================================================================================== 00:14:53.435 Total : 13200.75 51.57 0.00 0.00 0.00 0.00 0.00 00:14:53.435 00:14:54.816 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:54.816 Nvme0n1 : 5.00 13251.80 51.76 0.00 0.00 0.00 0.00 0.00 00:14:54.816 =================================================================================================================== 00:14:54.816 Total : 13251.80 51.76 0.00 0.00 0.00 0.00 0.00 00:14:54.816 00:14:55.756 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:55.756 Nvme0n1 : 6.00 13283.17 51.89 0.00 0.00 0.00 0.00 0.00 00:14:55.756 =================================================================================================================== 00:14:55.756 Total : 13283.17 51.89 0.00 0.00 0.00 0.00 0.00 00:14:55.756 00:14:56.694 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:56.694 Nvme0n1 : 7.00 13313.57 52.01 0.00 0.00 0.00 0.00 0.00 00:14:56.694 =================================================================================================================== 00:14:56.694 Total : 13313.57 52.01 0.00 0.00 0.00 0.00 0.00 00:14:56.694 00:14:57.641 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:57.641 Nvme0n1 : 8.00 13342.38 52.12 0.00 0.00 0.00 0.00 0.00 00:14:57.641 =================================================================================================================== 00:14:57.641 Total : 13342.38 52.12 0.00 0.00 0.00 0.00 0.00 00:14:57.641 00:14:58.580 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:58.580 Nvme0n1 : 9.00 13410.11 52.38 0.00 0.00 0.00 0.00 0.00 00:14:58.580 =================================================================================================================== 
00:14:58.580 Total : 13410.11 52.38 0.00 0.00 0.00 0.00 0.00 00:14:58.580 00:14:59.515 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:59.515 Nvme0n1 : 10.00 13427.50 52.45 0.00 0.00 0.00 0.00 0.00 00:14:59.515 =================================================================================================================== 00:14:59.515 Total : 13427.50 52.45 0.00 0.00 0.00 0.00 0.00 00:14:59.515 00:14:59.515 00:14:59.515 Latency(us) 00:14:59.515 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:59.515 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:59.515 Nvme0n1 : 10.01 13428.28 52.45 0.00 0.00 9524.04 4441.88 14563.56 00:14:59.515 =================================================================================================================== 00:14:59.515 Total : 13428.28 52.45 0.00 0.00 9524.04 4441.88 14563.56 00:14:59.515 0 00:14:59.515 03:15:33 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1478019 00:14:59.516 03:15:33 -- common/autotest_common.sh@936 -- # '[' -z 1478019 ']' 00:14:59.516 03:15:33 -- common/autotest_common.sh@940 -- # kill -0 1478019 00:14:59.516 03:15:33 -- common/autotest_common.sh@941 -- # uname 00:14:59.516 03:15:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:59.516 03:15:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1478019 00:14:59.516 03:15:33 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:59.516 03:15:33 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:59.516 03:15:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1478019' 00:14:59.516 killing process with pid 1478019 00:14:59.516 03:15:33 -- common/autotest_common.sh@955 -- # kill 1478019 00:14:59.516 Received shutdown signal, test time was about 10.000000 seconds 00:14:59.516 00:14:59.516 Latency(us) 00:14:59.516 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:14:59.516 =================================================================================================================== 00:14:59.516 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:59.516 03:15:33 -- common/autotest_common.sh@960 -- # wait 1478019 00:14:59.773 03:15:34 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:00.343 03:15:34 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4b1b9416-3d36-4a74-8226-a5983effa4c7 00:15:00.343 03:15:34 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:15:00.343 03:15:34 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:15:00.343 03:15:34 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:15:00.343 03:15:34 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 1475022 00:15:00.343 03:15:34 -- target/nvmf_lvs_grow.sh@74 -- # wait 1475022 00:15:00.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 1475022 Killed "${NVMF_APP[@]}" "$@" 00:15:00.343 03:15:34 -- target/nvmf_lvs_grow.sh@74 -- # true 00:15:00.343 03:15:34 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:15:00.343 03:15:34 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:00.343 03:15:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:00.343 03:15:34 -- common/autotest_common.sh@10 -- # set +x 00:15:00.343 03:15:34 -- nvmf/common.sh@470 -- # nvmfpid=1479481 00:15:00.343 03:15:34 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:00.343 03:15:34 -- nvmf/common.sh@471 -- # waitforlisten 1479481 00:15:00.343 03:15:34 -- common/autotest_common.sh@817 -- # '[' -z 1479481 ']' 00:15:00.343 03:15:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:00.343 
03:15:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:00.343 03:15:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:00.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:00.343 03:15:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:00.343 03:15:34 -- common/autotest_common.sh@10 -- # set +x 00:15:00.603 [2024-04-25 03:15:34.872290] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:15:00.603 [2024-04-25 03:15:34.872372] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:00.603 EAL: No free 2048 kB hugepages reported on node 1 00:15:00.603 [2024-04-25 03:15:34.938040] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:00.603 [2024-04-25 03:15:35.044513] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:00.603 [2024-04-25 03:15:35.044575] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:00.603 [2024-04-25 03:15:35.044588] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:00.603 [2024-04-25 03:15:35.044599] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:00.603 [2024-04-25 03:15:35.044623] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:00.603 [2024-04-25 03:15:35.044663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:00.862 03:15:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:00.862 03:15:35 -- common/autotest_common.sh@850 -- # return 0 00:15:00.862 03:15:35 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:00.862 03:15:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:00.862 03:15:35 -- common/autotest_common.sh@10 -- # set +x 00:15:00.862 03:15:35 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:00.862 03:15:35 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:01.121 [2024-04-25 03:15:35.456681] blobstore.c:4779:bs_recover: *NOTICE*: Performing recovery on blobstore 00:15:01.121 [2024-04-25 03:15:35.456827] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:15:01.121 [2024-04-25 03:15:35.456876] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:15:01.121 03:15:35 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:15:01.121 03:15:35 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev b3911868-e296-427a-9b5f-693c920be35d 00:15:01.121 03:15:35 -- common/autotest_common.sh@885 -- # local bdev_name=b3911868-e296-427a-9b5f-693c920be35d 00:15:01.121 03:15:35 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:15:01.121 03:15:35 -- common/autotest_common.sh@887 -- # local i 00:15:01.121 03:15:35 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:15:01.121 03:15:35 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:15:01.121 03:15:35 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:01.379 03:15:35 -- common/autotest_common.sh@892 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b3911868-e296-427a-9b5f-693c920be35d -t 2000 00:15:01.640 [ 00:15:01.640 { 00:15:01.640 "name": "b3911868-e296-427a-9b5f-693c920be35d", 00:15:01.640 "aliases": [ 00:15:01.640 "lvs/lvol" 00:15:01.640 ], 00:15:01.640 "product_name": "Logical Volume", 00:15:01.640 "block_size": 4096, 00:15:01.640 "num_blocks": 38912, 00:15:01.640 "uuid": "b3911868-e296-427a-9b5f-693c920be35d", 00:15:01.640 "assigned_rate_limits": { 00:15:01.640 "rw_ios_per_sec": 0, 00:15:01.640 "rw_mbytes_per_sec": 0, 00:15:01.640 "r_mbytes_per_sec": 0, 00:15:01.640 "w_mbytes_per_sec": 0 00:15:01.640 }, 00:15:01.640 "claimed": false, 00:15:01.640 "zoned": false, 00:15:01.640 "supported_io_types": { 00:15:01.640 "read": true, 00:15:01.640 "write": true, 00:15:01.640 "unmap": true, 00:15:01.640 "write_zeroes": true, 00:15:01.640 "flush": false, 00:15:01.640 "reset": true, 00:15:01.640 "compare": false, 00:15:01.640 "compare_and_write": false, 00:15:01.640 "abort": false, 00:15:01.640 "nvme_admin": false, 00:15:01.640 "nvme_io": false 00:15:01.640 }, 00:15:01.640 "driver_specific": { 00:15:01.640 "lvol": { 00:15:01.640 "lvol_store_uuid": "4b1b9416-3d36-4a74-8226-a5983effa4c7", 00:15:01.640 "base_bdev": "aio_bdev", 00:15:01.640 "thin_provision": false, 00:15:01.640 "snapshot": false, 00:15:01.640 "clone": false, 00:15:01.640 "esnap_clone": false 00:15:01.640 } 00:15:01.640 } 00:15:01.640 } 00:15:01.640 ] 00:15:01.640 03:15:35 -- common/autotest_common.sh@893 -- # return 0 00:15:01.640 03:15:36 -- target/nvmf_lvs_grow.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4b1b9416-3d36-4a74-8226-a5983effa4c7 00:15:01.640 03:15:36 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:15:01.900 03:15:36 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:15:01.900 03:15:36 -- target/nvmf_lvs_grow.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4b1b9416-3d36-4a74-8226-a5983effa4c7 00:15:01.900 03:15:36 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:15:02.159 03:15:36 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:15:02.159 03:15:36 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:02.417 [2024-04-25 03:15:36.757951] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:02.417 03:15:36 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4b1b9416-3d36-4a74-8226-a5983effa4c7 00:15:02.417 03:15:36 -- common/autotest_common.sh@638 -- # local es=0 00:15:02.417 03:15:36 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4b1b9416-3d36-4a74-8226-a5983effa4c7 00:15:02.417 03:15:36 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:02.417 03:15:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:02.417 03:15:36 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:02.417 03:15:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:02.417 03:15:36 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:02.417 03:15:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:02.417 03:15:36 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:02.417 03:15:36 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:02.417 
03:15:36 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4b1b9416-3d36-4a74-8226-a5983effa4c7 00:15:02.676 request: 00:15:02.676 { 00:15:02.676 "uuid": "4b1b9416-3d36-4a74-8226-a5983effa4c7", 00:15:02.676 "method": "bdev_lvol_get_lvstores", 00:15:02.676 "req_id": 1 00:15:02.676 } 00:15:02.676 Got JSON-RPC error response 00:15:02.676 response: 00:15:02.676 { 00:15:02.676 "code": -19, 00:15:02.676 "message": "No such device" 00:15:02.676 } 00:15:02.676 03:15:37 -- common/autotest_common.sh@641 -- # es=1 00:15:02.676 03:15:37 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:02.676 03:15:37 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:02.676 03:15:37 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:02.676 03:15:37 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:02.934 aio_bdev 00:15:02.934 03:15:37 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev b3911868-e296-427a-9b5f-693c920be35d 00:15:02.934 03:15:37 -- common/autotest_common.sh@885 -- # local bdev_name=b3911868-e296-427a-9b5f-693c920be35d 00:15:02.934 03:15:37 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:15:02.934 03:15:37 -- common/autotest_common.sh@887 -- # local i 00:15:02.934 03:15:37 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:15:02.934 03:15:37 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:15:02.934 03:15:37 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:03.194 03:15:37 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b3911868-e296-427a-9b5f-693c920be35d -t 2000 00:15:03.454 [ 00:15:03.454 { 00:15:03.454 "name": 
"b3911868-e296-427a-9b5f-693c920be35d", 00:15:03.454 "aliases": [ 00:15:03.454 "lvs/lvol" 00:15:03.454 ], 00:15:03.454 "product_name": "Logical Volume", 00:15:03.454 "block_size": 4096, 00:15:03.454 "num_blocks": 38912, 00:15:03.454 "uuid": "b3911868-e296-427a-9b5f-693c920be35d", 00:15:03.454 "assigned_rate_limits": { 00:15:03.454 "rw_ios_per_sec": 0, 00:15:03.454 "rw_mbytes_per_sec": 0, 00:15:03.454 "r_mbytes_per_sec": 0, 00:15:03.454 "w_mbytes_per_sec": 0 00:15:03.454 }, 00:15:03.454 "claimed": false, 00:15:03.454 "zoned": false, 00:15:03.454 "supported_io_types": { 00:15:03.454 "read": true, 00:15:03.454 "write": true, 00:15:03.454 "unmap": true, 00:15:03.454 "write_zeroes": true, 00:15:03.454 "flush": false, 00:15:03.454 "reset": true, 00:15:03.454 "compare": false, 00:15:03.454 "compare_and_write": false, 00:15:03.454 "abort": false, 00:15:03.454 "nvme_admin": false, 00:15:03.454 "nvme_io": false 00:15:03.454 }, 00:15:03.454 "driver_specific": { 00:15:03.454 "lvol": { 00:15:03.454 "lvol_store_uuid": "4b1b9416-3d36-4a74-8226-a5983effa4c7", 00:15:03.454 "base_bdev": "aio_bdev", 00:15:03.454 "thin_provision": false, 00:15:03.454 "snapshot": false, 00:15:03.454 "clone": false, 00:15:03.454 "esnap_clone": false 00:15:03.454 } 00:15:03.454 } 00:15:03.454 } 00:15:03.454 ] 00:15:03.454 03:15:37 -- common/autotest_common.sh@893 -- # return 0 00:15:03.454 03:15:37 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4b1b9416-3d36-4a74-8226-a5983effa4c7 00:15:03.454 03:15:37 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:15:03.713 03:15:38 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:15:03.713 03:15:38 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4b1b9416-3d36-4a74-8226-a5983effa4c7 00:15:03.713 03:15:38 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:15:03.973 
03:15:38 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:15:03.973 03:15:38 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b3911868-e296-427a-9b5f-693c920be35d 00:15:04.232 03:15:38 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4b1b9416-3d36-4a74-8226-a5983effa4c7 00:15:04.491 03:15:38 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:04.750 03:15:39 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:05.011 00:15:05.011 real 0m19.218s 00:15:05.011 user 0m47.707s 00:15:05.011 sys 0m5.069s 00:15:05.011 03:15:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:05.011 03:15:39 -- common/autotest_common.sh@10 -- # set +x 00:15:05.011 ************************************ 00:15:05.011 END TEST lvs_grow_dirty 00:15:05.011 ************************************ 00:15:05.011 03:15:39 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:15:05.011 03:15:39 -- common/autotest_common.sh@794 -- # type=--id 00:15:05.011 03:15:39 -- common/autotest_common.sh@795 -- # id=0 00:15:05.011 03:15:39 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:15:05.011 03:15:39 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:05.011 03:15:39 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:15:05.011 03:15:39 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:15:05.011 03:15:39 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:15:05.011 03:15:39 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:05.011 nvmf_trace.0 00:15:05.011 03:15:39 -- common/autotest_common.sh@809 -- # 
return 0 00:15:05.011 03:15:39 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:15:05.011 03:15:39 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:05.011 03:15:39 -- nvmf/common.sh@117 -- # sync 00:15:05.011 03:15:39 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:05.011 03:15:39 -- nvmf/common.sh@120 -- # set +e 00:15:05.011 03:15:39 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:05.011 03:15:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:05.011 rmmod nvme_tcp 00:15:05.011 rmmod nvme_fabrics 00:15:05.011 rmmod nvme_keyring 00:15:05.011 03:15:39 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:05.011 03:15:39 -- nvmf/common.sh@124 -- # set -e 00:15:05.011 03:15:39 -- nvmf/common.sh@125 -- # return 0 00:15:05.011 03:15:39 -- nvmf/common.sh@478 -- # '[' -n 1479481 ']' 00:15:05.011 03:15:39 -- nvmf/common.sh@479 -- # killprocess 1479481 00:15:05.011 03:15:39 -- common/autotest_common.sh@936 -- # '[' -z 1479481 ']' 00:15:05.011 03:15:39 -- common/autotest_common.sh@940 -- # kill -0 1479481 00:15:05.011 03:15:39 -- common/autotest_common.sh@941 -- # uname 00:15:05.011 03:15:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:05.011 03:15:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1479481 00:15:05.011 03:15:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:05.011 03:15:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:05.011 03:15:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1479481' 00:15:05.011 killing process with pid 1479481 00:15:05.011 03:15:39 -- common/autotest_common.sh@955 -- # kill 1479481 00:15:05.011 03:15:39 -- common/autotest_common.sh@960 -- # wait 1479481 00:15:05.271 03:15:39 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:05.271 03:15:39 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:05.271 03:15:39 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:05.271 03:15:39 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:05.271 03:15:39 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:05.271 03:15:39 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:05.271 03:15:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:05.271 03:15:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:07.810 03:15:41 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:07.810 00:15:07.810 real 0m41.711s 00:15:07.810 user 1m10.178s 00:15:07.810 sys 0m8.880s 00:15:07.810 03:15:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:07.810 03:15:41 -- common/autotest_common.sh@10 -- # set +x 00:15:07.810 ************************************ 00:15:07.810 END TEST nvmf_lvs_grow 00:15:07.810 ************************************ 00:15:07.810 03:15:41 -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:07.810 03:15:41 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:07.810 03:15:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:07.810 03:15:41 -- common/autotest_common.sh@10 -- # set +x 00:15:07.810 ************************************ 00:15:07.810 START TEST nvmf_bdev_io_wait 00:15:07.810 ************************************ 00:15:07.810 03:15:41 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:07.810 * Looking for test storage... 
00:15:07.810 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:07.810 03:15:41 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:07.810 03:15:41 -- nvmf/common.sh@7 -- # uname -s 00:15:07.810 03:15:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:07.810 03:15:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:07.810 03:15:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:07.810 03:15:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:07.810 03:15:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:07.810 03:15:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:07.810 03:15:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:07.810 03:15:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:07.810 03:15:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:07.810 03:15:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:07.810 03:15:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:07.810 03:15:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:07.810 03:15:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:07.810 03:15:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:07.810 03:15:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:07.810 03:15:41 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:07.810 03:15:41 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:07.810 03:15:41 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:07.810 03:15:41 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:07.810 03:15:41 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:07.810 03:15:41 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.810 03:15:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.810 03:15:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.810 03:15:41 -- paths/export.sh@5 -- # export PATH 00:15:07.810 03:15:41 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.810 03:15:41 -- nvmf/common.sh@47 -- # : 0 00:15:07.810 03:15:41 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:07.810 03:15:41 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:07.810 03:15:41 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:07.810 03:15:41 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:07.810 03:15:41 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:07.810 03:15:41 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:07.810 03:15:41 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:07.810 03:15:41 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:07.810 03:15:41 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:07.810 03:15:41 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:07.810 03:15:41 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:15:07.810 03:15:41 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:07.810 03:15:41 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:07.810 03:15:41 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:07.810 03:15:41 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:07.810 03:15:41 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:07.810 03:15:41 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:07.810 03:15:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:07.810 03:15:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:07.810 
03:15:41 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:15:07.810 03:15:41 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:07.810 03:15:41 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:07.810 03:15:41 -- common/autotest_common.sh@10 -- # set +x 00:15:09.719 03:15:43 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:09.719 03:15:43 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:09.719 03:15:43 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:09.719 03:15:43 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:09.719 03:15:43 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:09.719 03:15:43 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:09.719 03:15:43 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:09.719 03:15:43 -- nvmf/common.sh@295 -- # net_devs=() 00:15:09.719 03:15:43 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:09.719 03:15:43 -- nvmf/common.sh@296 -- # e810=() 00:15:09.719 03:15:43 -- nvmf/common.sh@296 -- # local -ga e810 00:15:09.719 03:15:43 -- nvmf/common.sh@297 -- # x722=() 00:15:09.719 03:15:43 -- nvmf/common.sh@297 -- # local -ga x722 00:15:09.719 03:15:43 -- nvmf/common.sh@298 -- # mlx=() 00:15:09.719 03:15:43 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:09.719 03:15:43 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:09.719 03:15:43 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:09.719 03:15:43 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:09.719 03:15:43 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:09.719 03:15:43 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:09.719 03:15:43 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:09.719 03:15:43 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:09.719 03:15:43 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:09.719 03:15:43 
-- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:09.719 03:15:43 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:09.719 03:15:43 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:09.719 03:15:43 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:09.719 03:15:43 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:09.719 03:15:43 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:09.719 03:15:43 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:09.719 03:15:43 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:09.719 03:15:43 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:09.719 03:15:43 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:09.719 03:15:43 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:09.719 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:09.719 03:15:43 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:09.719 03:15:43 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:09.719 03:15:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:09.719 03:15:43 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:09.719 03:15:43 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:09.719 03:15:43 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:09.719 03:15:43 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:09.719 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:09.719 03:15:43 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:09.719 03:15:43 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:09.719 03:15:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:09.719 03:15:43 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:09.719 03:15:43 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:09.719 03:15:43 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:09.719 03:15:43 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:09.719 03:15:43 -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:09.719 03:15:43 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:09.719 03:15:43 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:09.719 03:15:43 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:09.719 03:15:43 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:09.719 03:15:43 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:09.719 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:09.719 03:15:43 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:09.719 03:15:43 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:09.719 03:15:43 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:09.719 03:15:43 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:09.719 03:15:43 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:09.719 03:15:43 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:09.719 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:09.719 03:15:43 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:09.719 03:15:43 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:09.719 03:15:43 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:09.719 03:15:43 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:09.719 03:15:43 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:09.719 03:15:43 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:09.719 03:15:43 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:09.719 03:15:43 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:09.719 03:15:43 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:09.719 03:15:43 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:09.720 03:15:43 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:09.720 03:15:43 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:09.720 03:15:43 -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:09.720 03:15:43 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:09.720 03:15:43 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:09.720 03:15:43 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:09.720 03:15:43 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:09.720 03:15:43 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:09.720 03:15:43 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:09.720 03:15:43 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:09.720 03:15:43 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:09.720 03:15:44 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:09.720 03:15:44 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:09.720 03:15:44 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:09.720 03:15:44 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:09.720 03:15:44 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:09.720 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:09.720 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:15:09.720 00:15:09.720 --- 10.0.0.2 ping statistics --- 00:15:09.720 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:09.720 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:15:09.720 03:15:44 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:09.720 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:09.720 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:15:09.720 00:15:09.720 --- 10.0.0.1 ping statistics --- 00:15:09.720 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:09.720 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:15:09.720 03:15:44 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:09.720 03:15:44 -- nvmf/common.sh@411 -- # return 0 00:15:09.720 03:15:44 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:09.720 03:15:44 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:09.720 03:15:44 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:09.720 03:15:44 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:09.720 03:15:44 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:09.720 03:15:44 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:09.720 03:15:44 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:09.720 03:15:44 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:15:09.720 03:15:44 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:09.720 03:15:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:09.720 03:15:44 -- common/autotest_common.sh@10 -- # set +x 00:15:09.720 03:15:44 -- nvmf/common.sh@470 -- # nvmfpid=1482012 00:15:09.720 03:15:44 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:15:09.720 03:15:44 -- nvmf/common.sh@471 -- # waitforlisten 1482012 00:15:09.720 03:15:44 -- common/autotest_common.sh@817 -- # '[' -z 1482012 ']' 00:15:09.720 03:15:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:09.720 03:15:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:09.720 03:15:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:09.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:09.720 03:15:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:09.720 03:15:44 -- common/autotest_common.sh@10 -- # set +x 00:15:09.720 [2024-04-25 03:15:44.137449] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:15:09.720 [2024-04-25 03:15:44.137524] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:09.720 EAL: No free 2048 kB hugepages reported on node 1 00:15:09.720 [2024-04-25 03:15:44.209889] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:09.979 [2024-04-25 03:15:44.331102] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:09.980 [2024-04-25 03:15:44.331169] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:09.980 [2024-04-25 03:15:44.331185] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:09.980 [2024-04-25 03:15:44.331197] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:09.980 [2024-04-25 03:15:44.331209] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:09.980 [2024-04-25 03:15:44.331292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:09.980 [2024-04-25 03:15:44.331360] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:09.980 [2024-04-25 03:15:44.331382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:09.980 [2024-04-25 03:15:44.331385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:09.980 03:15:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:09.980 03:15:44 -- common/autotest_common.sh@850 -- # return 0 00:15:09.980 03:15:44 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:09.980 03:15:44 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:09.980 03:15:44 -- common/autotest_common.sh@10 -- # set +x 00:15:09.980 03:15:44 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:09.980 03:15:44 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:15:09.980 03:15:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:09.980 03:15:44 -- common/autotest_common.sh@10 -- # set +x 00:15:09.980 03:15:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:09.980 03:15:44 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:15:09.980 03:15:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:09.980 03:15:44 -- common/autotest_common.sh@10 -- # set +x 00:15:10.238 03:15:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:10.238 03:15:44 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:10.238 03:15:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:10.238 03:15:44 -- common/autotest_common.sh@10 -- # set +x 00:15:10.238 [2024-04-25 03:15:44.487777] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:10.238 03:15:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:10.238 03:15:44 -- target/bdev_io_wait.sh@22 -- # 
rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:10.238 03:15:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:10.238 03:15:44 -- common/autotest_common.sh@10 -- # set +x 00:15:10.238 Malloc0 00:15:10.238 03:15:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:10.239 03:15:44 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:10.239 03:15:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:10.239 03:15:44 -- common/autotest_common.sh@10 -- # set +x 00:15:10.239 03:15:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:10.239 03:15:44 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:10.239 03:15:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:10.239 03:15:44 -- common/autotest_common.sh@10 -- # set +x 00:15:10.239 03:15:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:10.239 03:15:44 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:10.239 03:15:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:10.239 03:15:44 -- common/autotest_common.sh@10 -- # set +x 00:15:10.239 [2024-04-25 03:15:44.562281] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:10.239 03:15:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:10.239 03:15:44 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1482047 00:15:10.239 03:15:44 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:15:10.239 03:15:44 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:15:10.239 03:15:44 -- nvmf/common.sh@521 -- # config=() 00:15:10.239 03:15:44 -- nvmf/common.sh@521 -- # local subsystem config 00:15:10.239 03:15:44 -- target/bdev_io_wait.sh@30 -- # 
READ_PID=1482049 00:15:10.239 03:15:44 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:15:10.239 03:15:44 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:15:10.239 { 00:15:10.239 "params": { 00:15:10.239 "name": "Nvme$subsystem", 00:15:10.239 "trtype": "$TEST_TRANSPORT", 00:15:10.239 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:10.239 "adrfam": "ipv4", 00:15:10.239 "trsvcid": "$NVMF_PORT", 00:15:10.239 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:10.239 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:10.239 "hdgst": ${hdgst:-false}, 00:15:10.239 "ddgst": ${ddgst:-false} 00:15:10.239 }, 00:15:10.239 "method": "bdev_nvme_attach_controller" 00:15:10.239 } 00:15:10.239 EOF 00:15:10.239 )") 00:15:10.239 03:15:44 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:15:10.239 03:15:44 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:15:10.239 03:15:44 -- nvmf/common.sh@521 -- # config=() 00:15:10.239 03:15:44 -- nvmf/common.sh@521 -- # local subsystem config 00:15:10.239 03:15:44 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1482052 00:15:10.239 03:15:44 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:15:10.239 03:15:44 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:15:10.239 { 00:15:10.239 "params": { 00:15:10.239 "name": "Nvme$subsystem", 00:15:10.239 "trtype": "$TEST_TRANSPORT", 00:15:10.239 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:10.239 "adrfam": "ipv4", 00:15:10.239 "trsvcid": "$NVMF_PORT", 00:15:10.239 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:10.239 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:10.239 "hdgst": ${hdgst:-false}, 00:15:10.239 "ddgst": ${ddgst:-false} 00:15:10.239 }, 00:15:10.239 "method": "bdev_nvme_attach_controller" 00:15:10.239 } 00:15:10.239 EOF 00:15:10.239 )") 00:15:10.239 03:15:44 -- nvmf/common.sh@543 -- # cat 00:15:10.239 03:15:44 -- 
target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:15:10.239 03:15:44 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:15:10.239 03:15:44 -- nvmf/common.sh@521 -- # config=() 00:15:10.239 03:15:44 -- nvmf/common.sh@521 -- # local subsystem config 00:15:10.239 03:15:44 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1482055 00:15:10.239 03:15:44 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:15:10.239 03:15:44 -- target/bdev_io_wait.sh@35 -- # sync 00:15:10.239 03:15:44 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:15:10.239 { 00:15:10.239 "params": { 00:15:10.239 "name": "Nvme$subsystem", 00:15:10.239 "trtype": "$TEST_TRANSPORT", 00:15:10.239 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:10.239 "adrfam": "ipv4", 00:15:10.239 "trsvcid": "$NVMF_PORT", 00:15:10.239 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:10.239 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:10.239 "hdgst": ${hdgst:-false}, 00:15:10.239 "ddgst": ${ddgst:-false} 00:15:10.239 }, 00:15:10.239 "method": "bdev_nvme_attach_controller" 00:15:10.239 } 00:15:10.239 EOF 00:15:10.239 )") 00:15:10.239 03:15:44 -- nvmf/common.sh@543 -- # cat 00:15:10.239 03:15:44 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:15:10.239 03:15:44 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:15:10.239 03:15:44 -- nvmf/common.sh@521 -- # config=() 00:15:10.239 03:15:44 -- nvmf/common.sh@521 -- # local subsystem config 00:15:10.239 03:15:44 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:15:10.239 03:15:44 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:15:10.239 { 00:15:10.239 "params": { 00:15:10.239 "name": "Nvme$subsystem", 00:15:10.239 "trtype": "$TEST_TRANSPORT", 00:15:10.239 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:15:10.239 "adrfam": "ipv4", 00:15:10.239 "trsvcid": "$NVMF_PORT", 00:15:10.239 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:10.239 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:10.239 "hdgst": ${hdgst:-false}, 00:15:10.239 "ddgst": ${ddgst:-false} 00:15:10.239 }, 00:15:10.239 "method": "bdev_nvme_attach_controller" 00:15:10.239 } 00:15:10.239 EOF 00:15:10.239 )") 00:15:10.239 03:15:44 -- nvmf/common.sh@543 -- # cat 00:15:10.239 03:15:44 -- nvmf/common.sh@545 -- # jq . 00:15:10.239 03:15:44 -- nvmf/common.sh@543 -- # cat 00:15:10.239 03:15:44 -- target/bdev_io_wait.sh@37 -- # wait 1482047 00:15:10.239 03:15:44 -- nvmf/common.sh@545 -- # jq . 00:15:10.239 03:15:44 -- nvmf/common.sh@546 -- # IFS=, 00:15:10.239 03:15:44 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:15:10.239 "params": { 00:15:10.239 "name": "Nvme1", 00:15:10.239 "trtype": "tcp", 00:15:10.239 "traddr": "10.0.0.2", 00:15:10.239 "adrfam": "ipv4", 00:15:10.239 "trsvcid": "4420", 00:15:10.239 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:10.239 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:10.239 "hdgst": false, 00:15:10.239 "ddgst": false 00:15:10.239 }, 00:15:10.239 "method": "bdev_nvme_attach_controller" 00:15:10.239 }' 00:15:10.239 03:15:44 -- nvmf/common.sh@545 -- # jq . 00:15:10.239 03:15:44 -- nvmf/common.sh@546 -- # IFS=, 00:15:10.239 03:15:44 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:15:10.239 "params": { 00:15:10.239 "name": "Nvme1", 00:15:10.239 "trtype": "tcp", 00:15:10.239 "traddr": "10.0.0.2", 00:15:10.239 "adrfam": "ipv4", 00:15:10.239 "trsvcid": "4420", 00:15:10.239 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:10.239 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:10.239 "hdgst": false, 00:15:10.239 "ddgst": false 00:15:10.239 }, 00:15:10.239 "method": "bdev_nvme_attach_controller" 00:15:10.239 }' 00:15:10.239 03:15:44 -- nvmf/common.sh@545 -- # jq . 
00:15:10.239 03:15:44 -- nvmf/common.sh@546 -- # IFS=, 00:15:10.239 03:15:44 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:15:10.239 "params": { 00:15:10.239 "name": "Nvme1", 00:15:10.239 "trtype": "tcp", 00:15:10.239 "traddr": "10.0.0.2", 00:15:10.240 "adrfam": "ipv4", 00:15:10.240 "trsvcid": "4420", 00:15:10.240 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:10.240 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:10.240 "hdgst": false, 00:15:10.240 "ddgst": false 00:15:10.240 }, 00:15:10.240 "method": "bdev_nvme_attach_controller" 00:15:10.240 }' 00:15:10.240 03:15:44 -- nvmf/common.sh@546 -- # IFS=, 00:15:10.240 03:15:44 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:15:10.240 "params": { 00:15:10.240 "name": "Nvme1", 00:15:10.240 "trtype": "tcp", 00:15:10.240 "traddr": "10.0.0.2", 00:15:10.240 "adrfam": "ipv4", 00:15:10.240 "trsvcid": "4420", 00:15:10.240 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:10.240 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:10.240 "hdgst": false, 00:15:10.240 "ddgst": false 00:15:10.240 }, 00:15:10.240 "method": "bdev_nvme_attach_controller" 00:15:10.240 }' 00:15:10.240 [2024-04-25 03:15:44.605682] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:15:10.240 [2024-04-25 03:15:44.605682] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:15:10.240 [2024-04-25 03:15:44.605773] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:15:10.240 [2024-04-25 03:15:44.605773] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:15:10.240 [2024-04-25 03:15:44.607503] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization...
00:15:10.240 [2024-04-25 03:15:44.607503] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... [2024-04-25 03:15:44.607583] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:15:10.240 [2024-04-25 03:15:44.607584] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:15:10.240 EAL: No free 2048 kB hugepages reported on node 1 00:15:10.497 EAL: No free 2048 kB hugepages reported on node 1 00:15:10.497 [2024-04-25 03:15:44.785716] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:10.497 EAL: No free 2048 kB hugepages reported on node 1 00:15:10.497 [2024-04-25 03:15:44.880148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:15:10.497 [2024-04-25 03:15:44.889992] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:10.497 EAL: No free 2048 kB hugepages reported on node 1 00:15:10.497 [2024-04-25 03:15:44.955704] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:10.497 [2024-04-25 03:15:44.983582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:15:10.756 [2024-04-25 03:15:45.032691] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:10.756 [2024-04-25 03:15:45.046837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:15:10.756 [2024-04-25 03:15:45.120677] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:15:10.756 Running I/O for 1 seconds... 00:15:10.756 Running I/O for 1 seconds... 00:15:10.756 Running I/O for 1 seconds... 00:15:11.016 Running I/O for 1 seconds...
00:15:11.952 00:15:11.952 Latency(us) 00:15:11.952 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:11.952 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:15:11.953 Nvme1n1 : 1.01 9984.10 39.00 0.00 0.00 12784.01 5437.06 28350.39 00:15:11.953 =================================================================================================================== 00:15:11.953 Total : 9984.10 39.00 0.00 0.00 12784.01 5437.06 28350.39 00:15:11.953 00:15:11.953 Latency(us) 00:15:11.953 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:11.953 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:15:11.953 Nvme1n1 : 1.01 6863.12 26.81 0.00 0.00 18503.71 6505.05 26020.22 00:15:11.953 =================================================================================================================== 00:15:11.953 Total : 6863.12 26.81 0.00 0.00 18503.71 6505.05 26020.22 00:15:11.953 00:15:11.953 Latency(us) 00:15:11.953 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:11.953 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:15:11.953 Nvme1n1 : 1.00 200763.37 784.23 0.00 0.00 635.09 251.83 776.72 00:15:11.953 =================================================================================================================== 00:15:11.953 Total : 200763.37 784.23 0.00 0.00 635.09 251.83 776.72 00:15:11.953 00:15:11.953 Latency(us) 00:15:11.953 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:11.953 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:15:11.953 Nvme1n1 : 1.01 7312.21 28.56 0.00 0.00 17448.19 5704.06 42525.58 00:15:11.953 =================================================================================================================== 00:15:11.953 Total : 7312.21 28.56 0.00 0.00 17448.19 5704.06 42525.58 00:15:12.211 03:15:46 -- target/bdev_io_wait.sh@38 -- # 
wait 1482049 00:15:12.211 03:15:46 -- target/bdev_io_wait.sh@39 -- # wait 1482052 00:15:12.211 03:15:46 -- target/bdev_io_wait.sh@40 -- # wait 1482055 00:15:12.211 03:15:46 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:12.211 03:15:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:12.211 03:15:46 -- common/autotest_common.sh@10 -- # set +x 00:15:12.211 03:15:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:12.211 03:15:46 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:15:12.211 03:15:46 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:15:12.211 03:15:46 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:12.211 03:15:46 -- nvmf/common.sh@117 -- # sync 00:15:12.211 03:15:46 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:12.211 03:15:46 -- nvmf/common.sh@120 -- # set +e 00:15:12.211 03:15:46 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:12.211 03:15:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:12.211 rmmod nvme_tcp 00:15:12.211 rmmod nvme_fabrics 00:15:12.471 rmmod nvme_keyring 00:15:12.471 03:15:46 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:12.471 03:15:46 -- nvmf/common.sh@124 -- # set -e 00:15:12.471 03:15:46 -- nvmf/common.sh@125 -- # return 0 00:15:12.471 03:15:46 -- nvmf/common.sh@478 -- # '[' -n 1482012 ']' 00:15:12.471 03:15:46 -- nvmf/common.sh@479 -- # killprocess 1482012 00:15:12.471 03:15:46 -- common/autotest_common.sh@936 -- # '[' -z 1482012 ']' 00:15:12.471 03:15:46 -- common/autotest_common.sh@940 -- # kill -0 1482012 00:15:12.471 03:15:46 -- common/autotest_common.sh@941 -- # uname 00:15:12.471 03:15:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:12.471 03:15:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1482012 00:15:12.471 03:15:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:12.471 03:15:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 
00:15:12.471 03:15:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1482012' 00:15:12.471 killing process with pid 1482012 00:15:12.471 03:15:46 -- common/autotest_common.sh@955 -- # kill 1482012 00:15:12.471 03:15:46 -- common/autotest_common.sh@960 -- # wait 1482012 00:15:12.731 03:15:47 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:12.731 03:15:47 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:12.731 03:15:47 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:12.731 03:15:47 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:12.731 03:15:47 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:12.731 03:15:47 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:12.731 03:15:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:12.731 03:15:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:14.638 03:15:49 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:14.638 00:15:14.638 real 0m7.196s 00:15:14.638 user 0m16.022s 00:15:14.638 sys 0m3.330s 00:15:14.638 03:15:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:14.638 03:15:49 -- common/autotest_common.sh@10 -- # set +x 00:15:14.638 ************************************ 00:15:14.638 END TEST nvmf_bdev_io_wait 00:15:14.638 ************************************ 00:15:14.638 03:15:49 -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:14.638 03:15:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:14.638 03:15:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:14.638 03:15:49 -- common/autotest_common.sh@10 -- # set +x 00:15:14.897 ************************************ 00:15:14.897 START TEST nvmf_queue_depth 00:15:14.897 ************************************ 00:15:14.898 03:15:49 -- common/autotest_common.sh@1111 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:14.898 * Looking for test storage... 00:15:14.898 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:14.898 03:15:49 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:14.898 03:15:49 -- nvmf/common.sh@7 -- # uname -s 00:15:14.898 03:15:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:14.898 03:15:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:14.898 03:15:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:14.898 03:15:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:14.898 03:15:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:14.898 03:15:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:14.898 03:15:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:14.898 03:15:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:14.898 03:15:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:14.898 03:15:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:14.898 03:15:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:14.898 03:15:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:14.898 03:15:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:14.898 03:15:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:14.898 03:15:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:14.898 03:15:49 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:14.898 03:15:49 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:14.898 03:15:49 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:14.898 03:15:49 -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:14.898 03:15:49 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:14.898 03:15:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.898 03:15:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.898 03:15:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.898 03:15:49 -- paths/export.sh@5 -- # export PATH 00:15:14.898 03:15:49 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.898 03:15:49 -- nvmf/common.sh@47 -- # : 0 00:15:14.898 03:15:49 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:14.898 03:15:49 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:14.898 03:15:49 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:14.898 03:15:49 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:14.898 03:15:49 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:14.898 03:15:49 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:14.898 03:15:49 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:14.898 03:15:49 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:14.898 03:15:49 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:15:14.898 03:15:49 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:15:14.898 03:15:49 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:14.898 03:15:49 -- target/queue_depth.sh@19 -- # nvmftestinit 00:15:14.898 03:15:49 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:14.898 03:15:49 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:14.898 03:15:49 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:14.898 03:15:49 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:14.898 03:15:49 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:14.898 03:15:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:14.898 03:15:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:15:14.898 03:15:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:14.898 03:15:49 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:15:14.898 03:15:49 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:14.898 03:15:49 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:14.898 03:15:49 -- common/autotest_common.sh@10 -- # set +x 00:15:16.804 03:15:51 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:16.804 03:15:51 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:16.804 03:15:51 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:16.804 03:15:51 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:16.804 03:15:51 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:16.804 03:15:51 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:16.804 03:15:51 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:16.804 03:15:51 -- nvmf/common.sh@295 -- # net_devs=() 00:15:16.804 03:15:51 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:16.804 03:15:51 -- nvmf/common.sh@296 -- # e810=() 00:15:16.804 03:15:51 -- nvmf/common.sh@296 -- # local -ga e810 00:15:16.804 03:15:51 -- nvmf/common.sh@297 -- # x722=() 00:15:16.804 03:15:51 -- nvmf/common.sh@297 -- # local -ga x722 00:15:16.804 03:15:51 -- nvmf/common.sh@298 -- # mlx=() 00:15:16.804 03:15:51 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:16.804 03:15:51 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:16.804 03:15:51 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:16.804 03:15:51 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:16.804 03:15:51 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:16.804 03:15:51 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:16.804 03:15:51 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:16.804 03:15:51 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:16.804 03:15:51 -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:16.804 03:15:51 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:16.804 03:15:51 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:16.804 03:15:51 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:16.804 03:15:51 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:16.804 03:15:51 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:16.804 03:15:51 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:16.804 03:15:51 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:16.804 03:15:51 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:16.804 03:15:51 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:16.804 03:15:51 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:16.804 03:15:51 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:16.804 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:16.804 03:15:51 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:16.804 03:15:51 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:16.804 03:15:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:16.804 03:15:51 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:16.804 03:15:51 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:16.804 03:15:51 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:16.804 03:15:51 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:16.804 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:16.804 03:15:51 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:16.804 03:15:51 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:16.804 03:15:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:16.804 03:15:51 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:16.804 03:15:51 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:16.804 03:15:51 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:16.804 
03:15:51 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:16.804 03:15:51 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:16.804 03:15:51 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:16.804 03:15:51 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:16.804 03:15:51 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:16.804 03:15:51 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:16.804 03:15:51 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:16.804 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:16.804 03:15:51 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:16.804 03:15:51 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:16.804 03:15:51 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:16.804 03:15:51 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:16.804 03:15:51 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:16.804 03:15:51 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:16.804 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:16.804 03:15:51 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:16.804 03:15:51 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:16.804 03:15:51 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:16.804 03:15:51 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:16.804 03:15:51 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:16.804 03:15:51 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:16.804 03:15:51 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:16.804 03:15:51 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:16.804 03:15:51 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:16.804 03:15:51 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:16.804 03:15:51 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:16.804 03:15:51 -- 
nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:16.804 03:15:51 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:16.804 03:15:51 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:16.804 03:15:51 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:16.804 03:15:51 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:16.804 03:15:51 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:16.804 03:15:51 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:16.804 03:15:51 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:16.804 03:15:51 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:16.804 03:15:51 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:16.804 03:15:51 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:16.804 03:15:51 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:17.063 03:15:51 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:17.063 03:15:51 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:17.063 03:15:51 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:17.063 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:17.063 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:15:17.063 00:15:17.063 --- 10.0.0.2 ping statistics --- 00:15:17.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.063 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:15:17.063 03:15:51 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:17.063 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:17.063 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:15:17.063 00:15:17.063 --- 10.0.0.1 ping statistics --- 00:15:17.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.063 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:15:17.063 03:15:51 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:17.063 03:15:51 -- nvmf/common.sh@411 -- # return 0 00:15:17.063 03:15:51 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:17.063 03:15:51 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:17.063 03:15:51 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:17.063 03:15:51 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:17.063 03:15:51 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:17.063 03:15:51 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:17.063 03:15:51 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:17.063 03:15:51 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:15:17.063 03:15:51 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:17.063 03:15:51 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:17.063 03:15:51 -- common/autotest_common.sh@10 -- # set +x 00:15:17.063 03:15:51 -- nvmf/common.sh@470 -- # nvmfpid=1484279 00:15:17.063 03:15:51 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:17.063 03:15:51 -- nvmf/common.sh@471 -- # waitforlisten 1484279 00:15:17.063 03:15:51 -- common/autotest_common.sh@817 -- # '[' -z 1484279 ']' 00:15:17.063 03:15:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:17.063 03:15:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:17.063 03:15:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:17.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:17.063 03:15:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:17.063 03:15:51 -- common/autotest_common.sh@10 -- # set +x 00:15:17.063 [2024-04-25 03:15:51.409281] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:15:17.063 [2024-04-25 03:15:51.409356] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:17.063 EAL: No free 2048 kB hugepages reported on node 1 00:15:17.063 [2024-04-25 03:15:51.479517] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:17.322 [2024-04-25 03:15:51.594030] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:17.322 [2024-04-25 03:15:51.594093] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:17.322 [2024-04-25 03:15:51.594118] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:17.322 [2024-04-25 03:15:51.594132] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:17.322 [2024-04-25 03:15:51.594143] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:17.323 [2024-04-25 03:15:51.594175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:17.891 03:15:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:17.891 03:15:52 -- common/autotest_common.sh@850 -- # return 0 00:15:17.891 03:15:52 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:17.891 03:15:52 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:17.891 03:15:52 -- common/autotest_common.sh@10 -- # set +x 00:15:18.149 03:15:52 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:18.149 03:15:52 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:18.149 03:15:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:18.149 03:15:52 -- common/autotest_common.sh@10 -- # set +x 00:15:18.149 [2024-04-25 03:15:52.413953] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:18.149 03:15:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:18.149 03:15:52 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:18.149 03:15:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:18.149 03:15:52 -- common/autotest_common.sh@10 -- # set +x 00:15:18.149 Malloc0 00:15:18.149 03:15:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:18.149 03:15:52 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:18.149 03:15:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:18.149 03:15:52 -- common/autotest_common.sh@10 -- # set +x 00:15:18.149 03:15:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:18.149 03:15:52 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:18.149 03:15:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:18.149 03:15:52 -- common/autotest_common.sh@10 -- # set +x 00:15:18.149 03:15:52 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:18.149 03:15:52 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:18.149 03:15:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:18.149 03:15:52 -- common/autotest_common.sh@10 -- # set +x 00:15:18.149 [2024-04-25 03:15:52.482572] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:18.149 03:15:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:18.149 03:15:52 -- target/queue_depth.sh@30 -- # bdevperf_pid=1484435 00:15:18.149 03:15:52 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:15:18.149 03:15:52 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:18.149 03:15:52 -- target/queue_depth.sh@33 -- # waitforlisten 1484435 /var/tmp/bdevperf.sock 00:15:18.149 03:15:52 -- common/autotest_common.sh@817 -- # '[' -z 1484435 ']' 00:15:18.149 03:15:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:18.149 03:15:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:18.149 03:15:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:18.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:18.150 03:15:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:18.150 03:15:52 -- common/autotest_common.sh@10 -- # set +x 00:15:18.150 [2024-04-25 03:15:52.525669] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 
00:15:18.150 [2024-04-25 03:15:52.525738] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1484435 ] 00:15:18.150 EAL: No free 2048 kB hugepages reported on node 1 00:15:18.150 [2024-04-25 03:15:52.586002] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.408 [2024-04-25 03:15:52.692077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.408 03:15:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:18.408 03:15:52 -- common/autotest_common.sh@850 -- # return 0 00:15:18.408 03:15:52 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:18.408 03:15:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:18.408 03:15:52 -- common/autotest_common.sh@10 -- # set +x 00:15:18.408 NVMe0n1 00:15:18.408 03:15:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:18.408 03:15:52 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:18.668 Running I/O for 10 seconds... 
00:15:28.680 00:15:28.680 Latency(us) 00:15:28.680 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:28.680 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:15:28.680 Verification LBA range: start 0x0 length 0x4000 00:15:28.680 NVMe0n1 : 10.08 8336.52 32.56 0.00 0.00 122309.76 18155.90 84662.80 00:15:28.680 =================================================================================================================== 00:15:28.680 Total : 8336.52 32.56 0.00 0.00 122309.76 18155.90 84662.80 00:15:28.680 0 00:15:28.680 03:16:03 -- target/queue_depth.sh@39 -- # killprocess 1484435 00:15:28.680 03:16:03 -- common/autotest_common.sh@936 -- # '[' -z 1484435 ']' 00:15:28.680 03:16:03 -- common/autotest_common.sh@940 -- # kill -0 1484435 00:15:28.680 03:16:03 -- common/autotest_common.sh@941 -- # uname 00:15:28.680 03:16:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:28.680 03:16:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1484435 00:15:28.680 03:16:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:28.680 03:16:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:28.680 03:16:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1484435' 00:15:28.680 killing process with pid 1484435 00:15:28.680 03:16:03 -- common/autotest_common.sh@955 -- # kill 1484435 00:15:28.680 Received shutdown signal, test time was about 10.000000 seconds 00:15:28.680 00:15:28.680 Latency(us) 00:15:28.680 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:28.680 =================================================================================================================== 00:15:28.680 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:28.680 03:16:03 -- common/autotest_common.sh@960 -- # wait 1484435 00:15:28.939 03:16:03 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:28.939 03:16:03 -- 
target/queue_depth.sh@43 -- # nvmftestfini 00:15:28.939 03:16:03 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:28.939 03:16:03 -- nvmf/common.sh@117 -- # sync 00:15:28.939 03:16:03 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:28.939 03:16:03 -- nvmf/common.sh@120 -- # set +e 00:15:28.939 03:16:03 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:28.939 03:16:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:28.939 rmmod nvme_tcp 00:15:29.198 rmmod nvme_fabrics 00:15:29.198 rmmod nvme_keyring 00:15:29.198 03:16:03 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:29.198 03:16:03 -- nvmf/common.sh@124 -- # set -e 00:15:29.198 03:16:03 -- nvmf/common.sh@125 -- # return 0 00:15:29.198 03:16:03 -- nvmf/common.sh@478 -- # '[' -n 1484279 ']' 00:15:29.198 03:16:03 -- nvmf/common.sh@479 -- # killprocess 1484279 00:15:29.198 03:16:03 -- common/autotest_common.sh@936 -- # '[' -z 1484279 ']' 00:15:29.198 03:16:03 -- common/autotest_common.sh@940 -- # kill -0 1484279 00:15:29.199 03:16:03 -- common/autotest_common.sh@941 -- # uname 00:15:29.199 03:16:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:29.199 03:16:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1484279 00:15:29.199 03:16:03 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:29.199 03:16:03 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:29.199 03:16:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1484279' 00:15:29.199 killing process with pid 1484279 00:15:29.199 03:16:03 -- common/autotest_common.sh@955 -- # kill 1484279 00:15:29.199 03:16:03 -- common/autotest_common.sh@960 -- # wait 1484279 00:15:29.457 03:16:03 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:29.457 03:16:03 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:29.457 03:16:03 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:29.457 03:16:03 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
00:15:29.457 03:16:03 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:29.457 03:16:03 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:29.457 03:16:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:29.457 03:16:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:31.365 03:16:05 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:31.624 00:15:31.624 real 0m16.672s 00:15:31.624 user 0m23.374s 00:15:31.624 sys 0m3.055s 00:15:31.624 03:16:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:31.624 03:16:05 -- common/autotest_common.sh@10 -- # set +x 00:15:31.624 ************************************ 00:15:31.624 END TEST nvmf_queue_depth 00:15:31.624 ************************************ 00:15:31.624 03:16:05 -- nvmf/nvmf.sh@52 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:31.624 03:16:05 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:31.624 03:16:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:31.624 03:16:05 -- common/autotest_common.sh@10 -- # set +x 00:15:31.624 ************************************ 00:15:31.624 START TEST nvmf_multipath 00:15:31.624 ************************************ 00:15:31.624 03:16:05 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:31.624 * Looking for test storage... 
00:15:31.624 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:31.624 03:16:06 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:31.624 03:16:06 -- nvmf/common.sh@7 -- # uname -s 00:15:31.624 03:16:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:31.624 03:16:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:31.624 03:16:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:31.624 03:16:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:31.624 03:16:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:31.624 03:16:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:31.624 03:16:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:31.624 03:16:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:31.624 03:16:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:31.624 03:16:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:31.624 03:16:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:31.624 03:16:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:31.624 03:16:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:31.624 03:16:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:31.624 03:16:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:31.624 03:16:06 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:31.624 03:16:06 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:31.624 03:16:06 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:31.624 03:16:06 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:31.624 03:16:06 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:31.624 03:16:06 -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.625 03:16:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.625 03:16:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.625 03:16:06 -- paths/export.sh@5 -- # export PATH 00:15:31.625 03:16:06 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.625 03:16:06 -- nvmf/common.sh@47 -- # : 0 00:15:31.625 03:16:06 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:31.625 03:16:06 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:31.625 03:16:06 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:31.625 03:16:06 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:31.625 03:16:06 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:31.625 03:16:06 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:31.625 03:16:06 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:31.625 03:16:06 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:31.625 03:16:06 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:31.625 03:16:06 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:31.625 03:16:06 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:15:31.625 03:16:06 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:31.625 03:16:06 -- target/multipath.sh@43 -- # nvmftestinit 00:15:31.625 03:16:06 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:31.625 03:16:06 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:31.625 03:16:06 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:31.625 03:16:06 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:31.625 03:16:06 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:31.625 03:16:06 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:15:31.625 03:16:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:31.625 03:16:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:31.625 03:16:06 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:15:31.625 03:16:06 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:31.625 03:16:06 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:31.625 03:16:06 -- common/autotest_common.sh@10 -- # set +x 00:15:33.530 03:16:07 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:33.530 03:16:07 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:33.530 03:16:07 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:33.530 03:16:07 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:33.530 03:16:07 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:33.530 03:16:07 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:33.530 03:16:07 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:33.530 03:16:07 -- nvmf/common.sh@295 -- # net_devs=() 00:15:33.530 03:16:07 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:33.530 03:16:07 -- nvmf/common.sh@296 -- # e810=() 00:15:33.530 03:16:07 -- nvmf/common.sh@296 -- # local -ga e810 00:15:33.530 03:16:07 -- nvmf/common.sh@297 -- # x722=() 00:15:33.530 03:16:07 -- nvmf/common.sh@297 -- # local -ga x722 00:15:33.530 03:16:07 -- nvmf/common.sh@298 -- # mlx=() 00:15:33.530 03:16:07 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:33.530 03:16:07 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:33.530 03:16:07 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:33.530 03:16:07 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:33.531 03:16:07 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:33.531 03:16:07 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:33.531 03:16:07 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:15:33.531 03:16:07 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:33.531 03:16:07 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:33.531 03:16:07 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:33.531 03:16:07 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:33.531 03:16:07 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:33.531 03:16:07 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:33.531 03:16:07 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:33.531 03:16:07 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:33.531 03:16:07 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:33.531 03:16:07 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:33.531 03:16:07 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:33.531 03:16:07 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:33.531 03:16:07 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:33.531 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:33.531 03:16:07 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:33.531 03:16:07 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:33.531 03:16:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:33.531 03:16:07 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:33.531 03:16:07 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:33.531 03:16:07 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:33.531 03:16:07 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:33.531 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:33.531 03:16:07 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:33.531 03:16:07 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:33.531 03:16:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:33.531 03:16:07 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:33.531 03:16:07 
-- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:33.531 03:16:07 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:33.531 03:16:07 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:33.531 03:16:07 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:33.531 03:16:07 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:33.531 03:16:07 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:33.531 03:16:07 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:33.531 03:16:07 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:33.531 03:16:07 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:33.531 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:33.531 03:16:07 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:33.531 03:16:07 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:33.531 03:16:07 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:33.531 03:16:07 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:33.531 03:16:07 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:33.531 03:16:07 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:33.531 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:33.531 03:16:07 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:33.531 03:16:07 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:33.531 03:16:07 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:33.531 03:16:07 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:33.531 03:16:07 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:33.531 03:16:07 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:33.531 03:16:07 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:33.531 03:16:07 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:33.531 03:16:07 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:33.531 03:16:07 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 
00:15:33.531 03:16:07 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:33.531 03:16:07 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:33.531 03:16:07 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:33.531 03:16:07 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:33.531 03:16:07 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:33.531 03:16:07 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:33.531 03:16:07 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:33.531 03:16:07 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:33.531 03:16:07 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:33.531 03:16:08 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:33.531 03:16:08 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:33.531 03:16:08 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:33.531 03:16:08 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:33.790 03:16:08 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:33.790 03:16:08 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:33.790 03:16:08 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:33.790 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:33.790 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:15:33.790 00:15:33.790 --- 10.0.0.2 ping statistics --- 00:15:33.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:33.790 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:15:33.790 03:16:08 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:33.790 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:33.790 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:15:33.790 00:15:33.790 --- 10.0.0.1 ping statistics --- 00:15:33.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:33.790 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:15:33.790 03:16:08 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:33.790 03:16:08 -- nvmf/common.sh@411 -- # return 0 00:15:33.790 03:16:08 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:33.790 03:16:08 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:33.790 03:16:08 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:33.790 03:16:08 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:33.790 03:16:08 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:33.790 03:16:08 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:33.790 03:16:08 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:33.790 03:16:08 -- target/multipath.sh@45 -- # '[' -z ']' 00:15:33.790 03:16:08 -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:15:33.790 only one NIC for nvmf test 00:15:33.790 03:16:08 -- target/multipath.sh@47 -- # nvmftestfini 00:15:33.790 03:16:08 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:33.790 03:16:08 -- nvmf/common.sh@117 -- # sync 00:15:33.790 03:16:08 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:33.790 03:16:08 -- nvmf/common.sh@120 -- # set +e 00:15:33.790 03:16:08 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:33.790 03:16:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:33.790 rmmod nvme_tcp 00:15:33.790 rmmod nvme_fabrics 00:15:33.790 rmmod nvme_keyring 00:15:33.790 03:16:08 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:33.790 03:16:08 -- nvmf/common.sh@124 -- # set -e 00:15:33.790 03:16:08 -- nvmf/common.sh@125 -- # return 0 00:15:33.790 03:16:08 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:15:33.790 03:16:08 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:33.790 03:16:08 -- 
nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:33.790 03:16:08 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:33.790 03:16:08 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:33.790 03:16:08 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:33.790 03:16:08 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:33.790 03:16:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:33.790 03:16:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:36.387 03:16:10 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:36.387 03:16:10 -- target/multipath.sh@48 -- # exit 0 00:15:36.387 03:16:10 -- target/multipath.sh@1 -- # nvmftestfini 00:15:36.387 03:16:10 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:36.387 03:16:10 -- nvmf/common.sh@117 -- # sync 00:15:36.387 03:16:10 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:36.387 03:16:10 -- nvmf/common.sh@120 -- # set +e 00:15:36.387 03:16:10 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:36.387 03:16:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:36.387 03:16:10 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:36.387 03:16:10 -- nvmf/common.sh@124 -- # set -e 00:15:36.387 03:16:10 -- nvmf/common.sh@125 -- # return 0 00:15:36.387 03:16:10 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:15:36.387 03:16:10 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:36.387 03:16:10 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:36.387 03:16:10 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:36.387 03:16:10 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:36.387 03:16:10 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:36.387 03:16:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:36.387 03:16:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:36.387 03:16:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:36.387 03:16:10 
-- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:36.387 00:15:36.387 real 0m4.221s 00:15:36.387 user 0m0.737s 00:15:36.387 sys 0m1.467s 00:15:36.387 03:16:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:36.387 03:16:10 -- common/autotest_common.sh@10 -- # set +x 00:15:36.387 ************************************ 00:15:36.387 END TEST nvmf_multipath 00:15:36.387 ************************************ 00:15:36.387 03:16:10 -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:36.387 03:16:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:36.387 03:16:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:36.387 03:16:10 -- common/autotest_common.sh@10 -- # set +x 00:15:36.387 ************************************ 00:15:36.387 START TEST nvmf_zcopy 00:15:36.387 ************************************ 00:15:36.387 03:16:10 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:36.387 * Looking for test storage... 
00:15:36.387 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:36.387 03:16:10 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:36.387 03:16:10 -- nvmf/common.sh@7 -- # uname -s 00:15:36.387 03:16:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:36.387 03:16:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:36.387 03:16:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:36.387 03:16:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:36.387 03:16:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:36.387 03:16:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:36.387 03:16:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:36.388 03:16:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:36.388 03:16:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:36.388 03:16:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:36.388 03:16:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:36.388 03:16:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:36.388 03:16:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:36.388 03:16:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:36.388 03:16:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:36.388 03:16:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:36.388 03:16:10 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:36.388 03:16:10 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:36.388 03:16:10 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:36.388 03:16:10 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:36.388 03:16:10 -- paths/export.sh@2 -- 
# PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.388 03:16:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.388 03:16:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.388 03:16:10 -- paths/export.sh@5 -- # export PATH 00:15:36.388 03:16:10 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.388 03:16:10 -- nvmf/common.sh@47 -- # : 0 00:15:36.388 03:16:10 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:36.388 03:16:10 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:36.388 03:16:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:36.388 03:16:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:36.388 03:16:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:36.388 03:16:10 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:36.388 03:16:10 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:36.388 03:16:10 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:36.388 03:16:10 -- target/zcopy.sh@12 -- # nvmftestinit 00:15:36.388 03:16:10 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:36.388 03:16:10 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:36.388 03:16:10 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:36.388 03:16:10 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:36.388 03:16:10 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:36.388 03:16:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:36.388 03:16:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:36.388 03:16:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:36.388 03:16:10 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:15:36.388 03:16:10 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:36.388 03:16:10 -- 
nvmf/common.sh@285 -- # xtrace_disable 00:15:36.388 03:16:10 -- common/autotest_common.sh@10 -- # set +x 00:15:38.326 03:16:12 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:38.326 03:16:12 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:38.326 03:16:12 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:38.326 03:16:12 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:38.326 03:16:12 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:38.326 03:16:12 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:38.326 03:16:12 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:38.326 03:16:12 -- nvmf/common.sh@295 -- # net_devs=() 00:15:38.326 03:16:12 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:38.326 03:16:12 -- nvmf/common.sh@296 -- # e810=() 00:15:38.326 03:16:12 -- nvmf/common.sh@296 -- # local -ga e810 00:15:38.326 03:16:12 -- nvmf/common.sh@297 -- # x722=() 00:15:38.326 03:16:12 -- nvmf/common.sh@297 -- # local -ga x722 00:15:38.326 03:16:12 -- nvmf/common.sh@298 -- # mlx=() 00:15:38.326 03:16:12 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:38.326 03:16:12 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:38.326 03:16:12 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:38.326 03:16:12 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:38.326 03:16:12 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:38.326 03:16:12 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:38.326 03:16:12 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:38.326 03:16:12 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:38.326 03:16:12 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:38.326 03:16:12 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:38.326 03:16:12 -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:38.326 03:16:12 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:38.326 03:16:12 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:38.326 03:16:12 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:38.326 03:16:12 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:38.326 03:16:12 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:38.326 03:16:12 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:38.326 03:16:12 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:38.326 03:16:12 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:38.326 03:16:12 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:38.326 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:38.326 03:16:12 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:38.326 03:16:12 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:38.326 03:16:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:38.326 03:16:12 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:38.326 03:16:12 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:38.326 03:16:12 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:38.326 03:16:12 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:38.326 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:38.326 03:16:12 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:38.326 03:16:12 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:38.326 03:16:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:38.326 03:16:12 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:38.326 03:16:12 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:38.326 03:16:12 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:38.326 03:16:12 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:38.326 03:16:12 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:38.326 03:16:12 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:15:38.326 03:16:12 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:38.327 03:16:12 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:38.327 03:16:12 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:38.327 03:16:12 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:38.327 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:38.327 03:16:12 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:38.327 03:16:12 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:38.327 03:16:12 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:38.327 03:16:12 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:38.327 03:16:12 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:38.327 03:16:12 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:38.327 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:38.327 03:16:12 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:38.327 03:16:12 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:38.327 03:16:12 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:38.327 03:16:12 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:38.327 03:16:12 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:38.327 03:16:12 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:38.327 03:16:12 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:38.327 03:16:12 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:38.327 03:16:12 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:38.327 03:16:12 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:38.327 03:16:12 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:38.327 03:16:12 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:38.327 03:16:12 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:38.327 03:16:12 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:15:38.327 03:16:12 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:38.327 03:16:12 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:38.327 03:16:12 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:38.327 03:16:12 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:38.327 03:16:12 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:38.327 03:16:12 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:38.327 03:16:12 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:38.327 03:16:12 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:38.327 03:16:12 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:38.327 03:16:12 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:38.327 03:16:12 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:38.327 03:16:12 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:38.327 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:38.327 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:15:38.327 00:15:38.327 --- 10.0.0.2 ping statistics --- 00:15:38.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:38.327 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:15:38.327 03:16:12 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:38.327 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:38.327 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:15:38.327 00:15:38.327 --- 10.0.0.1 ping statistics --- 00:15:38.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:38.327 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:15:38.327 03:16:12 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:38.327 03:16:12 -- nvmf/common.sh@411 -- # return 0 00:15:38.327 03:16:12 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:38.327 03:16:12 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:38.327 03:16:12 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:38.327 03:16:12 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:38.327 03:16:12 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:38.327 03:16:12 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:38.327 03:16:12 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:38.327 03:16:12 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:15:38.327 03:16:12 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:38.327 03:16:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:38.327 03:16:12 -- common/autotest_common.sh@10 -- # set +x 00:15:38.327 03:16:12 -- nvmf/common.sh@470 -- # nvmfpid=1489603 00:15:38.327 03:16:12 -- nvmf/common.sh@471 -- # waitforlisten 1489603 00:15:38.327 03:16:12 -- common/autotest_common.sh@817 -- # '[' -z 1489603 ']' 00:15:38.327 03:16:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:38.327 03:16:12 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:38.327 03:16:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:38.327 03:16:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:38.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:38.327 03:16:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:38.327 03:16:12 -- common/autotest_common.sh@10 -- # set +x 00:15:38.327 [2024-04-25 03:16:12.708234] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:15:38.327 [2024-04-25 03:16:12.708324] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:38.327 EAL: No free 2048 kB hugepages reported on node 1 00:15:38.327 [2024-04-25 03:16:12.779900] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:38.585 [2024-04-25 03:16:12.898111] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:38.585 [2024-04-25 03:16:12.898170] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:38.585 [2024-04-25 03:16:12.898187] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:38.585 [2024-04-25 03:16:12.898201] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:38.585 [2024-04-25 03:16:12.898213] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:38.585 [2024-04-25 03:16:12.898245] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:39.518 03:16:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:39.518 03:16:13 -- common/autotest_common.sh@850 -- # return 0 00:15:39.518 03:16:13 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:39.518 03:16:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:39.518 03:16:13 -- common/autotest_common.sh@10 -- # set +x 00:15:39.518 03:16:13 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:39.518 03:16:13 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:15:39.518 03:16:13 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:15:39.518 03:16:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:39.518 03:16:13 -- common/autotest_common.sh@10 -- # set +x 00:15:39.518 [2024-04-25 03:16:13.704741] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:39.518 03:16:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:39.518 03:16:13 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:39.518 03:16:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:39.518 03:16:13 -- common/autotest_common.sh@10 -- # set +x 00:15:39.518 03:16:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:39.518 03:16:13 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:39.518 03:16:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:39.518 03:16:13 -- common/autotest_common.sh@10 -- # set +x 00:15:39.518 [2024-04-25 03:16:13.720936] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:39.518 03:16:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:39.518 03:16:13 -- target/zcopy.sh@27 -- # rpc_cmd 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:39.518 03:16:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:39.518 03:16:13 -- common/autotest_common.sh@10 -- # set +x 00:15:39.518 03:16:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:39.518 03:16:13 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:15:39.518 03:16:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:39.518 03:16:13 -- common/autotest_common.sh@10 -- # set +x 00:15:39.518 malloc0 00:15:39.518 03:16:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:39.518 03:16:13 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:39.518 03:16:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:39.518 03:16:13 -- common/autotest_common.sh@10 -- # set +x 00:15:39.518 03:16:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:39.518 03:16:13 -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:15:39.518 03:16:13 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:15:39.518 03:16:13 -- nvmf/common.sh@521 -- # config=() 00:15:39.518 03:16:13 -- nvmf/common.sh@521 -- # local subsystem config 00:15:39.518 03:16:13 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:15:39.518 03:16:13 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:15:39.518 { 00:15:39.518 "params": { 00:15:39.518 "name": "Nvme$subsystem", 00:15:39.518 "trtype": "$TEST_TRANSPORT", 00:15:39.518 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:39.518 "adrfam": "ipv4", 00:15:39.518 "trsvcid": "$NVMF_PORT", 00:15:39.518 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:39.518 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:39.518 "hdgst": ${hdgst:-false}, 00:15:39.518 "ddgst": ${ddgst:-false} 00:15:39.518 }, 00:15:39.518 "method": "bdev_nvme_attach_controller" 00:15:39.518 } 00:15:39.518 
EOF 00:15:39.518 )") 00:15:39.518 03:16:13 -- nvmf/common.sh@543 -- # cat 00:15:39.518 03:16:13 -- nvmf/common.sh@545 -- # jq . 00:15:39.518 03:16:13 -- nvmf/common.sh@546 -- # IFS=, 00:15:39.518 03:16:13 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:15:39.518 "params": { 00:15:39.518 "name": "Nvme1", 00:15:39.518 "trtype": "tcp", 00:15:39.518 "traddr": "10.0.0.2", 00:15:39.518 "adrfam": "ipv4", 00:15:39.518 "trsvcid": "4420", 00:15:39.518 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:39.518 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:39.518 "hdgst": false, 00:15:39.518 "ddgst": false 00:15:39.518 }, 00:15:39.518 "method": "bdev_nvme_attach_controller" 00:15:39.518 }' 00:15:39.518 [2024-04-25 03:16:13.796745] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:15:39.518 [2024-04-25 03:16:13.796825] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1489758 ] 00:15:39.518 EAL: No free 2048 kB hugepages reported on node 1 00:15:39.518 [2024-04-25 03:16:13.860006] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:39.518 [2024-04-25 03:16:13.981402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:40.085 Running I/O for 10 seconds... 
00:15:50.051 00:15:50.051 Latency(us) 00:15:50.051 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:50.051 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:15:50.051 Verification LBA range: start 0x0 length 0x1000 00:15:50.051 Nvme1n1 : 10.02 5879.87 45.94 0.00 0.00 21710.73 2779.21 37088.52 00:15:50.051 =================================================================================================================== 00:15:50.051 Total : 5879.87 45.94 0.00 0.00 21710.73 2779.21 37088.52 00:15:50.312 03:16:24 -- target/zcopy.sh@39 -- # perfpid=1491066 00:15:50.312 03:16:24 -- target/zcopy.sh@41 -- # xtrace_disable 00:15:50.312 03:16:24 -- common/autotest_common.sh@10 -- # set +x 00:15:50.312 03:16:24 -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:15:50.312 03:16:24 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:15:50.312 03:16:24 -- nvmf/common.sh@521 -- # config=() 00:15:50.312 03:16:24 -- nvmf/common.sh@521 -- # local subsystem config 00:15:50.312 03:16:24 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:15:50.312 03:16:24 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:15:50.312 { 00:15:50.312 "params": { 00:15:50.312 "name": "Nvme$subsystem", 00:15:50.312 "trtype": "$TEST_TRANSPORT", 00:15:50.312 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:50.312 "adrfam": "ipv4", 00:15:50.312 "trsvcid": "$NVMF_PORT", 00:15:50.312 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:50.312 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:50.312 "hdgst": ${hdgst:-false}, 00:15:50.312 "ddgst": ${ddgst:-false} 00:15:50.312 }, 00:15:50.312 "method": "bdev_nvme_attach_controller" 00:15:50.312 } 00:15:50.312 EOF 00:15:50.312 )") 00:15:50.312 03:16:24 -- nvmf/common.sh@543 -- # cat 00:15:50.312 [2024-04-25 03:16:24.601833] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use
00:15:50.312 [2024-04-25 03:16:24.601878] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:50.312 03:16:24 -- nvmf/common.sh@545 -- # jq .
00:15:50.312 03:16:24 -- nvmf/common.sh@546 -- # IFS=,
00:15:50.312 03:16:24 -- nvmf/common.sh@547 -- # printf '%s\n' '{
00:15:50.312 "params": {
00:15:50.312 "name": "Nvme1",
00:15:50.312 "trtype": "tcp",
00:15:50.312 "traddr": "10.0.0.2",
00:15:50.312 "adrfam": "ipv4",
00:15:50.312 "trsvcid": "4420",
00:15:50.312 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:15:50.312 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:15:50.312 "hdgst": false,
00:15:50.312 "ddgst": false
00:15:50.312 },
00:15:50.312 "method": "bdev_nvme_attach_controller"
00:15:50.312 }'
00:15:50.312 [2024-04-25 03:16:24.609783] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:50.312 [2024-04-25 03:16:24.609808] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:50.312 [2024-04-25 03:16:24.617804] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:50.312 [2024-04-25 03:16:24.617827] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:50.312 [2024-04-25 03:16:24.625823] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:50.312 [2024-04-25 03:16:24.625852] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:50.312 [2024-04-25 03:16:24.633845] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:50.312 [2024-04-25 03:16:24.633866] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:50.312 [2024-04-25 03:16:24.638178] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization...
00:15:50.312 [2024-04-25 03:16:24.638246] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1491066 ]
00:15:50.312 [2024-04-25 03:16:24.641866] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:50.312 [2024-04-25 03:16:24.641888] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:50.312 [2024-04-25 03:16:24.649887] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:50.312 [2024-04-25 03:16:24.649923] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:50.312 [2024-04-25 03:16:24.657935] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:50.312 [2024-04-25 03:16:24.657956] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:50.312 EAL: No free 2048 kB hugepages reported on node 1
00:15:50.312 [2024-04-25 03:16:24.665948] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:50.312 [2024-04-25 03:16:24.665968] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:50.312 [2024-04-25 03:16:24.673969] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:50.312 [2024-04-25 03:16:24.673991] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:50.312 [2024-04-25 03:16:24.682005] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:50.312 [2024-04-25 03:16:24.682026] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:50.312 [2024-04-25 03:16:24.690026] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:50.312 [2024-04-25 03:16:24.690047]
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.312 [2024-04-25 03:16:24.698033] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.312 [2024-04-25 03:16:24.698053] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.312 [2024-04-25 03:16:24.698146] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:50.312 [2024-04-25 03:16:24.706119] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.312 [2024-04-25 03:16:24.706156] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.312 [2024-04-25 03:16:24.714110] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.312 [2024-04-25 03:16:24.714148] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.312 [2024-04-25 03:16:24.722099] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.312 [2024-04-25 03:16:24.722121] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.312 [2024-04-25 03:16:24.730120] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.312 [2024-04-25 03:16:24.730141] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.312 [2024-04-25 03:16:24.738144] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.312 [2024-04-25 03:16:24.738165] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.312 [2024-04-25 03:16:24.746170] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.312 [2024-04-25 03:16:24.746191] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.312 [2024-04-25 03:16:24.754188] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already 
in use 00:15:50.312 [2024-04-25 03:16:24.754209] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.312 [2024-04-25 03:16:24.762244] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.312 [2024-04-25 03:16:24.762278] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.312 [2024-04-25 03:16:24.770262] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.312 [2024-04-25 03:16:24.770296] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.312 [2024-04-25 03:16:24.778255] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.312 [2024-04-25 03:16:24.778276] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.312 [2024-04-25 03:16:24.786291] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.312 [2024-04-25 03:16:24.786318] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.312 [2024-04-25 03:16:24.794314] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.312 [2024-04-25 03:16:24.794339] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.312 [2024-04-25 03:16:24.802333] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.312 [2024-04-25 03:16:24.802359] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.313 [2024-04-25 03:16:24.810358] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.313 [2024-04-25 03:16:24.810384] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.573 [2024-04-25 03:16:24.813110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:50.573 [2024-04-25 03:16:24.818383] 
subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.573 [2024-04-25 03:16:24.818409] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.573 [2024-04-25 03:16:24.826405] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.573 [2024-04-25 03:16:24.826432] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.573 [2024-04-25 03:16:24.834456] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.573 [2024-04-25 03:16:24.834494] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.573 [2024-04-25 03:16:24.842482] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.573 [2024-04-25 03:16:24.842521] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.573 [2024-04-25 03:16:24.850503] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.573 [2024-04-25 03:16:24.850544] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.573 [2024-04-25 03:16:24.858527] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.573 [2024-04-25 03:16:24.858566] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.573 [2024-04-25 03:16:24.866554] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.573 [2024-04-25 03:16:24.866594] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.573 [2024-04-25 03:16:24.874574] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.573 [2024-04-25 03:16:24.874616] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.573 [2024-04-25 03:16:24.882559] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:15:50.573 [2024-04-25 03:16:24.882584] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.573 [2024-04-25 03:16:24.890602] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.573 [2024-04-25 03:16:24.890646] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.573 [2024-04-25 03:16:24.898639] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.573 [2024-04-25 03:16:24.898690] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.573 [2024-04-25 03:16:24.906680] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.573 [2024-04-25 03:16:24.906728] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.573 [2024-04-25 03:16:24.914657] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.573 [2024-04-25 03:16:24.914694] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.573 [2024-04-25 03:16:24.922688] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.573 [2024-04-25 03:16:24.922710] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.573 [2024-04-25 03:16:24.930718] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.573 [2024-04-25 03:16:24.930742] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.573 [2024-04-25 03:16:24.938735] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.573 [2024-04-25 03:16:24.938760] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.573 [2024-04-25 03:16:24.946757] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.573 
[2024-04-25 03:16:24.946781] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.573 [2024-04-25 03:16:24.954763] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.573 [2024-04-25 03:16:24.954785] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.573 [2024-04-25 03:16:24.962783] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.573 [2024-04-25 03:16:24.962806] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.573 [2024-04-25 03:16:24.970803] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.573 [2024-04-25 03:16:24.970825] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.573 [2024-04-25 03:16:24.978825] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.573 [2024-04-25 03:16:24.978847] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.573 [2024-04-25 03:16:24.986847] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.573 [2024-04-25 03:16:24.986868] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.573 [2024-04-25 03:16:24.994868] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.573 [2024-04-25 03:16:24.994889] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.573 [2024-04-25 03:16:25.002928] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.573 [2024-04-25 03:16:25.002956] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.573 [2024-04-25 03:16:25.010932] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.573 [2024-04-25 03:16:25.010956] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.573 [2024-04-25 03:16:25.018957] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.573 [2024-04-25 03:16:25.018997] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.573 [2024-04-25 03:16:25.026988] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.573 [2024-04-25 03:16:25.027014] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.573 [2024-04-25 03:16:25.035011] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.573 [2024-04-25 03:16:25.035043] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.573 [2024-04-25 03:16:25.043027] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.574 [2024-04-25 03:16:25.043053] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.574 Running I/O for 5 seconds... 
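Each "Requested NSID 1 already in use" / "Unable to add namespace" pair above is the target rejecting an add-namespace RPC for a namespace ID that is still attached, which this test provokes deliberately while bdevperf I/O runs. A toy bash model of that uniqueness check (the function names and table below are illustrative, not SPDK's internals):

```shell
# Toy model: one associative array stands in for a subsystem's namespace
# table. Adding an NSID that is already attached is rejected, mirroring the
# error pairs in the log above.
declare -A ns_table

add_ns() {  # add_ns NSID BDEV -> fails if the NSID is already attached
    local nsid=$1 bdev=$2
    if [[ -n ${ns_table[$nsid]:-} ]]; then
        echo "ERROR: Requested NSID $nsid already in use" >&2
        return 1
    fi
    ns_table[$nsid]=$bdev
}

remove_ns() {  # remove_ns NSID -> frees the NSID for reuse
    unset "ns_table[$1]"
}

add_ns 1 Malloc0                          # first attach succeeds
add_ns 1 Malloc1 || echo "add rejected"   # duplicate NSID is refused
remove_ns 1
add_ns 1 Malloc1 && echo "re-add ok"      # NSID free again after removal
```

In the real test the same add/remove cycle is driven through SPDK's JSON-RPC interface against the running target, so the rejection shows up as the subsystem.c/nvmf_rpc.c error pair rather than a shell return code.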
00:15:50.574 [2024-04-25 03:16:25.054468] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.574 [2024-04-25 03:16:25.054501] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.574 [2024-04-25 03:16:25.064272] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.574 [2024-04-25 03:16:25.064312] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.833 [2024-04-25 03:16:25.076555] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.833 [2024-04-25 03:16:25.076590] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.834 [2024-04-25 03:16:25.087375] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.834 [2024-04-25 03:16:25.087407] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.834 [2024-04-25 03:16:25.099345] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.834 [2024-04-25 03:16:25.099382] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.834 [2024-04-25 03:16:25.111016] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.834 [2024-04-25 03:16:25.111048] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.834 [2024-04-25 03:16:25.122866] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.834 [2024-04-25 03:16:25.122894] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.834 [2024-04-25 03:16:25.135078] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.834 [2024-04-25 03:16:25.135109] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.834 [2024-04-25 03:16:25.146656] 
subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.834 [2024-04-25 03:16:25.146699] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.834 [2024-04-25 03:16:25.158162] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.834 [2024-04-25 03:16:25.158193] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.834 [2024-04-25 03:16:25.169720] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.834 [2024-04-25 03:16:25.169748] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.834 [2024-04-25 03:16:25.180681] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.834 [2024-04-25 03:16:25.180710] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.834 [2024-04-25 03:16:25.191869] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.834 [2024-04-25 03:16:25.191899] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.834 [2024-04-25 03:16:25.202098] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.834 [2024-04-25 03:16:25.202126] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.834 [2024-04-25 03:16:25.213092] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.834 [2024-04-25 03:16:25.213119] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.834 [2024-04-25 03:16:25.223986] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.834 [2024-04-25 03:16:25.224014] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.834 [2024-04-25 03:16:25.233957] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:15:50.834 [2024-04-25 03:16:25.233984] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.834 [2024-04-25 03:16:25.245581] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.834 [2024-04-25 03:16:25.245624] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.834 [2024-04-25 03:16:25.256261] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.834 [2024-04-25 03:16:25.256288] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.834 [2024-04-25 03:16:25.267353] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.834 [2024-04-25 03:16:25.267381] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.834 [2024-04-25 03:16:25.277727] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.834 [2024-04-25 03:16:25.277762] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.834 [2024-04-25 03:16:25.288498] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.834 [2024-04-25 03:16:25.288525] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.834 [2024-04-25 03:16:25.298467] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.834 [2024-04-25 03:16:25.298494] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.834 [2024-04-25 03:16:25.309553] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.834 [2024-04-25 03:16:25.309582] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.834 [2024-04-25 03:16:25.320355] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.834 
[2024-04-25 03:16:25.320382] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.834 [2024-04-25 03:16:25.330443] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.834 [2024-04-25 03:16:25.330470] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:51.092 [2024-04-25 03:16:25.341848] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:51.093 [2024-04-25 03:16:25.341904] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:51.093 [2024-04-25 03:16:25.352648] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:51.093 [2024-04-25 03:16:25.352676] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:51.093 [2024-04-25 03:16:25.365381] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:51.093 [2024-04-25 03:16:25.365407] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:51.093 [2024-04-25 03:16:25.375383] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:51.093 [2024-04-25 03:16:25.375409] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:51.093 [2024-04-25 03:16:25.385741] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:51.093 [2024-04-25 03:16:25.385769] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:51.093 [2024-04-25 03:16:25.397036] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:51.093 [2024-04-25 03:16:25.397064] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:51.093 [2024-04-25 03:16:25.407328] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:51.093 [2024-04-25 03:16:25.407355] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:51.093 [2024-04-25 03:16:25.418133] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:51.093 [2024-04-25 03:16:25.418161] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:51.093 [2024-04-25 03:16:25.429613] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:51.093 [2024-04-25 03:16:25.429664] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:51.093 [2024-04-25 03:16:25.439837] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:51.093 [2024-04-25 03:16:25.439865] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:51.093 [2024-04-25 03:16:25.450539] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:51.093 [2024-04-25 03:16:25.450580] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:51.093 [2024-04-25 03:16:25.461156] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:51.093 [2024-04-25 03:16:25.461183] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:51.093 [2024-04-25 03:16:25.474017] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:51.093 [2024-04-25 03:16:25.474045] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:51.093 [2024-04-25 03:16:25.483300] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:51.093 [2024-04-25 03:16:25.483326] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:51.093 [2024-04-25 03:16:25.494263] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:51.093 [2024-04-25 03:16:25.494291] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:15:51.093 [2024-04-25 03:16:25.504095] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:51.093 [2024-04-25 03:16:25.504121] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:51.093 [2024-04-25 03:16:25.515803] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:51.093 [2024-04-25 03:16:25.515831] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:51.093 [2024-04-25 03:16:25.525741] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:51.093 [2024-04-25 03:16:25.525769] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:51.093 [2024-04-25 03:16:25.537143] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:51.093 [2024-04-25 03:16:25.537170] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:51.093 [2024-04-25 03:16:25.547644] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:51.093 [2024-04-25 03:16:25.547672] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:51.093 [2024-04-25 03:16:25.558829] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:51.093 [2024-04-25 03:16:25.558872] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:51.093 [2024-04-25 03:16:25.569293] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:51.093 [2024-04-25 03:16:25.569321] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:51.093 [2024-04-25 03:16:25.579654] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:51.093 [2024-04-25 03:16:25.579696] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:51.093 [2024-04-25 03:16:25.591001] 
subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:51.093 [2024-04-25 03:16:25.591029] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:51.351 [2024-04-25 03:16:25.601441] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:51.352 [2024-04-25 03:16:25.601470] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:51.352 [2024-04-25 03:16:25.612206] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:51.352 [2024-04-25 03:16:25.612233] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:51.352 [2024-04-25 03:16:25.622436] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:51.352 [2024-04-25 03:16:25.622463] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:51.352 [2024-04-25 03:16:25.633916] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:51.352 [2024-04-25 03:16:25.633944] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:51.352 [2024-04-25 03:16:25.644285] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:51.352 [2024-04-25 03:16:25.644312] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:51.352 [2024-04-25 03:16:25.655418] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:51.352 [2024-04-25 03:16:25.655446] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:51.352 [2024-04-25 03:16:25.666037] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:51.352 [2024-04-25 03:16:25.666064] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:51.352 [2024-04-25 03:16:25.677012] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:15:51.352 [2024-04-25 03:16:25.677040] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:51.352 
[... the pair "subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use" / "nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace" repeats at roughly 10 ms intervals from 03:16:25.687 through 03:16:27.515, same NSID and same call sites throughout ...]
[2024-04-25 03:16:27.526469] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.172 
[2024-04-25 03:16:27.526496] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.172 [2024-04-25 03:16:27.538262] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.172 [2024-04-25 03:16:27.538288] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.172 [2024-04-25 03:16:27.549174] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.172 [2024-04-25 03:16:27.549205] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.172 [2024-04-25 03:16:27.558067] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.172 [2024-04-25 03:16:27.558094] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.172 [2024-04-25 03:16:27.569672] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.172 [2024-04-25 03:16:27.569699] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.172 [2024-04-25 03:16:27.580401] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.172 [2024-04-25 03:16:27.580429] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.172 [2024-04-25 03:16:27.591118] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.172 [2024-04-25 03:16:27.591146] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.172 [2024-04-25 03:16:27.602090] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.172 [2024-04-25 03:16:27.602117] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.172 [2024-04-25 03:16:27.612853] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.172 [2024-04-25 03:16:27.612880] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.172 [2024-04-25 03:16:27.623880] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.172 [2024-04-25 03:16:27.623909] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.172 [2024-04-25 03:16:27.634469] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.172 [2024-04-25 03:16:27.634496] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.172 [2024-04-25 03:16:27.645893] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.172 [2024-04-25 03:16:27.645921] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.172 [2024-04-25 03:16:27.656257] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.172 [2024-04-25 03:16:27.656284] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.172 [2024-04-25 03:16:27.667165] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.172 [2024-04-25 03:16:27.667206] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.431 [2024-04-25 03:16:27.678267] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.431 [2024-04-25 03:16:27.678295] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.431 [2024-04-25 03:16:27.688823] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.431 [2024-04-25 03:16:27.688850] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.431 [2024-04-25 03:16:27.698695] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.431 [2024-04-25 03:16:27.698723] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:15:53.431 [2024-04-25 03:16:27.709827] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.431 [2024-04-25 03:16:27.709862] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.431 [2024-04-25 03:16:27.719537] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.431 [2024-04-25 03:16:27.719564] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.431 [2024-04-25 03:16:27.730356] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.431 [2024-04-25 03:16:27.730383] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.431 [2024-04-25 03:16:27.741348] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.431 [2024-04-25 03:16:27.741376] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.431 [2024-04-25 03:16:27.752153] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.431 [2024-04-25 03:16:27.752179] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.431 [2024-04-25 03:16:27.762742] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.431 [2024-04-25 03:16:27.762770] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.431 [2024-04-25 03:16:27.773607] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.431 [2024-04-25 03:16:27.773666] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.431 [2024-04-25 03:16:27.783857] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.431 [2024-04-25 03:16:27.783885] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.431 [2024-04-25 03:16:27.795090] 
subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.431 [2024-04-25 03:16:27.795117] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.431 [2024-04-25 03:16:27.805024] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.431 [2024-04-25 03:16:27.805051] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.431 [2024-04-25 03:16:27.816650] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.431 [2024-04-25 03:16:27.816678] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.431 [2024-04-25 03:16:27.826254] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.431 [2024-04-25 03:16:27.826281] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.431 [2024-04-25 03:16:27.838063] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.431 [2024-04-25 03:16:27.838090] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.431 [2024-04-25 03:16:27.847886] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.431 [2024-04-25 03:16:27.847927] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.431 [2024-04-25 03:16:27.858868] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.431 [2024-04-25 03:16:27.858895] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.431 [2024-04-25 03:16:27.869310] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.431 [2024-04-25 03:16:27.869337] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.431 [2024-04-25 03:16:27.879661] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:15:53.431 [2024-04-25 03:16:27.879689] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.431 [2024-04-25 03:16:27.889683] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.431 [2024-04-25 03:16:27.889711] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.431 [2024-04-25 03:16:27.900083] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.431 [2024-04-25 03:16:27.900111] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.432 [2024-04-25 03:16:27.910670] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.432 [2024-04-25 03:16:27.910705] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.432 [2024-04-25 03:16:27.920768] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.432 [2024-04-25 03:16:27.920796] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.692 [2024-04-25 03:16:27.932322] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.692 [2024-04-25 03:16:27.932350] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.692 [2024-04-25 03:16:27.942553] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.692 [2024-04-25 03:16:27.942580] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.692 [2024-04-25 03:16:27.953817] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.692 [2024-04-25 03:16:27.953845] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.692 [2024-04-25 03:16:27.963744] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.692 
[2024-04-25 03:16:27.963772] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.692 [2024-04-25 03:16:27.975094] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.692 [2024-04-25 03:16:27.975122] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.692 [2024-04-25 03:16:27.984900] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.692 [2024-04-25 03:16:27.984941] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.692 [2024-04-25 03:16:27.996094] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.692 [2024-04-25 03:16:27.996121] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.692 [2024-04-25 03:16:28.006010] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.692 [2024-04-25 03:16:28.006037] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.692 [2024-04-25 03:16:28.017428] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.692 [2024-04-25 03:16:28.017456] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.692 [2024-04-25 03:16:28.027715] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.692 [2024-04-25 03:16:28.027743] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.692 [2024-04-25 03:16:28.038442] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.692 [2024-04-25 03:16:28.038469] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.692 [2024-04-25 03:16:28.048853] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.692 [2024-04-25 03:16:28.048882] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.692 [2024-04-25 03:16:28.059592] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.692 [2024-04-25 03:16:28.059643] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.692 [2024-04-25 03:16:28.070081] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.692 [2024-04-25 03:16:28.070109] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.692 [2024-04-25 03:16:28.081525] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.692 [2024-04-25 03:16:28.081553] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.692 [2024-04-25 03:16:28.091936] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.692 [2024-04-25 03:16:28.091963] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.692 [2024-04-25 03:16:28.102649] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.692 [2024-04-25 03:16:28.102676] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.692 [2024-04-25 03:16:28.112701] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.692 [2024-04-25 03:16:28.112737] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.692 [2024-04-25 03:16:28.123664] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.692 [2024-04-25 03:16:28.123692] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.692 [2024-04-25 03:16:28.134211] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.692 [2024-04-25 03:16:28.134238] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:15:53.692 [2024-04-25 03:16:28.144157] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.692 [2024-04-25 03:16:28.144183] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.692 [2024-04-25 03:16:28.155167] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.692 [2024-04-25 03:16:28.155194] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.692 [2024-04-25 03:16:28.165502] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.692 [2024-04-25 03:16:28.165528] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.692 [2024-04-25 03:16:28.176880] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.692 [2024-04-25 03:16:28.176908] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.692 [2024-04-25 03:16:28.187171] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.692 [2024-04-25 03:16:28.187198] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.953 [2024-04-25 03:16:28.198731] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.953 [2024-04-25 03:16:28.198760] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.953 [2024-04-25 03:16:28.209281] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.953 [2024-04-25 03:16:28.209309] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.953 [2024-04-25 03:16:28.220574] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.953 [2024-04-25 03:16:28.220601] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.953 [2024-04-25 03:16:28.231323] 
subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.953 [2024-04-25 03:16:28.231349] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.953 [2024-04-25 03:16:28.242118] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.953 [2024-04-25 03:16:28.242146] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.953 [2024-04-25 03:16:28.252590] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.953 [2024-04-25 03:16:28.252640] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.953 [2024-04-25 03:16:28.263543] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.953 [2024-04-25 03:16:28.263570] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.953 [2024-04-25 03:16:28.274417] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.953 [2024-04-25 03:16:28.274444] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.953 [2024-04-25 03:16:28.284593] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.953 [2024-04-25 03:16:28.284643] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.953 [2024-04-25 03:16:28.296031] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.953 [2024-04-25 03:16:28.296058] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.953 [2024-04-25 03:16:28.306387] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.953 [2024-04-25 03:16:28.306414] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.953 [2024-04-25 03:16:28.317822] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:15:53.953 [2024-04-25 03:16:28.317872] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.953 [2024-04-25 03:16:28.329076] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.953 [2024-04-25 03:16:28.329103] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.953 [2024-04-25 03:16:28.338695] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.953 [2024-04-25 03:16:28.338722] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.953 [2024-04-25 03:16:28.350392] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.953 [2024-04-25 03:16:28.350433] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.953 [2024-04-25 03:16:28.360450] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.953 [2024-04-25 03:16:28.360476] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.953 [2024-04-25 03:16:28.371487] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.953 [2024-04-25 03:16:28.371514] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.953 [2024-04-25 03:16:28.381716] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.953 [2024-04-25 03:16:28.381744] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.953 [2024-04-25 03:16:28.392462] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.954 [2024-04-25 03:16:28.392488] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.954 [2024-04-25 03:16:28.403374] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.954 
[2024-04-25 03:16:28.403400] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.954 [2024-04-25 03:16:28.414396] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.954 [2024-04-25 03:16:28.414423] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.954 [2024-04-25 03:16:28.424247] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.954 [2024-04-25 03:16:28.424275] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.954 [2024-04-25 03:16:28.435583] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.954 [2024-04-25 03:16:28.435626] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.954 [2024-04-25 03:16:28.446311] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.954 [2024-04-25 03:16:28.446338] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.214 [2024-04-25 03:16:28.457009] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.214 [2024-04-25 03:16:28.457043] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.214 [2024-04-25 03:16:28.469398] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.214 [2024-04-25 03:16:28.469425] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.214 [2024-04-25 03:16:28.478789] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.214 [2024-04-25 03:16:28.478818] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.214 [2024-04-25 03:16:28.490035] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.214 [2024-04-25 03:16:28.490064] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.214 [2024-04-25 03:16:28.500840] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.214 [2024-04-25 03:16:28.500868] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.214 [2024-04-25 03:16:28.511489] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.214 [2024-04-25 03:16:28.511516] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.214 [2024-04-25 03:16:28.523604] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.214 [2024-04-25 03:16:28.523666] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.215 [2024-04-25 03:16:28.532902] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.215 [2024-04-25 03:16:28.532945] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.215 [2024-04-25 03:16:28.544289] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.215 [2024-04-25 03:16:28.544316] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.215 [2024-04-25 03:16:28.554453] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.215 [2024-04-25 03:16:28.554480] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.215 [2024-04-25 03:16:28.565069] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.215 [2024-04-25 03:16:28.565098] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.215 [2024-04-25 03:16:28.575999] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.215 [2024-04-25 03:16:28.576027] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:15:54.215 [2024-04-25 03:16:28.586667] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.215 [2024-04-25 03:16:28.586696] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.215 [2024-04-25 03:16:28.597459] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.215 [2024-04-25 03:16:28.597486] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.215 [2024-04-25 03:16:28.607945] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.215 [2024-04-25 03:16:28.607974] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.215 [2024-04-25 03:16:28.617826] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.215 [2024-04-25 03:16:28.617853] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.215 [2024-04-25 03:16:28.629372] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.215 [2024-04-25 03:16:28.629399] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.215 [2024-04-25 03:16:28.640381] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.215 [2024-04-25 03:16:28.640423] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.215 [2024-04-25 03:16:28.651334] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.215 [2024-04-25 03:16:28.651361] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.215 [2024-04-25 03:16:28.661318] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.215 [2024-04-25 03:16:28.661346] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.215 [2024-04-25 03:16:28.672448] 
subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.215 [2024-04-25 03:16:28.672474] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.215 [2024-04-25 03:16:28.683095] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.215 [2024-04-25 03:16:28.683122] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.215 [2024-04-25 03:16:28.694417] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.215 [2024-04-25 03:16:28.694445] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.215 [2024-04-25 03:16:28.704565] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.215 [2024-04-25 03:16:28.704593] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.474 [2024-04-25 03:16:28.715474] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.474 [2024-04-25 03:16:28.715502] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.474 [2024-04-25 03:16:28.726005] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.474 [2024-04-25 03:16:28.726033] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.474 [2024-04-25 03:16:28.736701] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.474 [2024-04-25 03:16:28.736729] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.474 [2024-04-25 03:16:28.746911] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.474 [2024-04-25 03:16:28.746939] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.474 [2024-04-25 03:16:28.758322] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:15:54.474 [2024-04-25 03:16:28.758349] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.474 [2024-04-25 03:16:28.768781] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.474 [2024-04-25 03:16:28.768809] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.474 [2024-04-25 03:16:28.779679] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.474 [2024-04-25 03:16:28.779707] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.474 [2024-04-25 03:16:28.789875] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.475 [2024-04-25 03:16:28.789904] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.475 [2024-04-25 03:16:28.800925] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.475 [2024-04-25 03:16:28.800953] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.475 [2024-04-25 03:16:28.810991] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.475 [2024-04-25 03:16:28.811019] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.475 [2024-04-25 03:16:28.822333] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.475 [2024-04-25 03:16:28.822361] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.475 [2024-04-25 03:16:28.832367] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.475 [2024-04-25 03:16:28.832394] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.475 [2024-04-25 03:16:28.843800] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.475 
[2024-04-25 03:16:28.843827] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.475 [2024-04-25 03:16:28.853695] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.475 [2024-04-25 03:16:28.853723] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.475 [2024-04-25 03:16:28.864763] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.475 [2024-04-25 03:16:28.864791] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.475 [2024-04-25 03:16:28.875333] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.475 [2024-04-25 03:16:28.875361] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.475 [2024-04-25 03:16:28.885318] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.475 [2024-04-25 03:16:28.885345] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.475 [2024-04-25 03:16:28.896375] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.475 [2024-04-25 03:16:28.896403] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.475 [2024-04-25 03:16:28.906114] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.475 [2024-04-25 03:16:28.906142] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.475 [2024-04-25 03:16:28.917134] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.475 [2024-04-25 03:16:28.917163] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.475 [2024-04-25 03:16:28.926510] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.475 [2024-04-25 03:16:28.926537] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.475 [2024-04-25 03:16:28.937697] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.475 [2024-04-25 03:16:28.937725] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.475 [2024-04-25 03:16:28.947706] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.475 [2024-04-25 03:16:28.947734] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.475 [2024-04-25 03:16:28.958691] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.475 [2024-04-25 03:16:28.958718] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.475 [2024-04-25 03:16:28.968963] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.475 [2024-04-25 03:16:28.968990] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.734 [2024-04-25 03:16:28.979783] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.734 [2024-04-25 03:16:28.979811] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.734 [2024-04-25 03:16:28.990080] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.734 [2024-04-25 03:16:28.990108] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.734 [2024-04-25 03:16:29.001263] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.734 [2024-04-25 03:16:29.001292] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.734 [2024-04-25 03:16:29.011853] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.734 [2024-04-25 03:16:29.011882] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:15:54.734 [2024-04-25 03:16:29.022257] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.734 [2024-04-25 03:16:29.022295] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.734 [2024-04-25 03:16:29.035018] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.734 [2024-04-25 03:16:29.035046] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.735 [2024-04-25 03:16:29.044200] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.735 [2024-04-25 03:16:29.044239] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.735 [2024-04-25 03:16:29.055094] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.735 [2024-04-25 03:16:29.055122] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.735 [2024-04-25 03:16:29.065709] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.735 [2024-04-25 03:16:29.065738] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.735 [2024-04-25 03:16:29.075968] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.735 [2024-04-25 03:16:29.076009] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.735 [2024-04-25 03:16:29.087202] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.735 [2024-04-25 03:16:29.087230] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.735 [2024-04-25 03:16:29.097756] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.735 [2024-04-25 03:16:29.097784] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.735 [2024-04-25 03:16:29.108358] 
subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.735 [2024-04-25 03:16:29.108394] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.735 [2024-04-25 03:16:29.119247] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.735 [2024-04-25 03:16:29.119274] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.735 [2024-04-25 03:16:29.129156] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.735 [2024-04-25 03:16:29.129184] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.735 [2024-04-25 03:16:29.140319] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.735 [2024-04-25 03:16:29.140346] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.735 [2024-04-25 03:16:29.150032] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.735 [2024-04-25 03:16:29.150060] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.735 [2024-04-25 03:16:29.161176] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.735 [2024-04-25 03:16:29.161204] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.735 [2024-04-25 03:16:29.171718] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.735 [2024-04-25 03:16:29.171746] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.735 [2024-04-25 03:16:29.182853] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.735 [2024-04-25 03:16:29.182880] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.735 [2024-04-25 03:16:29.193186] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:15:54.735 [2024-04-25 03:16:29.193214] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.735 [2024-04-25 03:16:29.203852] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.735 [2024-04-25 03:16:29.203880] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.735 [2024-04-25 03:16:29.213787] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.735 [2024-04-25 03:16:29.213814] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.735 [2024-04-25 03:16:29.224603] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.735 [2024-04-25 03:16:29.224637] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.995 [2024-04-25 03:16:29.234483] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.995 [2024-04-25 03:16:29.234513] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.995 [2024-04-25 03:16:29.245464] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.995 [2024-04-25 03:16:29.245508] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.995 [2024-04-25 03:16:29.256016] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.995 [2024-04-25 03:16:29.256044] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.995 [2024-04-25 03:16:29.266962] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.995 [2024-04-25 03:16:29.266991] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.995 [2024-04-25 03:16:29.279677] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.995 
[2024-04-25 03:16:29.279706] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.996 [2024-04-25 03:16:29.290942] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.996 [2024-04-25 03:16:29.290971] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.996 [2024-04-25 03:16:29.299748] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.996 [2024-04-25 03:16:29.299775] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.996 [2024-04-25 03:16:29.311047] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.996 [2024-04-25 03:16:29.311074] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.996 [2024-04-25 03:16:29.329379] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.996 [2024-04-25 03:16:29.329424] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.996 [2024-04-25 03:16:29.339233] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.996 [2024-04-25 03:16:29.339260] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.996 [2024-04-25 03:16:29.350397] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.996 [2024-04-25 03:16:29.350424] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.996 [2024-04-25 03:16:29.360696] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.996 [2024-04-25 03:16:29.360724] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.996 [2024-04-25 03:16:29.371823] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.996 [2024-04-25 03:16:29.371851] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.996 [2024-04-25 03:16:29.382211] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.996 [2024-04-25 03:16:29.382239] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.996 [2024-04-25 03:16:29.393261] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.996 [2024-04-25 03:16:29.393289] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.996 [2024-04-25 03:16:29.403972] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.996 [2024-04-25 03:16:29.404000] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.996 [2024-04-25 03:16:29.413767] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.996 [2024-04-25 03:16:29.413795] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.996 [2024-04-25 03:16:29.424643] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.996 [2024-04-25 03:16:29.424671] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.996 [2024-04-25 03:16:29.435384] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.996 [2024-04-25 03:16:29.435412] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.996 [2024-04-25 03:16:29.445461] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.996 [2024-04-25 03:16:29.445489] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.996 [2024-04-25 03:16:29.456505] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.996 [2024-04-25 03:16:29.456534] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:15:54.996 [2024-04-25 03:16:29.466981] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.996 [2024-04-25 03:16:29.467009] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.996 [2024-04-25 03:16:29.477273] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.996 [2024-04-25 03:16:29.477316] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.996 [2024-04-25 03:16:29.487027] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.996 [2024-04-25 03:16:29.487055] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.257 [2024-04-25 03:16:29.497623] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.257 [2024-04-25 03:16:29.497660] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.257 [2024-04-25 03:16:29.508270] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.257 [2024-04-25 03:16:29.508299] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.257 [2024-04-25 03:16:29.518172] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.257 [2024-04-25 03:16:29.518200] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.257 [2024-04-25 03:16:29.529470] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.257 [2024-04-25 03:16:29.529520] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.257 [2024-04-25 03:16:29.539340] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.257 [2024-04-25 03:16:29.539368] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.257 [2024-04-25 03:16:29.550383] 
subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.257 [2024-04-25 03:16:29.550426] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.257 [2024-04-25 03:16:29.560284] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.257 [2024-04-25 03:16:29.560312] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.257 [2024-04-25 03:16:29.571408] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.257 [2024-04-25 03:16:29.571436] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.257 [2024-04-25 03:16:29.582193] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.257 [2024-04-25 03:16:29.582220] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.257 [2024-04-25 03:16:29.592527] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.257 [2024-04-25 03:16:29.592554] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.257 [2024-04-25 03:16:29.603213] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.257 [2024-04-25 03:16:29.603240] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.257 [2024-04-25 03:16:29.613510] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.257 [2024-04-25 03:16:29.613538] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.257 [2024-04-25 03:16:29.624548] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.257 [2024-04-25 03:16:29.624577] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.257 [2024-04-25 03:16:29.634838] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:15:55.257 [2024-04-25 03:16:29.634866] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.257 [2024-04-25 03:16:29.645678] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.257 [2024-04-25 03:16:29.645706] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.257 [2024-04-25 03:16:29.656280] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.257 [2024-04-25 03:16:29.656308] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.257 [2024-04-25 03:16:29.666314] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.257 [2024-04-25 03:16:29.666341] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.257 [2024-04-25 03:16:29.677490] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.257 [2024-04-25 03:16:29.677517] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.257 [2024-04-25 03:16:29.687764] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.257 [2024-04-25 03:16:29.687792] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.257 [2024-04-25 03:16:29.698764] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.257 [2024-04-25 03:16:29.698792] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.257 [2024-04-25 03:16:29.708480] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.257 [2024-04-25 03:16:29.708508] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.257 [2024-04-25 03:16:29.719880] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.257 
[2024-04-25 03:16:29.719923] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.257 [2024-04-25 03:16:29.730703] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.257 [2024-04-25 03:16:29.730738] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.257 [2024-04-25 03:16:29.740573] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.257 [2024-04-25 03:16:29.740602] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.257 [2024-04-25 03:16:29.751600] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.257 [2024-04-25 03:16:29.751650] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.517 [2024-04-25 03:16:29.763422] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.517 [2024-04-25 03:16:29.763451] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.517 [2024-04-25 03:16:29.773940] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.517 [2024-04-25 03:16:29.773967] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.517 [2024-04-25 03:16:29.786816] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.517 [2024-04-25 03:16:29.786844] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.517 [2024-04-25 03:16:29.796188] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.517 [2024-04-25 03:16:29.796215] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.517 [2024-04-25 03:16:29.807384] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.517 [2024-04-25 03:16:29.807411] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.517 [2024-04-25 03:16:29.817787] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.517 [2024-04-25 03:16:29.817815] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.517 [2024-04-25 03:16:29.828220] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.517 [2024-04-25 03:16:29.828247] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.517 [2024-04-25 03:16:29.838417] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.517 [2024-04-25 03:16:29.838445] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.517 [2024-04-25 03:16:29.849051] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.517 [2024-04-25 03:16:29.849078] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.517 [2024-04-25 03:16:29.859367] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.517 [2024-04-25 03:16:29.859395] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.517 [2024-04-25 03:16:29.870014] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.517 [2024-04-25 03:16:29.870042] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.517 [2024-04-25 03:16:29.880254] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.517 [2024-04-25 03:16:29.880281] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.517 [2024-04-25 03:16:29.890987] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.517 [2024-04-25 03:16:29.891014] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:15:55.517 [2024-04-25 03:16:29.901836] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.517 [2024-04-25 03:16:29.901864] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.517 [2024-04-25 03:16:29.914334] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.517 [2024-04-25 03:16:29.914363] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.517 [2024-04-25 03:16:29.924090] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.517 [2024-04-25 03:16:29.924117] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.517 [2024-04-25 03:16:29.935506] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.517 [2024-04-25 03:16:29.935540] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.517 [2024-04-25 03:16:29.945871] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.517 [2024-04-25 03:16:29.945898] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.517 [2024-04-25 03:16:29.956958] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.517 [2024-04-25 03:16:29.956986] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.517 [2024-04-25 03:16:29.967589] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.517 [2024-04-25 03:16:29.967617] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.517 [2024-04-25 03:16:29.978340] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.517 [2024-04-25 03:16:29.978368] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.517 [2024-04-25 03:16:29.991197] 
subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.517 [2024-04-25 03:16:29.991225] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.517 [2024-04-25 03:16:30.000812] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.517 [2024-04-25 03:16:30.000840] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.518 [2024-04-25 03:16:30.011502] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.518 [2024-04-25 03:16:30.011532] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.778 [2024-04-25 03:16:30.022692] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.778 [2024-04-25 03:16:30.022723] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.778 [2024-04-25 03:16:30.032959] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.778 [2024-04-25 03:16:30.032989] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.778 [2024-04-25 03:16:30.044040] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.778 [2024-04-25 03:16:30.044069] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.779 [2024-04-25 03:16:30.054327] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.779 [2024-04-25 03:16:30.054368] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.779 [2024-04-25 03:16:30.064776] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.779 [2024-04-25 03:16:30.064803] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.779
00:15:55.779                                                                                                 Latency(us)
00:15:55.779 Device Information                     : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:15:55.779 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:15:55.779 Nvme1n1                                :       5.01   11865.01      92.70       0.00     0.00   10771.20    3470.98   18835.53
00:15:55.779 ===================================================================================================================
00:15:55.779 Total                                  :              11865.01      92.70       0.00     0.00   10771.20    3470.98   18835.53
00:15:55.779 [2024-04-25 03:16:30.072205] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.779 [2024-04-25 03:16:30.072232] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.779 [2024-04-25 03:16:30.080220] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.779 [2024-04-25 03:16:30.080249] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.779 [2024-04-25 03:16:30.088221] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.779 [2024-04-25 03:16:30.088247] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.779 [2024-04-25 03:16:30.096299] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.779 [2024-04-25 03:16:30.096349] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.779 [2024-04-25 03:16:30.104325] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.779 [2024-04-25 03:16:30.104381] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.779 [2024-04-25 03:16:30.112341] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.779 [2024-04-25 03:16:30.112393] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.779 [2024-04-25 03:16:30.120365] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in
use 00:15:55.779 [2024-04-25 03:16:30.120415] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.779 [2024-04-25 03:16:30.128386] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.779 [2024-04-25 03:16:30.128437] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.779 [2024-04-25 03:16:30.136420] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.779 [2024-04-25 03:16:30.136471] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.779 [2024-04-25 03:16:30.144425] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.779 [2024-04-25 03:16:30.144472] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.779 [2024-04-25 03:16:30.152448] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.779 [2024-04-25 03:16:30.152496] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.779 [2024-04-25 03:16:30.160474] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.779 [2024-04-25 03:16:30.160522] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.779 [2024-04-25 03:16:30.168496] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.779 [2024-04-25 03:16:30.168544] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.779 [2024-04-25 03:16:30.176514] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.779 [2024-04-25 03:16:30.176562] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.779 [2024-04-25 03:16:30.184538] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.779 [2024-04-25 03:16:30.184584] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.779 [2024-04-25 03:16:30.192551] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.779 [2024-04-25 03:16:30.192599] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.040 (previous two messages repeated through [2024-04-25 03:16:30.336931]) 00:15:56.040 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1491066) - No such process 00:15:56.040 03:16:30 -- target/zcopy.sh@49 -- # wait 1491066 00:15:56.040 03:16:30 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:56.040 03:16:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:56.040 03:16:30 -- common/autotest_common.sh@10 -- # set +x 00:15:56.040 03:16:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:56.040 03:16:30 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:56.040 03:16:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:56.040 03:16:30 -- common/autotest_common.sh@10 -- # set +x 00:15:56.040 delay0 00:15:56.040 03:16:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:56.040 03:16:30 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:15:56.040 03:16:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:56.040 03:16:30 -- common/autotest_common.sh@10 -- # set +x 00:15:56.040 03:16:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:56.040 03:16:30 -- target/zcopy.sh@56 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:15:56.040 EAL: No free 2048 kB hugepages reported on node 1 00:15:56.040 [2024-04-25 03:16:30.496817] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:16:02.613 Initializing NVMe Controllers 00:16:02.613 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:02.613 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:02.613 Initialization complete. Launching workers. 00:16:02.613 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 50 00:16:02.613 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 337, failed to submit 33 00:16:02.613 success 98, unsuccess 239, failed 0 00:16:02.613 03:16:36 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:16:02.613 03:16:36 -- target/zcopy.sh@60 -- # nvmftestfini 00:16:02.613 03:16:36 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:02.613 03:16:36 -- nvmf/common.sh@117 -- # sync 00:16:02.613 03:16:36 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:02.613 03:16:36 -- nvmf/common.sh@120 -- # set +e 00:16:02.613 03:16:36 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:02.613 03:16:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:02.613 rmmod nvme_tcp 00:16:02.613 rmmod nvme_fabrics 00:16:02.613 rmmod nvme_keyring 00:16:02.613 03:16:36 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:02.613 03:16:36 -- nvmf/common.sh@124 -- # set -e 00:16:02.613 03:16:36 -- nvmf/common.sh@125 -- # return 0 00:16:02.613 03:16:36 -- nvmf/common.sh@478 -- # '[' -n 1489603 ']' 00:16:02.613 03:16:36 -- nvmf/common.sh@479 -- # killprocess 1489603 00:16:02.613 03:16:36 -- common/autotest_common.sh@936 -- # '[' -z 1489603 
']' 00:16:02.613 03:16:36 -- common/autotest_common.sh@940 -- # kill -0 1489603 00:16:02.613 03:16:36 -- common/autotest_common.sh@941 -- # uname 00:16:02.613 03:16:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:02.613 03:16:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1489603 00:16:02.613 03:16:36 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:02.613 03:16:36 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:02.613 03:16:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1489603' 00:16:02.613 killing process with pid 1489603 00:16:02.613 03:16:36 -- common/autotest_common.sh@955 -- # kill 1489603 00:16:02.613 03:16:36 -- common/autotest_common.sh@960 -- # wait 1489603 00:16:02.613 03:16:36 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:02.613 03:16:36 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:02.613 03:16:36 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:02.613 03:16:36 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:02.613 03:16:36 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:02.613 03:16:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:02.613 03:16:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:02.613 03:16:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:04.522 03:16:39 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:04.522 00:16:04.522 real 0m28.683s 00:16:04.522 user 0m42.104s 00:16:04.522 sys 0m8.483s 00:16:04.522 03:16:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:04.522 03:16:39 -- common/autotest_common.sh@10 -- # set +x 00:16:04.522 ************************************ 00:16:04.522 END TEST nvmf_zcopy 00:16:04.522 ************************************ 00:16:04.781 03:16:39 -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 
00:16:04.781 03:16:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:04.781 03:16:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:04.781 03:16:39 -- common/autotest_common.sh@10 -- # set +x 00:16:04.781 ************************************ 00:16:04.781 START TEST nvmf_nmic 00:16:04.781 ************************************ 00:16:04.781 03:16:39 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:04.781 * Looking for test storage... 00:16:04.781 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:04.781 03:16:39 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:04.781 03:16:39 -- nvmf/common.sh@7 -- # uname -s 00:16:04.781 03:16:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:04.781 03:16:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:04.781 03:16:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:04.781 03:16:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:04.781 03:16:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:04.781 03:16:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:04.781 03:16:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:04.781 03:16:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:04.781 03:16:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:04.781 03:16:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:04.781 03:16:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:04.781 03:16:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:04.781 03:16:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:04.781 03:16:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:04.781 03:16:39 -- nvmf/common.sh@21 -- 
# NET_TYPE=phy 00:16:04.781 03:16:39 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:04.782 03:16:39 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:04.782 03:16:39 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:04.782 03:16:39 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:04.782 03:16:39 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:04.782 03:16:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.782 03:16:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.782 03:16:39 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.782 03:16:39 -- paths/export.sh@5 -- # export PATH 00:16:04.782 03:16:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.782 03:16:39 -- nvmf/common.sh@47 -- # : 0 00:16:04.782 03:16:39 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:04.782 03:16:39 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:04.782 03:16:39 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:04.782 03:16:39 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:04.782 03:16:39 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:04.782 03:16:39 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:04.782 03:16:39 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:04.782 03:16:39 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:04.782 03:16:39 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:04.782 03:16:39 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:04.782 03:16:39 -- target/nmic.sh@14 -- # 
nvmftestinit 00:16:04.782 03:16:39 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:04.782 03:16:39 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:04.782 03:16:39 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:04.782 03:16:39 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:04.782 03:16:39 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:04.782 03:16:39 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:04.782 03:16:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:04.782 03:16:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:04.782 03:16:39 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:04.782 03:16:39 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:04.782 03:16:39 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:04.782 03:16:39 -- common/autotest_common.sh@10 -- # set +x 00:16:06.736 03:16:41 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:06.736 03:16:41 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:06.736 03:16:41 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:06.736 03:16:41 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:06.736 03:16:41 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:06.736 03:16:41 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:06.736 03:16:41 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:06.736 03:16:41 -- nvmf/common.sh@295 -- # net_devs=() 00:16:06.736 03:16:41 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:06.736 03:16:41 -- nvmf/common.sh@296 -- # e810=() 00:16:06.736 03:16:41 -- nvmf/common.sh@296 -- # local -ga e810 00:16:06.736 03:16:41 -- nvmf/common.sh@297 -- # x722=() 00:16:06.736 03:16:41 -- nvmf/common.sh@297 -- # local -ga x722 00:16:06.736 03:16:41 -- nvmf/common.sh@298 -- # mlx=() 00:16:06.736 03:16:41 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:06.736 03:16:41 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:06.736 03:16:41 -- 
nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:06.736 03:16:41 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:06.736 03:16:41 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:06.736 03:16:41 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:06.736 03:16:41 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:06.736 03:16:41 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:06.736 03:16:41 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:06.736 03:16:41 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:06.736 03:16:41 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:06.736 03:16:41 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:06.736 03:16:41 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:06.736 03:16:41 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:06.736 03:16:41 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:06.736 03:16:41 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:06.736 03:16:41 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:06.736 03:16:41 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:06.736 03:16:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:06.736 03:16:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:06.736 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:06.736 03:16:41 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:06.736 03:16:41 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:06.736 03:16:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:06.736 03:16:41 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:06.736 03:16:41 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:06.736 03:16:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:06.736 03:16:41 -- 
nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:06.736 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:06.736 03:16:41 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:06.736 03:16:41 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:06.736 03:16:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:06.736 03:16:41 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:06.736 03:16:41 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:06.736 03:16:41 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:06.736 03:16:41 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:06.736 03:16:41 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:06.736 03:16:41 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:06.736 03:16:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:06.736 03:16:41 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:06.736 03:16:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:06.736 03:16:41 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:06.736 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:06.736 03:16:41 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:06.736 03:16:41 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:06.736 03:16:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:06.736 03:16:41 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:06.736 03:16:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:06.736 03:16:41 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:06.736 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:06.736 03:16:41 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:06.736 03:16:41 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:06.736 03:16:41 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:06.736 03:16:41 -- nvmf/common.sh@405 -- # [[ yes == yes 
]] 00:16:06.736 03:16:41 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:06.736 03:16:41 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:06.736 03:16:41 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:06.736 03:16:41 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:06.736 03:16:41 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:06.736 03:16:41 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:06.736 03:16:41 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:06.736 03:16:41 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:06.736 03:16:41 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:06.736 03:16:41 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:06.736 03:16:41 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:06.736 03:16:41 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:06.736 03:16:41 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:06.736 03:16:41 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:06.736 03:16:41 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:06.736 03:16:41 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:06.736 03:16:41 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:06.736 03:16:41 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:06.736 03:16:41 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:06.995 03:16:41 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:06.995 03:16:41 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:06.995 03:16:41 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:06.995 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:06.995 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:16:06.995 00:16:06.995 --- 10.0.0.2 ping statistics --- 00:16:06.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:06.995 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:16:06.995 03:16:41 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:06.995 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:06.995 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:16:06.995 00:16:06.995 --- 10.0.0.1 ping statistics --- 00:16:06.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:06.995 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:16:06.995 03:16:41 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:06.995 03:16:41 -- nvmf/common.sh@411 -- # return 0 00:16:06.995 03:16:41 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:06.995 03:16:41 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:06.995 03:16:41 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:06.995 03:16:41 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:06.995 03:16:41 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:06.996 03:16:41 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:06.996 03:16:41 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:06.996 03:16:41 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:16:06.996 03:16:41 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:06.996 03:16:41 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:06.996 03:16:41 -- common/autotest_common.sh@10 -- # set +x 00:16:06.996 03:16:41 -- nvmf/common.sh@470 -- # nvmfpid=1494328 00:16:06.996 03:16:41 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:06.996 03:16:41 -- nvmf/common.sh@471 -- # waitforlisten 1494328 00:16:06.996 03:16:41 -- common/autotest_common.sh@817 -- 
# '[' -z 1494328 ']' 00:16:06.996 03:16:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:06.996 03:16:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:06.996 03:16:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:06.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:06.996 03:16:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:06.996 03:16:41 -- common/autotest_common.sh@10 -- # set +x 00:16:06.996 [2024-04-25 03:16:41.342171] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:16:06.996 [2024-04-25 03:16:41.342247] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:06.996 EAL: No free 2048 kB hugepages reported on node 1 00:16:06.996 [2024-04-25 03:16:41.411597] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:07.255 [2024-04-25 03:16:41.533731] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:07.255 [2024-04-25 03:16:41.533781] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:07.255 [2024-04-25 03:16:41.533795] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:07.255 [2024-04-25 03:16:41.533807] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:07.255 [2024-04-25 03:16:41.533817] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:07.255 [2024-04-25 03:16:41.533864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:07.255 [2024-04-25 03:16:41.533888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:07.255 [2024-04-25 03:16:41.533944] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:07.255 [2024-04-25 03:16:41.533947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:07.255 03:16:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:07.255 03:16:41 -- common/autotest_common.sh@850 -- # return 0 00:16:07.255 03:16:41 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:07.255 03:16:41 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:07.255 03:16:41 -- common/autotest_common.sh@10 -- # set +x 00:16:07.255 03:16:41 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:07.255 03:16:41 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:07.255 03:16:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:07.255 03:16:41 -- common/autotest_common.sh@10 -- # set +x 00:16:07.255 [2024-04-25 03:16:41.695398] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:07.255 03:16:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:07.255 03:16:41 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:07.255 03:16:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:07.255 03:16:41 -- common/autotest_common.sh@10 -- # set +x 00:16:07.255 Malloc0 00:16:07.255 03:16:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:07.255 03:16:41 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:07.255 03:16:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:07.255 03:16:41 -- common/autotest_common.sh@10 -- # set +x 00:16:07.255 03:16:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 
]] 00:16:07.255 03:16:41 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:07.255 03:16:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:07.255 03:16:41 -- common/autotest_common.sh@10 -- # set +x 00:16:07.255 03:16:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:07.255 03:16:41 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:07.255 03:16:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:07.255 03:16:41 -- common/autotest_common.sh@10 -- # set +x 00:16:07.255 [2024-04-25 03:16:41.749235] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:07.255 03:16:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:07.255 03:16:41 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:16:07.255 test case1: single bdev can't be used in multiple subsystems 00:16:07.255 03:16:41 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:16:07.255 03:16:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:07.255 03:16:41 -- common/autotest_common.sh@10 -- # set +x 00:16:07.513 03:16:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:07.513 03:16:41 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:07.513 03:16:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:07.513 03:16:41 -- common/autotest_common.sh@10 -- # set +x 00:16:07.513 03:16:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:07.513 03:16:41 -- target/nmic.sh@28 -- # nmic_status=0 00:16:07.513 03:16:41 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:16:07.513 03:16:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:07.513 03:16:41 -- common/autotest_common.sh@10 
-- # set +x 00:16:07.513 [2024-04-25 03:16:41.773111] bdev.c:7988:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:16:07.513 [2024-04-25 03:16:41.773139] subsystem.c:1934:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:16:07.513 [2024-04-25 03:16:41.773153] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.513 request: 00:16:07.513 { 00:16:07.513 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:16:07.513 "namespace": { 00:16:07.513 "bdev_name": "Malloc0", 00:16:07.513 "no_auto_visible": false 00:16:07.513 }, 00:16:07.513 "method": "nvmf_subsystem_add_ns", 00:16:07.513 "req_id": 1 00:16:07.513 } 00:16:07.513 Got JSON-RPC error response 00:16:07.513 response: 00:16:07.513 { 00:16:07.514 "code": -32602, 00:16:07.514 "message": "Invalid parameters" 00:16:07.514 } 00:16:07.514 03:16:41 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:16:07.514 03:16:41 -- target/nmic.sh@29 -- # nmic_status=1 00:16:07.514 03:16:41 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:16:07.514 03:16:41 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:16:07.514 Adding namespace failed - expected result. 
00:16:07.514 03:16:41 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:16:07.514 test case2: host connect to nvmf target in multiple paths 00:16:07.514 03:16:41 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:07.514 03:16:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:07.514 03:16:41 -- common/autotest_common.sh@10 -- # set +x 00:16:07.514 [2024-04-25 03:16:41.781216] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:07.514 03:16:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:07.514 03:16:41 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:08.080 03:16:42 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:16:08.648 03:16:43 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:16:08.648 03:16:43 -- common/autotest_common.sh@1184 -- # local i=0 00:16:08.648 03:16:43 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:16:08.648 03:16:43 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:16:08.648 03:16:43 -- common/autotest_common.sh@1191 -- # sleep 2 00:16:11.183 03:16:45 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:16:11.183 03:16:45 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:16:11.183 03:16:45 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:16:11.183 03:16:45 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:16:11.183 03:16:45 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 
00:16:11.183 03:16:45 -- common/autotest_common.sh@1194 -- # return 0 00:16:11.183 03:16:45 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:11.183 [global] 00:16:11.183 thread=1 00:16:11.183 invalidate=1 00:16:11.183 rw=write 00:16:11.183 time_based=1 00:16:11.183 runtime=1 00:16:11.183 ioengine=libaio 00:16:11.183 direct=1 00:16:11.183 bs=4096 00:16:11.183 iodepth=1 00:16:11.183 norandommap=0 00:16:11.183 numjobs=1 00:16:11.183 00:16:11.183 verify_dump=1 00:16:11.183 verify_backlog=512 00:16:11.183 verify_state_save=0 00:16:11.183 do_verify=1 00:16:11.183 verify=crc32c-intel 00:16:11.183 [job0] 00:16:11.183 filename=/dev/nvme0n1 00:16:11.183 Could not set queue depth (nvme0n1) 00:16:11.183 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:11.183 fio-3.35 00:16:11.183 Starting 1 thread 00:16:12.123 00:16:12.123 job0: (groupid=0, jobs=1): err= 0: pid=1494965: Thu Apr 25 03:16:46 2024 00:16:12.123 read: IOPS=21, BW=87.0KiB/s (89.0kB/s)(88.0KiB/1012msec) 00:16:12.123 slat (nsec): min=6356, max=36385, avg=25209.95, stdev=9274.91 00:16:12.123 clat (usec): min=1216, max=42178, avg=39788.84, stdev=8627.39 00:16:12.123 lat (usec): min=1237, max=42184, avg=39814.05, stdev=8628.49 00:16:12.123 clat percentiles (usec): 00:16:12.123 | 1.00th=[ 1221], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:16:12.123 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:16:12.123 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:16:12.123 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:12.123 | 99.99th=[42206] 00:16:12.123 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:16:12.123 slat (nsec): min=5713, max=49536, avg=14082.47, stdev=6894.01 00:16:12.123 clat (usec): min=217, max=463, avg=247.02, stdev=33.01 00:16:12.123 lat (usec): 
min=224, max=489, avg=261.10, stdev=35.30 00:16:12.123 clat percentiles (usec): 00:16:12.123 | 1.00th=[ 221], 5.00th=[ 225], 10.00th=[ 227], 20.00th=[ 231], 00:16:12.123 | 30.00th=[ 233], 40.00th=[ 237], 50.00th=[ 239], 60.00th=[ 241], 00:16:12.123 | 70.00th=[ 245], 80.00th=[ 249], 90.00th=[ 273], 95.00th=[ 302], 00:16:12.123 | 99.00th=[ 404], 99.50th=[ 445], 99.90th=[ 465], 99.95th=[ 465], 00:16:12.123 | 99.99th=[ 465] 00:16:12.123 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:16:12.123 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:12.123 lat (usec) : 250=77.15%, 500=18.73% 00:16:12.123 lat (msec) : 2=0.19%, 50=3.93% 00:16:12.123 cpu : usr=0.69%, sys=0.30%, ctx=534, majf=0, minf=2 00:16:12.123 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:12.123 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:12.123 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:12.123 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:12.123 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:12.123 00:16:12.123 Run status group 0 (all jobs): 00:16:12.123 READ: bw=87.0KiB/s (89.0kB/s), 87.0KiB/s-87.0KiB/s (89.0kB/s-89.0kB/s), io=88.0KiB (90.1kB), run=1012-1012msec 00:16:12.123 WRITE: bw=2024KiB/s (2072kB/s), 2024KiB/s-2024KiB/s (2072kB/s-2072kB/s), io=2048KiB (2097kB), run=1012-1012msec 00:16:12.123 00:16:12.123 Disk stats (read/write): 00:16:12.123 nvme0n1: ios=69/512, merge=0/0, ticks=794/129, in_queue=923, util=92.18% 00:16:12.123 03:16:46 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:12.123 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:16:12.123 03:16:46 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:12.123 03:16:46 -- common/autotest_common.sh@1205 -- # local i=0 00:16:12.123 03:16:46 -- common/autotest_common.sh@1206 -- # 
lsblk -o NAME,SERIAL 00:16:12.123 03:16:46 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:12.123 03:16:46 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:16:12.123 03:16:46 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:12.123 03:16:46 -- common/autotest_common.sh@1217 -- # return 0 00:16:12.123 03:16:46 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:16:12.123 03:16:46 -- target/nmic.sh@53 -- # nvmftestfini 00:16:12.123 03:16:46 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:12.123 03:16:46 -- nvmf/common.sh@117 -- # sync 00:16:12.123 03:16:46 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:12.123 03:16:46 -- nvmf/common.sh@120 -- # set +e 00:16:12.123 03:16:46 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:12.123 03:16:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:12.123 rmmod nvme_tcp 00:16:12.123 rmmod nvme_fabrics 00:16:12.123 rmmod nvme_keyring 00:16:12.123 03:16:46 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:12.123 03:16:46 -- nvmf/common.sh@124 -- # set -e 00:16:12.123 03:16:46 -- nvmf/common.sh@125 -- # return 0 00:16:12.123 03:16:46 -- nvmf/common.sh@478 -- # '[' -n 1494328 ']' 00:16:12.123 03:16:46 -- nvmf/common.sh@479 -- # killprocess 1494328 00:16:12.123 03:16:46 -- common/autotest_common.sh@936 -- # '[' -z 1494328 ']' 00:16:12.123 03:16:46 -- common/autotest_common.sh@940 -- # kill -0 1494328 00:16:12.123 03:16:46 -- common/autotest_common.sh@941 -- # uname 00:16:12.123 03:16:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:12.123 03:16:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1494328 00:16:12.382 03:16:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:12.382 03:16:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:12.382 03:16:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1494328' 00:16:12.382 killing process with pid 
1494328 00:16:12.382 03:16:46 -- common/autotest_common.sh@955 -- # kill 1494328 00:16:12.382 03:16:46 -- common/autotest_common.sh@960 -- # wait 1494328 00:16:12.643 03:16:46 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:12.643 03:16:46 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:12.643 03:16:46 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:12.643 03:16:46 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:12.643 03:16:46 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:12.643 03:16:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:12.643 03:16:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:12.643 03:16:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:14.552 03:16:48 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:14.552 00:16:14.552 real 0m9.807s 00:16:14.552 user 0m22.207s 00:16:14.552 sys 0m2.260s 00:16:14.552 03:16:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:14.552 03:16:48 -- common/autotest_common.sh@10 -- # set +x 00:16:14.552 ************************************ 00:16:14.552 END TEST nvmf_nmic 00:16:14.552 ************************************ 00:16:14.552 03:16:48 -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:14.552 03:16:48 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:14.552 03:16:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:14.552 03:16:48 -- common/autotest_common.sh@10 -- # set +x 00:16:14.811 ************************************ 00:16:14.811 START TEST nvmf_fio_target 00:16:14.811 ************************************ 00:16:14.811 03:16:49 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:14.811 * Looking for test storage... 
00:16:14.811 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:14.811 03:16:49 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:14.811 03:16:49 -- nvmf/common.sh@7 -- # uname -s 00:16:14.811 03:16:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:14.811 03:16:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:14.811 03:16:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:14.811 03:16:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:14.811 03:16:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:14.811 03:16:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:14.811 03:16:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:14.811 03:16:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:14.811 03:16:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:14.811 03:16:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:14.811 03:16:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:14.811 03:16:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:14.811 03:16:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:14.811 03:16:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:14.811 03:16:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:14.811 03:16:49 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:14.811 03:16:49 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:14.811 03:16:49 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:14.811 03:16:49 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:14.811 03:16:49 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:14.811 03:16:49 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.812 03:16:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.812 03:16:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.812 03:16:49 -- paths/export.sh@5 -- # export PATH 00:16:14.812 03:16:49 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.812 03:16:49 -- nvmf/common.sh@47 -- # : 0 00:16:14.812 03:16:49 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:14.812 03:16:49 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:14.812 03:16:49 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:14.812 03:16:49 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:14.812 03:16:49 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:14.812 03:16:49 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:14.812 03:16:49 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:14.812 03:16:49 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:14.812 03:16:49 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:14.812 03:16:49 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:14.812 03:16:49 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:14.812 03:16:49 -- target/fio.sh@16 -- # nvmftestinit 00:16:14.812 03:16:49 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:14.812 03:16:49 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:14.812 03:16:49 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:14.812 03:16:49 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:14.812 03:16:49 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:14.812 03:16:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:14.812 03:16:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:16:14.812 03:16:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:14.812 03:16:49 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:14.812 03:16:49 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:14.812 03:16:49 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:14.812 03:16:49 -- common/autotest_common.sh@10 -- # set +x 00:16:16.714 03:16:51 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:16.714 03:16:51 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:16.714 03:16:51 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:16.714 03:16:51 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:16.714 03:16:51 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:16.714 03:16:51 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:16.714 03:16:51 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:16.714 03:16:51 -- nvmf/common.sh@295 -- # net_devs=() 00:16:16.714 03:16:51 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:16.714 03:16:51 -- nvmf/common.sh@296 -- # e810=() 00:16:16.714 03:16:51 -- nvmf/common.sh@296 -- # local -ga e810 00:16:16.714 03:16:51 -- nvmf/common.sh@297 -- # x722=() 00:16:16.714 03:16:51 -- nvmf/common.sh@297 -- # local -ga x722 00:16:16.714 03:16:51 -- nvmf/common.sh@298 -- # mlx=() 00:16:16.714 03:16:51 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:16.714 03:16:51 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:16.714 03:16:51 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:16.714 03:16:51 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:16.714 03:16:51 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:16.714 03:16:51 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:16.714 03:16:51 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:16.714 03:16:51 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:16.714 03:16:51 -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:16.714 03:16:51 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:16.714 03:16:51 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:16.714 03:16:51 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:16.714 03:16:51 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:16.714 03:16:51 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:16.714 03:16:51 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:16.714 03:16:51 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:16.714 03:16:51 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:16.714 03:16:51 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:16.714 03:16:51 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:16.714 03:16:51 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:16.714 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:16.714 03:16:51 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:16.714 03:16:51 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:16.714 03:16:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:16.714 03:16:51 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:16.714 03:16:51 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:16.714 03:16:51 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:16.714 03:16:51 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:16.714 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:16.714 03:16:51 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:16.714 03:16:51 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:16.714 03:16:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:16.714 03:16:51 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:16.714 03:16:51 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:16.714 03:16:51 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:16.714 
03:16:51 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:16.714 03:16:51 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:16.714 03:16:51 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:16.714 03:16:51 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:16.714 03:16:51 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:16.714 03:16:51 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:16.714 03:16:51 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:16.714 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:16.714 03:16:51 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:16.714 03:16:51 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:16.714 03:16:51 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:16.714 03:16:51 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:16.714 03:16:51 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:16.714 03:16:51 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:16.714 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:16.714 03:16:51 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:16.714 03:16:51 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:16.714 03:16:51 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:16.714 03:16:51 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:16.714 03:16:51 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:16.714 03:16:51 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:16.714 03:16:51 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:16.714 03:16:51 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:16.714 03:16:51 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:16.714 03:16:51 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:16.714 03:16:51 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:16.714 03:16:51 -- 
nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:16.714 03:16:51 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:16.714 03:16:51 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:16.714 03:16:51 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:16.714 03:16:51 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:16.714 03:16:51 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:16.714 03:16:51 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:16.714 03:16:51 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:16.714 03:16:51 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:16.714 03:16:51 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:16.714 03:16:51 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:16.714 03:16:51 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:16.714 03:16:51 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:16.714 03:16:51 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:16.714 03:16:51 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:16.714 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:16.714 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:16:16.714 00:16:16.714 --- 10.0.0.2 ping statistics --- 00:16:16.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:16.714 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:16:16.714 03:16:51 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:16.714 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:16.714 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:16:16.714 00:16:16.714 --- 10.0.0.1 ping statistics --- 00:16:16.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:16.714 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:16:16.714 03:16:51 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:16.714 03:16:51 -- nvmf/common.sh@411 -- # return 0 00:16:16.714 03:16:51 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:16.714 03:16:51 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:16.714 03:16:51 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:16.714 03:16:51 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:16.714 03:16:51 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:16.714 03:16:51 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:16.715 03:16:51 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:16.975 03:16:51 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:16:16.975 03:16:51 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:16.975 03:16:51 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:16.975 03:16:51 -- common/autotest_common.sh@10 -- # set +x 00:16:16.975 03:16:51 -- nvmf/common.sh@470 -- # nvmfpid=1497041 00:16:16.975 03:16:51 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:16.975 03:16:51 -- nvmf/common.sh@471 -- # waitforlisten 1497041 00:16:16.975 03:16:51 -- common/autotest_common.sh@817 -- # '[' -z 1497041 ']' 00:16:16.975 03:16:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:16.975 03:16:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:16.975 03:16:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:16.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:16.975 03:16:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:16.975 03:16:51 -- common/autotest_common.sh@10 -- # set +x 00:16:16.975 [2024-04-25 03:16:51.263759] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:16:16.975 [2024-04-25 03:16:51.263839] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:16.975 EAL: No free 2048 kB hugepages reported on node 1 00:16:16.975 [2024-04-25 03:16:51.328707] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:16.975 [2024-04-25 03:16:51.442123] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:16.975 [2024-04-25 03:16:51.442179] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:16.975 [2024-04-25 03:16:51.442193] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:16.975 [2024-04-25 03:16:51.442205] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:16.975 [2024-04-25 03:16:51.442222] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:16.975 [2024-04-25 03:16:51.442295] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:16.975 [2024-04-25 03:16:51.442354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:16.975 [2024-04-25 03:16:51.442420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:16.975 [2024-04-25 03:16:51.442423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:17.234 03:16:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:17.234 03:16:51 -- common/autotest_common.sh@850 -- # return 0 00:16:17.234 03:16:51 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:17.234 03:16:51 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:17.234 03:16:51 -- common/autotest_common.sh@10 -- # set +x 00:16:17.234 03:16:51 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:17.234 03:16:51 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:17.493 [2024-04-25 03:16:51.868399] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:17.493 03:16:51 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:17.752 03:16:52 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:16:17.752 03:16:52 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:18.011 03:16:52 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:16:18.011 03:16:52 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:18.277 03:16:52 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:16:18.277 03:16:52 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:18.534 03:16:53 -- target/fio.sh@25 -- # 
raid_malloc_bdevs+=Malloc3 00:16:18.534 03:16:53 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:16:18.792 03:16:53 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:19.050 03:16:53 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:16:19.050 03:16:53 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:19.334 03:16:53 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:16:19.334 03:16:53 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:19.600 03:16:54 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:16:19.600 03:16:54 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:16:19.857 03:16:54 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:20.114 03:16:54 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:20.115 03:16:54 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:20.373 03:16:54 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:20.373 03:16:54 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:20.631 03:16:54 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:20.889 [2024-04-25 03:16:55.235588] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:20.889 03:16:55 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:16:21.146 03:16:55 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:16:21.405 03:16:55 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:21.970 03:16:56 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:16:21.970 03:16:56 -- common/autotest_common.sh@1184 -- # local i=0 00:16:21.970 03:16:56 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:16:21.970 03:16:56 -- common/autotest_common.sh@1186 -- # [[ -n 4 ]] 00:16:21.970 03:16:56 -- common/autotest_common.sh@1187 -- # nvme_device_counter=4 00:16:21.970 03:16:56 -- common/autotest_common.sh@1191 -- # sleep 2 00:16:24.503 03:16:58 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:16:24.503 03:16:58 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:16:24.503 03:16:58 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:16:24.503 03:16:58 -- common/autotest_common.sh@1193 -- # nvme_devices=4 00:16:24.503 03:16:58 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:16:24.504 03:16:58 -- common/autotest_common.sh@1194 -- # return 0 00:16:24.504 03:16:58 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:24.504 [global] 00:16:24.504 thread=1 00:16:24.504 invalidate=1 00:16:24.504 rw=write 00:16:24.504 time_based=1 00:16:24.504 runtime=1 00:16:24.504 ioengine=libaio 00:16:24.504 direct=1 00:16:24.504 bs=4096 00:16:24.504 
iodepth=1 00:16:24.504 norandommap=0 00:16:24.504 numjobs=1 00:16:24.504 00:16:24.504 verify_dump=1 00:16:24.504 verify_backlog=512 00:16:24.504 verify_state_save=0 00:16:24.504 do_verify=1 00:16:24.504 verify=crc32c-intel 00:16:24.504 [job0] 00:16:24.504 filename=/dev/nvme0n1 00:16:24.504 [job1] 00:16:24.504 filename=/dev/nvme0n2 00:16:24.504 [job2] 00:16:24.504 filename=/dev/nvme0n3 00:16:24.504 [job3] 00:16:24.504 filename=/dev/nvme0n4 00:16:24.504 Could not set queue depth (nvme0n1) 00:16:24.504 Could not set queue depth (nvme0n2) 00:16:24.504 Could not set queue depth (nvme0n3) 00:16:24.504 Could not set queue depth (nvme0n4) 00:16:24.504 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:24.504 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:24.504 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:24.504 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:24.504 fio-3.35 00:16:24.504 Starting 4 threads 00:16:25.441 00:16:25.441 job0: (groupid=0, jobs=1): err= 0: pid=1498110: Thu Apr 25 03:16:59 2024 00:16:25.441 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:16:25.441 slat (nsec): min=5738, max=61251, avg=9803.31, stdev=5145.63 00:16:25.441 clat (usec): min=412, max=710, avg=478.00, stdev=36.49 00:16:25.441 lat (usec): min=420, max=718, avg=487.80, stdev=37.35 00:16:25.441 clat percentiles (usec): 00:16:25.441 | 1.00th=[ 424], 5.00th=[ 433], 10.00th=[ 437], 20.00th=[ 445], 00:16:25.441 | 30.00th=[ 453], 40.00th=[ 461], 50.00th=[ 474], 60.00th=[ 486], 00:16:25.441 | 70.00th=[ 498], 80.00th=[ 506], 90.00th=[ 519], 95.00th=[ 537], 00:16:25.441 | 99.00th=[ 586], 99.50th=[ 603], 99.90th=[ 635], 99.95th=[ 709], 00:16:25.441 | 99.99th=[ 709] 00:16:25.441 write: IOPS=1400, BW=5602KiB/s (5737kB/s)(5608KiB/1001msec); 0 
zone resets 00:16:25.441 slat (nsec): min=7587, max=85257, avg=15679.98, stdev=8718.32 00:16:25.441 clat (usec): min=280, max=969, avg=335.66, stdev=46.18 00:16:25.441 lat (usec): min=289, max=1002, avg=351.34, stdev=49.04 00:16:25.441 clat percentiles (usec): 00:16:25.441 | 1.00th=[ 289], 5.00th=[ 293], 10.00th=[ 297], 20.00th=[ 302], 00:16:25.441 | 30.00th=[ 306], 40.00th=[ 314], 50.00th=[ 322], 60.00th=[ 330], 00:16:25.441 | 70.00th=[ 343], 80.00th=[ 383], 90.00th=[ 396], 95.00th=[ 404], 00:16:25.441 | 99.00th=[ 453], 99.50th=[ 478], 99.90th=[ 955], 99.95th=[ 971], 00:16:25.441 | 99.99th=[ 971] 00:16:25.441 bw ( KiB/s): min= 5776, max= 5776, per=38.72%, avg=5776.00, stdev= 0.00, samples=1 00:16:25.441 iops : min= 1444, max= 1444, avg=1444.00, stdev= 0.00, samples=1 00:16:25.441 lat (usec) : 500=88.62%, 750=11.29%, 1000=0.08% 00:16:25.441 cpu : usr=2.50%, sys=4.00%, ctx=2427, majf=0, minf=1 00:16:25.441 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:25.441 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:25.441 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:25.441 issued rwts: total=1024,1402,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:25.441 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:25.441 job1: (groupid=0, jobs=1): err= 0: pid=1498111: Thu Apr 25 03:16:59 2024 00:16:25.441 read: IOPS=19, BW=76.9KiB/s (78.8kB/s)(80.0KiB/1040msec) 00:16:25.441 slat (nsec): min=7910, max=18067, avg=13642.85, stdev=1861.77 00:16:25.441 clat (usec): min=40886, max=41057, avg=40984.72, stdev=38.37 00:16:25.441 lat (usec): min=40900, max=41065, avg=40998.36, stdev=37.85 00:16:25.441 clat percentiles (usec): 00:16:25.441 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:16:25.441 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:16:25.441 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:16:25.441 | 
99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:16:25.441 | 99.99th=[41157] 00:16:25.441 write: IOPS=492, BW=1969KiB/s (2016kB/s)(2048KiB/1040msec); 0 zone resets 00:16:25.441 slat (nsec): min=6584, max=63888, avg=19464.33, stdev=12218.03 00:16:25.441 clat (usec): min=226, max=920, avg=403.78, stdev=109.44 00:16:25.441 lat (usec): min=234, max=972, avg=423.25, stdev=112.57 00:16:25.441 clat percentiles (usec): 00:16:25.441 | 1.00th=[ 233], 5.00th=[ 245], 10.00th=[ 273], 20.00th=[ 302], 00:16:25.441 | 30.00th=[ 338], 40.00th=[ 371], 50.00th=[ 396], 60.00th=[ 424], 00:16:25.441 | 70.00th=[ 449], 80.00th=[ 486], 90.00th=[ 537], 95.00th=[ 603], 00:16:25.441 | 99.00th=[ 709], 99.50th=[ 725], 99.90th=[ 922], 99.95th=[ 922], 00:16:25.441 | 99.99th=[ 922] 00:16:25.441 bw ( KiB/s): min= 4096, max= 4096, per=27.45%, avg=4096.00, stdev= 0.00, samples=1 00:16:25.441 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:25.441 lat (usec) : 250=5.45%, 500=73.68%, 750=16.73%, 1000=0.38% 00:16:25.441 lat (msec) : 50=3.76% 00:16:25.441 cpu : usr=0.48%, sys=1.35%, ctx=533, majf=0, minf=1 00:16:25.441 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:25.441 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:25.441 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:25.441 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:25.441 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:25.441 job2: (groupid=0, jobs=1): err= 0: pid=1498114: Thu Apr 25 03:16:59 2024 00:16:25.441 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:16:25.441 slat (nsec): min=5779, max=49460, avg=8947.35, stdev=4687.45 00:16:25.441 clat (usec): min=458, max=1412, avg=502.34, stdev=45.44 00:16:25.441 lat (usec): min=464, max=1419, avg=511.29, stdev=46.19 00:16:25.441 clat percentiles (usec): 00:16:25.441 | 1.00th=[ 469], 5.00th=[ 478], 10.00th=[ 482], 
20.00th=[ 486], 00:16:25.441 | 30.00th=[ 490], 40.00th=[ 494], 50.00th=[ 498], 60.00th=[ 502], 00:16:25.441 | 70.00th=[ 506], 80.00th=[ 510], 90.00th=[ 523], 95.00th=[ 529], 00:16:25.441 | 99.00th=[ 570], 99.50th=[ 594], 99.90th=[ 1106], 99.95th=[ 1418], 00:16:25.441 | 99.99th=[ 1418] 00:16:25.441 write: IOPS=1451, BW=5806KiB/s (5946kB/s)(5812KiB/1001msec); 0 zone resets 00:16:25.441 slat (nsec): min=7705, max=76349, avg=15240.79, stdev=10283.44 00:16:25.441 clat (usec): min=229, max=976, avg=307.24, stdev=75.10 00:16:25.441 lat (usec): min=237, max=994, avg=322.49, stdev=80.49 00:16:25.441 clat percentiles (usec): 00:16:25.441 | 1.00th=[ 237], 5.00th=[ 247], 10.00th=[ 253], 20.00th=[ 262], 00:16:25.441 | 30.00th=[ 265], 40.00th=[ 273], 50.00th=[ 281], 60.00th=[ 289], 00:16:25.441 | 70.00th=[ 306], 80.00th=[ 334], 90.00th=[ 416], 95.00th=[ 465], 00:16:25.441 | 99.00th=[ 586], 99.50th=[ 660], 99.90th=[ 848], 99.95th=[ 979], 00:16:25.441 | 99.99th=[ 979] 00:16:25.441 bw ( KiB/s): min= 5152, max= 5152, per=34.53%, avg=5152.00, stdev= 0.00, samples=1 00:16:25.441 iops : min= 1288, max= 1288, avg=1288.00, stdev= 0.00, samples=1 00:16:25.441 lat (usec) : 250=3.92%, 500=77.23%, 750=18.57%, 1000=0.12% 00:16:25.441 lat (msec) : 2=0.16% 00:16:25.441 cpu : usr=2.10%, sys=4.20%, ctx=2479, majf=0, minf=1 00:16:25.441 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:25.441 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:25.441 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:25.441 issued rwts: total=1024,1453,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:25.441 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:25.441 job3: (groupid=0, jobs=1): err= 0: pid=1498115: Thu Apr 25 03:16:59 2024 00:16:25.441 read: IOPS=32, BW=131KiB/s (134kB/s)(132KiB/1005msec) 00:16:25.441 slat (nsec): min=6634, max=30597, avg=14981.36, stdev=5861.22 00:16:25.441 clat (usec): min=443, max=42372, 
avg=24245.85, stdev=20696.82 00:16:25.441 lat (usec): min=457, max=42385, avg=24260.83, stdev=20694.91 00:16:25.442 clat percentiles (usec): 00:16:25.442 | 1.00th=[ 445], 5.00th=[ 457], 10.00th=[ 465], 20.00th=[ 494], 00:16:25.442 | 30.00th=[ 537], 40.00th=[ 586], 50.00th=[41157], 60.00th=[41681], 00:16:25.442 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:16:25.442 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:25.442 | 99.99th=[42206] 00:16:25.442 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:16:25.442 slat (nsec): min=6529, max=69541, avg=18996.99, stdev=11484.51 00:16:25.442 clat (usec): min=234, max=1468, avg=374.04, stdev=98.68 00:16:25.442 lat (usec): min=244, max=1479, avg=393.04, stdev=100.25 00:16:25.442 clat percentiles (usec): 00:16:25.442 | 1.00th=[ 247], 5.00th=[ 269], 10.00th=[ 281], 20.00th=[ 302], 00:16:25.442 | 30.00th=[ 326], 40.00th=[ 351], 50.00th=[ 363], 60.00th=[ 375], 00:16:25.442 | 70.00th=[ 396], 80.00th=[ 429], 90.00th=[ 465], 95.00th=[ 498], 00:16:25.442 | 99.00th=[ 668], 99.50th=[ 955], 99.90th=[ 1467], 99.95th=[ 1467], 00:16:25.442 | 99.99th=[ 1467] 00:16:25.442 bw ( KiB/s): min= 4096, max= 4096, per=27.45%, avg=4096.00, stdev= 0.00, samples=1 00:16:25.442 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:25.442 lat (usec) : 250=1.28%, 500=89.36%, 750=4.95%, 1000=0.55% 00:16:25.442 lat (msec) : 2=0.37%, 50=3.49% 00:16:25.442 cpu : usr=0.50%, sys=1.20%, ctx=546, majf=0, minf=2 00:16:25.442 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:25.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:25.442 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:25.442 issued rwts: total=33,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:25.442 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:25.442 00:16:25.442 Run status group 0 (all jobs): 
00:16:25.442 READ: bw=8081KiB/s (8275kB/s), 76.9KiB/s-4092KiB/s (78.8kB/s-4190kB/s), io=8404KiB (8606kB), run=1001-1040msec 00:16:25.442 WRITE: bw=14.6MiB/s (15.3MB/s), 1969KiB/s-5806KiB/s (2016kB/s-5946kB/s), io=15.2MiB (15.9MB), run=1001-1040msec 00:16:25.442 00:16:25.442 Disk stats (read/write): 00:16:25.442 nvme0n1: ios=1032/1024, merge=0/0, ticks=629/335, in_queue=964, util=85.37% 00:16:25.442 nvme0n2: ios=37/512, merge=0/0, ticks=1506/195, in_queue=1701, util=89.32% 00:16:25.442 nvme0n3: ios=1035/1024, merge=0/0, ticks=956/303, in_queue=1259, util=93.42% 00:16:25.442 nvme0n4: ios=78/512, merge=0/0, ticks=763/187, in_queue=950, util=95.79% 00:16:25.442 03:16:59 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:16:25.442 [global] 00:16:25.442 thread=1 00:16:25.442 invalidate=1 00:16:25.442 rw=randwrite 00:16:25.442 time_based=1 00:16:25.442 runtime=1 00:16:25.442 ioengine=libaio 00:16:25.442 direct=1 00:16:25.442 bs=4096 00:16:25.442 iodepth=1 00:16:25.442 norandommap=0 00:16:25.442 numjobs=1 00:16:25.442 00:16:25.442 verify_dump=1 00:16:25.442 verify_backlog=512 00:16:25.442 verify_state_save=0 00:16:25.442 do_verify=1 00:16:25.442 verify=crc32c-intel 00:16:25.442 [job0] 00:16:25.442 filename=/dev/nvme0n1 00:16:25.442 [job1] 00:16:25.442 filename=/dev/nvme0n2 00:16:25.442 [job2] 00:16:25.442 filename=/dev/nvme0n3 00:16:25.442 [job3] 00:16:25.442 filename=/dev/nvme0n4 00:16:25.701 Could not set queue depth (nvme0n1) 00:16:25.701 Could not set queue depth (nvme0n2) 00:16:25.701 Could not set queue depth (nvme0n3) 00:16:25.701 Could not set queue depth (nvme0n4) 00:16:25.701 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:25.701 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:25.701 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, 
(T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:25.701 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:25.701 fio-3.35 00:16:25.701 Starting 4 threads 00:16:27.079 00:16:27.079 job0: (groupid=0, jobs=1): err= 0: pid=1498346: Thu Apr 25 03:17:01 2024 00:16:27.079 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:16:27.079 slat (nsec): min=5635, max=57721, avg=16725.91, stdev=8460.50 00:16:27.079 clat (usec): min=377, max=41384, avg=555.35, stdev=1277.70 00:16:27.079 lat (usec): min=385, max=41396, avg=572.07, stdev=1277.59 00:16:27.079 clat percentiles (usec): 00:16:27.079 | 1.00th=[ 420], 5.00th=[ 457], 10.00th=[ 474], 20.00th=[ 490], 00:16:27.079 | 30.00th=[ 502], 40.00th=[ 506], 50.00th=[ 515], 60.00th=[ 523], 00:16:27.079 | 70.00th=[ 529], 80.00th=[ 545], 90.00th=[ 562], 95.00th=[ 578], 00:16:27.079 | 99.00th=[ 627], 99.50th=[ 660], 99.90th=[ 717], 99.95th=[41157], 00:16:27.079 | 99.99th=[41157] 00:16:27.079 write: IOPS=1378, BW=5514KiB/s (5647kB/s)(5520KiB/1001msec); 0 zone resets 00:16:27.079 slat (nsec): min=5870, max=68910, avg=13969.19, stdev=7963.07 00:16:27.080 clat (usec): min=224, max=702, avg=278.93, stdev=71.37 00:16:27.080 lat (usec): min=237, max=721, avg=292.90, stdev=74.29 00:16:27.080 clat percentiles (usec): 00:16:27.080 | 1.00th=[ 233], 5.00th=[ 237], 10.00th=[ 239], 20.00th=[ 243], 00:16:27.080 | 30.00th=[ 245], 40.00th=[ 249], 50.00th=[ 251], 60.00th=[ 255], 00:16:27.080 | 70.00th=[ 262], 80.00th=[ 281], 90.00th=[ 383], 95.00th=[ 445], 00:16:27.080 | 99.00th=[ 570], 99.50th=[ 603], 99.90th=[ 668], 99.95th=[ 701], 00:16:27.080 | 99.99th=[ 701] 00:16:27.080 bw ( KiB/s): min= 4208, max= 4208, per=24.32%, avg=4208.00, stdev= 0.00, samples=1 00:16:27.080 iops : min= 1052, max= 1052, avg=1052.00, stdev= 0.00, samples=1 00:16:27.080 lat (usec) : 250=26.62%, 500=41.85%, 750=31.49% 00:16:27.080 lat (msec) : 50=0.04% 00:16:27.080 cpu : usr=1.60%, sys=4.10%, ctx=2405, 
majf=0, minf=1 00:16:27.080 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:27.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:27.080 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:27.080 issued rwts: total=1024,1380,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:27.080 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:27.080 job1: (groupid=0, jobs=1): err= 0: pid=1498347: Thu Apr 25 03:17:01 2024 00:16:27.080 read: IOPS=1045, BW=4184KiB/s (4284kB/s)(4188KiB/1001msec) 00:16:27.080 slat (nsec): min=6039, max=43451, avg=10504.26, stdev=5505.73 00:16:27.080 clat (usec): min=362, max=775, avg=479.61, stdev=63.36 00:16:27.080 lat (usec): min=371, max=808, avg=490.12, stdev=64.43 00:16:27.080 clat percentiles (usec): 00:16:27.080 | 1.00th=[ 371], 5.00th=[ 388], 10.00th=[ 396], 20.00th=[ 408], 00:16:27.080 | 30.00th=[ 437], 40.00th=[ 478], 50.00th=[ 490], 60.00th=[ 498], 00:16:27.080 | 70.00th=[ 510], 80.00th=[ 529], 90.00th=[ 553], 95.00th=[ 586], 00:16:27.080 | 99.00th=[ 644], 99.50th=[ 660], 99.90th=[ 775], 99.95th=[ 775], 00:16:27.080 | 99.99th=[ 775] 00:16:27.080 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:16:27.080 slat (nsec): min=7363, max=70394, avg=18129.57, stdev=9475.39 00:16:27.080 clat (usec): min=233, max=1120, avg=292.05, stdev=59.88 00:16:27.080 lat (usec): min=241, max=1131, avg=310.18, stdev=62.41 00:16:27.080 clat percentiles (usec): 00:16:27.080 | 1.00th=[ 239], 5.00th=[ 245], 10.00th=[ 249], 20.00th=[ 255], 00:16:27.080 | 30.00th=[ 265], 40.00th=[ 273], 50.00th=[ 281], 60.00th=[ 285], 00:16:27.080 | 70.00th=[ 289], 80.00th=[ 302], 90.00th=[ 355], 95.00th=[ 416], 00:16:27.080 | 99.00th=[ 494], 99.50th=[ 523], 99.90th=[ 865], 99.95th=[ 1123], 00:16:27.080 | 99.99th=[ 1123] 00:16:27.080 bw ( KiB/s): min= 6552, max= 6552, per=37.86%, avg=6552.00, stdev= 0.00, samples=1 00:16:27.080 iops : min= 1638, max= 1638, 
avg=1638.00, stdev= 0.00, samples=1 00:16:27.080 lat (usec) : 250=7.98%, 500=75.49%, 750=16.26%, 1000=0.23% 00:16:27.080 lat (msec) : 2=0.04% 00:16:27.080 cpu : usr=3.80%, sys=4.10%, ctx=2584, majf=0, minf=2 00:16:27.080 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:27.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:27.080 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:27.080 issued rwts: total=1047,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:27.080 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:27.080 job2: (groupid=0, jobs=1): err= 0: pid=1498348: Thu Apr 25 03:17:01 2024 00:16:27.080 read: IOPS=679, BW=2717KiB/s (2782kB/s)(2796KiB/1029msec) 00:16:27.080 slat (nsec): min=6942, max=36522, avg=14867.70, stdev=4855.11 00:16:27.080 clat (usec): min=452, max=42400, avg=990.91, stdev=4116.85 00:16:27.080 lat (usec): min=465, max=42414, avg=1005.77, stdev=4117.99 00:16:27.080 clat percentiles (usec): 00:16:27.080 | 1.00th=[ 474], 5.00th=[ 498], 10.00th=[ 510], 20.00th=[ 519], 00:16:27.080 | 30.00th=[ 529], 40.00th=[ 537], 50.00th=[ 562], 60.00th=[ 586], 00:16:27.080 | 70.00th=[ 603], 80.00th=[ 635], 90.00th=[ 685], 95.00th=[ 717], 00:16:27.080 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:27.080 | 99.99th=[42206] 00:16:27.080 write: IOPS=995, BW=3981KiB/s (4076kB/s)(4096KiB/1029msec); 0 zone resets 00:16:27.080 slat (nsec): min=6865, max=61562, avg=14777.06, stdev=10314.78 00:16:27.080 clat (usec): min=231, max=1825, avg=296.62, stdev=77.77 00:16:27.080 lat (usec): min=239, max=1833, avg=311.40, stdev=80.97 00:16:27.080 clat percentiles (usec): 00:16:27.080 | 1.00th=[ 233], 5.00th=[ 237], 10.00th=[ 241], 20.00th=[ 247], 00:16:27.080 | 30.00th=[ 251], 40.00th=[ 260], 50.00th=[ 273], 60.00th=[ 289], 00:16:27.080 | 70.00th=[ 310], 80.00th=[ 347], 90.00th=[ 388], 95.00th=[ 424], 00:16:27.080 | 99.00th=[ 490], 99.50th=[ 515], 
99.90th=[ 578], 99.95th=[ 1827], 00:16:27.080 | 99.99th=[ 1827] 00:16:27.080 bw ( KiB/s): min= 3912, max= 4280, per=23.67%, avg=4096.00, stdev=260.22, samples=2 00:16:27.080 iops : min= 978, max= 1070, avg=1024.00, stdev=65.05, samples=2 00:16:27.080 lat (usec) : 250=15.84%, 500=45.62%, 750=37.38%, 1000=0.70% 00:16:27.080 lat (msec) : 2=0.06%, 50=0.41% 00:16:27.080 cpu : usr=0.88%, sys=2.92%, ctx=1724, majf=0, minf=1 00:16:27.080 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:27.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:27.080 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:27.080 issued rwts: total=699,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:27.080 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:27.080 job3: (groupid=0, jobs=1): err= 0: pid=1498349: Thu Apr 25 03:17:01 2024 00:16:27.080 read: IOPS=22, BW=90.2KiB/s (92.4kB/s)(92.0KiB/1020msec) 00:16:27.080 slat (nsec): min=13023, max=36058, avg=19675.22, stdev=8024.26 00:16:27.080 clat (usec): min=524, max=41655, avg=35772.48, stdev=13880.31 00:16:27.080 lat (usec): min=554, max=41669, avg=35792.16, stdev=13880.31 00:16:27.080 clat percentiles (usec): 00:16:27.080 | 1.00th=[ 529], 5.00th=[ 627], 10.00th=[ 1029], 20.00th=[40633], 00:16:27.080 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:16:27.080 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:16:27.080 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:16:27.080 | 99.99th=[41681] 00:16:27.080 write: IOPS=501, BW=2008KiB/s (2056kB/s)(2048KiB/1020msec); 0 zone resets 00:16:27.080 slat (nsec): min=8231, max=67195, avg=22013.01, stdev=11351.44 00:16:27.080 clat (usec): min=237, max=1246, avg=356.22, stdev=90.73 00:16:27.080 lat (usec): min=249, max=1275, avg=378.23, stdev=91.51 00:16:27.080 clat percentiles (usec): 00:16:27.080 | 1.00th=[ 251], 5.00th=[ 265], 10.00th=[ 
273], 20.00th=[ 285], 00:16:27.080 | 30.00th=[ 302], 40.00th=[ 314], 50.00th=[ 326], 60.00th=[ 371], 00:16:27.080 | 70.00th=[ 396], 80.00th=[ 420], 90.00th=[ 445], 95.00th=[ 498], 00:16:27.080 | 99.00th=[ 562], 99.50th=[ 816], 99.90th=[ 1254], 99.95th=[ 1254], 00:16:27.080 | 99.99th=[ 1254] 00:16:27.080 bw ( KiB/s): min= 4096, max= 4096, per=23.67%, avg=4096.00, stdev= 0.00, samples=1 00:16:27.080 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:27.080 lat (usec) : 250=0.75%, 500=90.28%, 750=4.49%, 1000=0.37% 00:16:27.080 lat (msec) : 2=0.37%, 50=3.74% 00:16:27.080 cpu : usr=0.49%, sys=1.57%, ctx=537, majf=0, minf=1 00:16:27.080 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:27.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:27.080 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:27.080 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:27.080 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:27.080 00:16:27.080 Run status group 0 (all jobs): 00:16:27.080 READ: bw=10.6MiB/s (11.1MB/s), 90.2KiB/s-4184KiB/s (92.4kB/s-4284kB/s), io=10.9MiB (11.4MB), run=1001-1029msec 00:16:27.080 WRITE: bw=16.9MiB/s (17.7MB/s), 2008KiB/s-6138KiB/s (2056kB/s-6285kB/s), io=17.4MiB (18.2MB), run=1001-1029msec 00:16:27.080 00:16:27.080 Disk stats (read/write): 00:16:27.080 nvme0n1: ios=973/1024, merge=0/0, ticks=1445/283, in_queue=1728, util=98.70% 00:16:27.080 nvme0n2: ios=1048/1098, merge=0/0, ticks=1459/302, in_queue=1761, util=98.17% 00:16:27.080 nvme0n3: ios=744/1024, merge=0/0, ticks=755/296, in_queue=1051, util=98.75% 00:16:27.080 nvme0n4: ios=68/512, merge=0/0, ticks=838/176, in_queue=1014, util=99.47% 00:16:27.080 03:17:01 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:16:27.080 [global] 00:16:27.080 thread=1 00:16:27.080 invalidate=1 
00:16:27.080 rw=write 00:16:27.080 time_based=1 00:16:27.080 runtime=1 00:16:27.080 ioengine=libaio 00:16:27.080 direct=1 00:16:27.080 bs=4096 00:16:27.080 iodepth=128 00:16:27.080 norandommap=0 00:16:27.080 numjobs=1 00:16:27.080 00:16:27.080 verify_dump=1 00:16:27.080 verify_backlog=512 00:16:27.080 verify_state_save=0 00:16:27.080 do_verify=1 00:16:27.080 verify=crc32c-intel 00:16:27.081 [job0] 00:16:27.081 filename=/dev/nvme0n1 00:16:27.081 [job1] 00:16:27.081 filename=/dev/nvme0n2 00:16:27.081 [job2] 00:16:27.081 filename=/dev/nvme0n3 00:16:27.081 [job3] 00:16:27.081 filename=/dev/nvme0n4 00:16:27.081 Could not set queue depth (nvme0n1) 00:16:27.081 Could not set queue depth (nvme0n2) 00:16:27.081 Could not set queue depth (nvme0n3) 00:16:27.081 Could not set queue depth (nvme0n4) 00:16:27.340 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:27.340 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:27.340 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:27.340 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:27.340 fio-3.35 00:16:27.340 Starting 4 threads 00:16:28.718 00:16:28.718 job0: (groupid=0, jobs=1): err= 0: pid=1498575: Thu Apr 25 03:17:02 2024 00:16:28.718 read: IOPS=5069, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1010msec) 00:16:28.719 slat (usec): min=2, max=11825, avg=104.48, stdev=753.10 00:16:28.719 clat (usec): min=5300, max=33544, avg=13355.73, stdev=3748.17 00:16:28.719 lat (usec): min=5307, max=33554, avg=13460.21, stdev=3806.78 00:16:28.719 clat percentiles (usec): 00:16:28.719 | 1.00th=[ 8160], 5.00th=[ 9896], 10.00th=[10028], 20.00th=[10421], 00:16:28.719 | 30.00th=[10683], 40.00th=[11207], 50.00th=[12256], 60.00th=[12911], 00:16:28.719 | 70.00th=[14091], 80.00th=[15795], 90.00th=[19006], 
95.00th=[21365], 00:16:28.719 | 99.00th=[25035], 99.50th=[25297], 99.90th=[30016], 99.95th=[30016], 00:16:28.719 | 99.99th=[33424] 00:16:28.719 write: IOPS=5265, BW=20.6MiB/s (21.6MB/s)(20.8MiB/1010msec); 0 zone resets 00:16:28.719 slat (usec): min=4, max=13644, avg=79.95, stdev=503.41 00:16:28.719 clat (usec): min=1516, max=23411, avg=10878.79, stdev=2850.49 00:16:28.719 lat (usec): min=1529, max=23432, avg=10958.74, stdev=2859.20 00:16:28.719 clat percentiles (usec): 00:16:28.719 | 1.00th=[ 4228], 5.00th=[ 5932], 10.00th=[ 7046], 20.00th=[ 8586], 00:16:28.719 | 30.00th=[ 9765], 40.00th=[10421], 50.00th=[11338], 60.00th=[11731], 00:16:28.719 | 70.00th=[12256], 80.00th=[12911], 90.00th=[13829], 95.00th=[16188], 00:16:28.719 | 99.00th=[17957], 99.50th=[18220], 99.90th=[20579], 99.95th=[22676], 00:16:28.719 | 99.99th=[23462] 00:16:28.719 bw ( KiB/s): min=20480, max=21048, per=34.41%, avg=20764.00, stdev=401.64, samples=2 00:16:28.719 iops : min= 5120, max= 5262, avg=5191.00, stdev=100.41, samples=2 00:16:28.719 lat (msec) : 2=0.08%, 4=0.23%, 10=20.09%, 20=75.36%, 50=4.24% 00:16:28.719 cpu : usr=4.96%, sys=8.33%, ctx=479, majf=0, minf=19 00:16:28.719 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:16:28.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:28.719 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:28.719 issued rwts: total=5120,5318,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:28.719 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:28.719 job1: (groupid=0, jobs=1): err= 0: pid=1498576: Thu Apr 25 03:17:02 2024 00:16:28.719 read: IOPS=3608, BW=14.1MiB/s (14.8MB/s)(14.2MiB/1005msec) 00:16:28.719 slat (usec): min=3, max=8830, avg=132.18, stdev=736.78 00:16:28.719 clat (usec): min=1013, max=28474, avg=16842.30, stdev=4859.90 00:16:28.719 lat (usec): min=6208, max=29930, avg=16974.48, stdev=4900.33 00:16:28.719 clat percentiles (usec): 00:16:28.719 | 1.00th=[ 
8717], 5.00th=[10290], 10.00th=[10945], 20.00th=[11600], 00:16:28.719 | 30.00th=[12387], 40.00th=[14353], 50.00th=[17171], 60.00th=[19792], 00:16:28.719 | 70.00th=[20579], 80.00th=[21627], 90.00th=[22676], 95.00th=[23987], 00:16:28.719 | 99.00th=[25822], 99.50th=[28443], 99.90th=[28443], 99.95th=[28443], 00:16:28.719 | 99.99th=[28443] 00:16:28.719 write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:16:28.719 slat (usec): min=4, max=7448, avg=119.82, stdev=651.65 00:16:28.719 clat (usec): min=7713, max=34272, avg=16102.83, stdev=5761.46 00:16:28.719 lat (usec): min=7798, max=34290, avg=16222.65, stdev=5812.29 00:16:28.719 clat percentiles (usec): 00:16:28.719 | 1.00th=[ 8848], 5.00th=[10159], 10.00th=[10945], 20.00th=[11863], 00:16:28.719 | 30.00th=[12125], 40.00th=[12518], 50.00th=[13173], 60.00th=[15533], 00:16:28.719 | 70.00th=[18744], 80.00th=[21890], 90.00th=[23462], 95.00th=[28181], 00:16:28.719 | 99.00th=[32900], 99.50th=[34341], 99.90th=[34341], 99.95th=[34341], 00:16:28.719 | 99.99th=[34341] 00:16:28.719 bw ( KiB/s): min=13160, max=18928, per=26.59%, avg=16044.00, stdev=4078.59, samples=2 00:16:28.719 iops : min= 3290, max= 4732, avg=4011.00, stdev=1019.65, samples=2 00:16:28.719 lat (msec) : 2=0.01%, 10=3.72%, 20=64.70%, 50=31.57% 00:16:28.719 cpu : usr=4.58%, sys=6.47%, ctx=372, majf=0, minf=9 00:16:28.719 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:16:28.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:28.719 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:28.719 issued rwts: total=3627,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:28.719 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:28.719 job2: (groupid=0, jobs=1): err= 0: pid=1498577: Thu Apr 25 03:17:02 2024 00:16:28.719 read: IOPS=3538, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1013msec) 00:16:28.719 slat (usec): min=2, max=41679, avg=131.43, stdev=1021.49 00:16:28.719 
clat (usec): min=7269, max=73467, avg=16015.85, stdev=8740.47 00:16:28.719 lat (usec): min=7274, max=73476, avg=16147.28, stdev=8798.41 00:16:28.719 clat percentiles (usec): 00:16:28.719 | 1.00th=[ 7963], 5.00th=[11076], 10.00th=[11994], 20.00th=[12518], 00:16:28.719 | 30.00th=[13173], 40.00th=[13698], 50.00th=[13960], 60.00th=[14222], 00:16:28.719 | 70.00th=[14877], 80.00th=[16188], 90.00th=[19006], 95.00th=[31851], 00:16:28.719 | 99.00th=[65274], 99.50th=[65274], 99.90th=[73925], 99.95th=[73925], 00:16:28.719 | 99.99th=[73925] 00:16:28.719 write: IOPS=3771, BW=14.7MiB/s (15.4MB/s)(14.9MiB/1013msec); 0 zone resets 00:16:28.719 slat (usec): min=3, max=7483, avg=130.82, stdev=546.81 00:16:28.719 clat (usec): min=299, max=37860, avg=18504.61, stdev=7610.07 00:16:28.719 lat (usec): min=315, max=37866, avg=18635.43, stdev=7667.83 00:16:28.719 clat percentiles (usec): 00:16:28.719 | 1.00th=[ 4293], 5.00th=[ 7963], 10.00th=[10290], 20.00th=[12256], 00:16:28.719 | 30.00th=[13304], 40.00th=[14615], 50.00th=[15533], 60.00th=[18744], 00:16:28.719 | 70.00th=[24249], 80.00th=[27657], 90.00th=[29492], 95.00th=[30802], 00:16:28.719 | 99.00th=[32113], 99.50th=[32637], 99.90th=[34866], 99.95th=[36439], 00:16:28.719 | 99.99th=[38011] 00:16:28.719 bw ( KiB/s): min=12288, max=17264, per=24.49%, avg=14776.00, stdev=3518.56, samples=2 00:16:28.719 iops : min= 3072, max= 4316, avg=3694.00, stdev=879.64, samples=2 00:16:28.719 lat (usec) : 500=0.01%, 750=0.04%, 1000=0.01% 00:16:28.719 lat (msec) : 2=0.32%, 4=0.08%, 10=5.17%, 20=71.17%, 50=22.27% 00:16:28.719 lat (msec) : 100=0.92% 00:16:28.719 cpu : usr=1.68%, sys=4.84%, ctx=503, majf=0, minf=7 00:16:28.719 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:16:28.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:28.719 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:28.719 issued rwts: total=3584,3821,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:28.719 
latency : target=0, window=0, percentile=100.00%, depth=128 00:16:28.719 job3: (groupid=0, jobs=1): err= 0: pid=1498578: Thu Apr 25 03:17:02 2024 00:16:28.719 read: IOPS=2114, BW=8458KiB/s (8661kB/s)(8856KiB/1047msec) 00:16:28.719 slat (usec): min=2, max=38861, avg=204.96, stdev=1423.03 00:16:28.719 clat (usec): min=9420, max=73112, avg=27276.15, stdev=12654.76 00:16:28.719 lat (usec): min=11592, max=79158, avg=27481.11, stdev=12721.24 00:16:28.719 clat percentiles (usec): 00:16:28.719 | 1.00th=[13304], 5.00th=[14615], 10.00th=[16450], 20.00th=[20317], 00:16:28.719 | 30.00th=[21103], 40.00th=[22152], 50.00th=[22938], 60.00th=[24511], 00:16:28.719 | 70.00th=[27132], 80.00th=[29754], 90.00th=[47973], 95.00th=[61604], 00:16:28.719 | 99.00th=[68682], 99.50th=[68682], 99.90th=[72877], 99.95th=[72877], 00:16:28.719 | 99.99th=[72877] 00:16:28.719 write: IOPS=2445, BW=9780KiB/s (10.0MB/s)(10.0MiB/1047msec); 0 zone resets 00:16:28.719 slat (usec): min=3, max=28900, avg=209.62, stdev=1507.58 00:16:28.719 clat (usec): min=6977, max=83220, avg=27911.14, stdev=18202.22 00:16:28.719 lat (usec): min=6984, max=85339, avg=28120.76, stdev=18319.97 00:16:28.719 clat percentiles (usec): 00:16:28.719 | 1.00th=[ 7046], 5.00th=[11863], 10.00th=[13566], 20.00th=[15533], 00:16:28.719 | 30.00th=[17433], 40.00th=[20055], 50.00th=[21627], 60.00th=[23200], 00:16:28.719 | 70.00th=[25560], 80.00th=[35914], 90.00th=[64750], 95.00th=[69731], 00:16:28.719 | 99.00th=[83362], 99.50th=[83362], 99.90th=[83362], 99.95th=[83362], 00:16:28.719 | 99.99th=[83362] 00:16:28.719 bw ( KiB/s): min= 9992, max=10488, per=16.97%, avg=10240.00, stdev=350.72, samples=2 00:16:28.719 iops : min= 2498, max= 2622, avg=2560.00, stdev=87.68, samples=2 00:16:28.719 lat (msec) : 10=1.36%, 20=28.38%, 50=57.96%, 100=12.30% 00:16:28.719 cpu : usr=1.53%, sys=2.39%, ctx=174, majf=0, minf=15 00:16:28.719 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:16:28.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:28.719 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:28.719 issued rwts: total=2214,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:28.719 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:28.719 00:16:28.719 Run status group 0 (all jobs): 00:16:28.719 READ: bw=54.3MiB/s (56.9MB/s), 8458KiB/s-19.8MiB/s (8661kB/s-20.8MB/s), io=56.8MiB (59.6MB), run=1005-1047msec 00:16:28.719 WRITE: bw=58.9MiB/s (61.8MB/s), 9780KiB/s-20.6MiB/s (10.0MB/s-21.6MB/s), io=61.7MiB (64.7MB), run=1005-1047msec 00:16:28.719 00:16:28.719 Disk stats (read/write): 00:16:28.719 nvme0n1: ios=4148/4495, merge=0/0, ticks=53821/47760, in_queue=101581, util=91.58% 00:16:28.719 nvme0n2: ios=3128/3584, merge=0/0, ticks=17610/17013, in_queue=34623, util=92.77% 00:16:28.719 nvme0n3: ios=3119/3243, merge=0/0, ticks=25750/28365, in_queue=54115, util=96.11% 00:16:28.719 nvme0n4: ios=1812/2048, merge=0/0, ticks=19791/20080, in_queue=39871, util=94.90% 00:16:28.719 03:17:02 -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:16:28.719 [global] 00:16:28.719 thread=1 00:16:28.719 invalidate=1 00:16:28.719 rw=randwrite 00:16:28.719 time_based=1 00:16:28.719 runtime=1 00:16:28.719 ioengine=libaio 00:16:28.719 direct=1 00:16:28.719 bs=4096 00:16:28.719 iodepth=128 00:16:28.719 norandommap=0 00:16:28.719 numjobs=1 00:16:28.719 00:16:28.719 verify_dump=1 00:16:28.719 verify_backlog=512 00:16:28.719 verify_state_save=0 00:16:28.719 do_verify=1 00:16:28.719 verify=crc32c-intel 00:16:28.719 [job0] 00:16:28.719 filename=/dev/nvme0n1 00:16:28.719 [job1] 00:16:28.719 filename=/dev/nvme0n2 00:16:28.719 [job2] 00:16:28.719 filename=/dev/nvme0n3 00:16:28.719 [job3] 00:16:28.719 filename=/dev/nvme0n4 00:16:28.719 Could not set queue depth (nvme0n1) 00:16:28.719 Could not set queue depth (nvme0n2) 00:16:28.719 Could not set queue depth (nvme0n3) 
00:16:28.719 Could not set queue depth (nvme0n4) 00:16:28.719 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:28.719 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:28.719 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:28.719 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:28.719 fio-3.35 00:16:28.719 Starting 4 threads 00:16:30.098 00:16:30.098 job0: (groupid=0, jobs=1): err= 0: pid=1498926: Thu Apr 25 03:17:04 2024 00:16:30.098 read: IOPS=5609, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1004msec) 00:16:30.098 slat (usec): min=2, max=9886, avg=88.42, stdev=580.23 00:16:30.098 clat (usec): min=4009, max=26241, avg=11414.07, stdev=3022.97 00:16:30.098 lat (usec): min=4015, max=26247, avg=11502.50, stdev=3069.56 00:16:30.098 clat percentiles (usec): 00:16:30.098 | 1.00th=[ 6325], 5.00th=[ 8094], 10.00th=[ 8717], 20.00th=[ 9110], 00:16:30.098 | 30.00th=[ 9896], 40.00th=[10290], 50.00th=[10683], 60.00th=[11076], 00:16:30.098 | 70.00th=[11731], 80.00th=[13304], 90.00th=[15401], 95.00th=[17695], 00:16:30.098 | 99.00th=[22414], 99.50th=[23462], 99.90th=[25035], 99.95th=[25035], 00:16:30.098 | 99.99th=[26346] 00:16:30.098 write: IOPS=5733, BW=22.4MiB/s (23.5MB/s)(22.5MiB/1004msec); 0 zone resets 00:16:30.098 slat (usec): min=4, max=7267, avg=79.33, stdev=439.44 00:16:30.098 clat (usec): min=2387, max=27287, avg=10946.00, stdev=4427.36 00:16:30.098 lat (usec): min=2768, max=27323, avg=11025.33, stdev=4449.52 00:16:30.098 clat percentiles (usec): 00:16:30.098 | 1.00th=[ 4555], 5.00th=[ 5800], 10.00th=[ 6980], 20.00th=[ 7635], 00:16:30.098 | 30.00th=[ 8455], 40.00th=[ 9503], 50.00th=[10159], 60.00th=[10945], 00:16:30.098 | 70.00th=[11338], 80.00th=[11994], 90.00th=[17433], 95.00th=[21627], 00:16:30.098 | 
99.00th=[26084], 99.50th=[26870], 99.90th=[27395], 99.95th=[27395], 00:16:30.098 | 99.99th=[27395] 00:16:30.098 bw ( KiB/s): min=21968, max=23128, per=36.66%, avg=22548.00, stdev=820.24, samples=2 00:16:30.098 iops : min= 5492, max= 5782, avg=5637.00, stdev=205.06, samples=2 00:16:30.098 lat (msec) : 4=0.25%, 10=39.61%, 20=55.22%, 50=4.92% 00:16:30.098 cpu : usr=6.28%, sys=8.47%, ctx=461, majf=0, minf=13 00:16:30.098 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:16:30.098 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:30.098 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:30.098 issued rwts: total=5632,5756,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:30.098 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:30.098 job1: (groupid=0, jobs=1): err= 0: pid=1498927: Thu Apr 25 03:17:04 2024 00:16:30.098 read: IOPS=2668, BW=10.4MiB/s (10.9MB/s)(10.5MiB/1007msec) 00:16:30.098 slat (usec): min=2, max=25247, avg=123.33, stdev=935.74 00:16:30.098 clat (usec): min=1070, max=37674, avg=15261.19, stdev=6737.18 00:16:30.098 lat (usec): min=1074, max=41213, avg=15384.53, stdev=6792.77 00:16:30.098 clat percentiles (usec): 00:16:30.098 | 1.00th=[ 1188], 5.00th=[ 5342], 10.00th=[10028], 20.00th=[11863], 00:16:30.098 | 30.00th=[11994], 40.00th=[12256], 50.00th=[12387], 60.00th=[13042], 00:16:30.098 | 70.00th=[16581], 80.00th=[21103], 90.00th=[27132], 95.00th=[28705], 00:16:30.098 | 99.00th=[33424], 99.50th=[34866], 99.90th=[36439], 99.95th=[36439], 00:16:30.098 | 99.99th=[37487] 00:16:30.098 write: IOPS=3050, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1007msec); 0 zone resets 00:16:30.098 slat (usec): min=3, max=86175, avg=201.57, stdev=1833.50 00:16:30.098 clat (usec): min=1669, max=117016, avg=28235.32, stdev=25832.00 00:16:30.098 lat (usec): min=1675, max=117022, avg=28436.89, stdev=25955.62 00:16:30.098 clat percentiles (msec): 00:16:30.098 | 1.00th=[ 4], 5.00th=[ 6], 10.00th=[ 10], 
20.00th=[ 11], 00:16:30.098 | 30.00th=[ 13], 40.00th=[ 13], 50.00th=[ 22], 60.00th=[ 25], 00:16:30.098 | 70.00th=[ 28], 80.00th=[ 40], 90.00th=[ 68], 95.00th=[ 96], 00:16:30.098 | 99.00th=[ 110], 99.50th=[ 111], 99.90th=[ 117], 99.95th=[ 117], 00:16:30.098 | 99.99th=[ 117] 00:16:30.098 bw ( KiB/s): min= 9346, max=15240, per=19.99%, avg=12293.00, stdev=4167.69, samples=2 00:16:30.098 iops : min= 2336, max= 3810, avg=3073.00, stdev=1042.28, samples=2 00:16:30.098 lat (msec) : 2=0.85%, 4=2.43%, 10=9.65%, 20=49.73%, 50=29.59% 00:16:30.098 lat (msec) : 100=5.92%, 250=1.82% 00:16:30.098 cpu : usr=1.99%, sys=3.18%, ctx=307, majf=0, minf=9 00:16:30.098 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:16:30.098 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:30.098 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:30.098 issued rwts: total=2687,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:30.098 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:30.098 job2: (groupid=0, jobs=1): err= 0: pid=1498928: Thu Apr 25 03:17:04 2024 00:16:30.098 read: IOPS=2274, BW=9096KiB/s (9315kB/s)(9160KiB/1007msec) 00:16:30.098 slat (usec): min=3, max=23125, avg=193.71, stdev=1237.37 00:16:30.098 clat (msec): min=6, max=113, avg=20.60, stdev=16.11 00:16:30.098 lat (msec): min=7, max=113, avg=20.80, stdev=16.26 00:16:30.098 clat percentiles (msec): 00:16:30.098 | 1.00th=[ 8], 5.00th=[ 12], 10.00th=[ 13], 20.00th=[ 14], 00:16:30.098 | 30.00th=[ 14], 40.00th=[ 14], 50.00th=[ 15], 60.00th=[ 15], 00:16:30.098 | 70.00th=[ 18], 80.00th=[ 22], 90.00th=[ 35], 95.00th=[ 63], 00:16:30.098 | 99.00th=[ 93], 99.50th=[ 106], 99.90th=[ 113], 99.95th=[ 113], 00:16:30.098 | 99.99th=[ 113] 00:16:30.098 write: IOPS=2542, BW=9.93MiB/s (10.4MB/s)(10.0MiB/1007msec); 0 zone resets 00:16:30.098 slat (usec): min=4, max=102422, avg=203.27, stdev=2184.01 00:16:30.098 clat (msec): min=4, max=113, avg=28.97, stdev=24.79 
00:16:30.098 lat (msec): min=4, max=113, avg=29.17, stdev=24.93 00:16:30.098 clat percentiles (msec): 00:16:30.098 | 1.00th=[ 6], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 12], 00:16:30.098 | 30.00th=[ 15], 40.00th=[ 17], 50.00th=[ 23], 60.00th=[ 25], 00:16:30.098 | 70.00th=[ 28], 80.00th=[ 36], 90.00th=[ 63], 95.00th=[ 92], 00:16:30.098 | 99.00th=[ 112], 99.50th=[ 112], 99.90th=[ 112], 99.95th=[ 113], 00:16:30.098 | 99.99th=[ 113] 00:16:30.098 bw ( KiB/s): min= 8192, max=12288, per=16.65%, avg=10240.00, stdev=2896.31, samples=2 00:16:30.098 iops : min= 2048, max= 3072, avg=2560.00, stdev=724.08, samples=2 00:16:30.098 lat (msec) : 10=7.26%, 20=52.95%, 50=28.12%, 100=9.98%, 250=1.69% 00:16:30.098 cpu : usr=2.49%, sys=4.27%, ctx=302, majf=0, minf=17 00:16:30.098 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:16:30.098 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:30.098 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:30.098 issued rwts: total=2290,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:30.098 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:30.098 job3: (groupid=0, jobs=1): err= 0: pid=1498929: Thu Apr 25 03:17:04 2024 00:16:30.098 read: IOPS=3967, BW=15.5MiB/s (16.2MB/s)(15.6MiB/1007msec) 00:16:30.098 slat (usec): min=2, max=35829, avg=137.22, stdev=1226.22 00:16:30.098 clat (usec): min=768, max=67884, avg=16316.43, stdev=9928.94 00:16:30.099 lat (usec): min=7498, max=71756, avg=16453.65, stdev=9995.09 00:16:30.099 clat percentiles (usec): 00:16:30.099 | 1.00th=[ 8979], 5.00th=[10421], 10.00th=[10945], 20.00th=[11863], 00:16:30.099 | 30.00th=[12256], 40.00th=[12518], 50.00th=[12649], 60.00th=[12780], 00:16:30.099 | 70.00th=[13566], 80.00th=[14746], 90.00th=[35390], 95.00th=[40633], 00:16:30.099 | 99.00th=[49021], 99.50th=[67634], 99.90th=[67634], 99.95th=[67634], 00:16:30.099 | 99.99th=[67634] 00:16:30.099 write: IOPS=4067, BW=15.9MiB/s 
(16.7MB/s)(16.0MiB/1007msec); 0 zone resets 00:16:30.099 slat (usec): min=3, max=11563, avg=105.91, stdev=529.13 00:16:30.099 clat (usec): min=7141, max=57552, avg=15087.32, stdev=7350.47 00:16:30.099 lat (usec): min=7146, max=57558, avg=15193.23, stdev=7375.90 00:16:30.099 clat percentiles (usec): 00:16:30.099 | 1.00th=[ 8455], 5.00th=[10028], 10.00th=[10945], 20.00th=[11731], 00:16:30.099 | 30.00th=[12125], 40.00th=[12387], 50.00th=[12780], 60.00th=[12911], 00:16:30.099 | 70.00th=[13173], 80.00th=[15533], 90.00th=[22676], 95.00th=[31851], 00:16:30.099 | 99.00th=[51643], 99.50th=[52691], 99.90th=[57410], 99.95th=[57410], 00:16:30.099 | 99.99th=[57410] 00:16:30.099 bw ( KiB/s): min=16368, max=16400, per=26.64%, avg=16384.00, stdev=22.63, samples=2 00:16:30.099 iops : min= 4092, max= 4100, avg=4096.00, stdev= 5.66, samples=2 00:16:30.099 lat (usec) : 1000=0.01% 00:16:30.099 lat (msec) : 10=3.58%, 20=81.78%, 50=13.45%, 100=1.17% 00:16:30.099 cpu : usr=1.59%, sys=5.47%, ctx=482, majf=0, minf=11 00:16:30.099 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:16:30.099 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:30.099 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:30.099 issued rwts: total=3995,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:30.099 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:30.099 00:16:30.099 Run status group 0 (all jobs): 00:16:30.099 READ: bw=56.6MiB/s (59.4MB/s), 9096KiB/s-21.9MiB/s (9315kB/s-23.0MB/s), io=57.0MiB (59.8MB), run=1004-1007msec 00:16:30.099 WRITE: bw=60.1MiB/s (63.0MB/s), 9.93MiB/s-22.4MiB/s (10.4MB/s-23.5MB/s), io=60.5MiB (63.4MB), run=1004-1007msec 00:16:30.099 00:16:30.099 Disk stats (read/write): 00:16:30.099 nvme0n1: ios=4625/4801, merge=0/0, ticks=52071/51510, in_queue=103581, util=85.07% 00:16:30.099 nvme0n2: ios=2104/2189, merge=0/0, ticks=30923/64579, in_queue=95502, util=90.95% 00:16:30.099 nvme0n3: ios=2097/2399, 
merge=0/0, ticks=36133/58638, in_queue=94771, util=95.39% 00:16:30.099 nvme0n4: ios=3253/3584, merge=0/0, ticks=23773/23553, in_queue=47326, util=95.45% 00:16:30.099 03:17:04 -- target/fio.sh@55 -- # sync 00:16:30.099 03:17:04 -- target/fio.sh@59 -- # fio_pid=1499067 00:16:30.099 03:17:04 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:16:30.099 03:17:04 -- target/fio.sh@61 -- # sleep 3 00:16:30.099 [global] 00:16:30.099 thread=1 00:16:30.099 invalidate=1 00:16:30.099 rw=read 00:16:30.099 time_based=1 00:16:30.099 runtime=10 00:16:30.099 ioengine=libaio 00:16:30.099 direct=1 00:16:30.099 bs=4096 00:16:30.099 iodepth=1 00:16:30.099 norandommap=1 00:16:30.099 numjobs=1 00:16:30.099 00:16:30.099 [job0] 00:16:30.099 filename=/dev/nvme0n1 00:16:30.099 [job1] 00:16:30.099 filename=/dev/nvme0n2 00:16:30.099 [job2] 00:16:30.099 filename=/dev/nvme0n3 00:16:30.099 [job3] 00:16:30.099 filename=/dev/nvme0n4 00:16:30.099 Could not set queue depth (nvme0n1) 00:16:30.099 Could not set queue depth (nvme0n2) 00:16:30.099 Could not set queue depth (nvme0n3) 00:16:30.099 Could not set queue depth (nvme0n4) 00:16:30.356 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:30.356 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:30.356 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:30.356 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:30.356 fio-3.35 00:16:30.356 Starting 4 threads 00:16:32.883 03:17:07 -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:16:33.153 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=17903616, buflen=4096 00:16:33.153 fio: pid=1499163, 
err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:33.421 03:17:07 -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:16:33.679 03:17:07 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:33.679 03:17:07 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:16:33.679 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=327680, buflen=4096 00:16:33.679 fio: pid=1499162, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:33.937 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=14483456, buflen=4096 00:16:33.937 fio: pid=1499160, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:33.937 03:17:08 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:33.937 03:17:08 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:16:34.196 03:17:08 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:34.196 03:17:08 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:16:34.196 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=385024, buflen=4096 00:16:34.196 fio: pid=1499161, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:34.196 00:16:34.196 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1499160: Thu Apr 25 03:17:08 2024 00:16:34.196 read: IOPS=1023, BW=4091KiB/s (4190kB/s)(13.8MiB/3457msec) 00:16:34.196 slat (usec): min=5, max=4786, avg=17.17, stdev=80.82 00:16:34.196 clat (usec): min=409, max=42965, avg=949.74, stdev=4289.61 00:16:34.196 lat (usec): min=416, max=45990, 
avg=966.90, stdev=4302.98 00:16:34.196 clat percentiles (usec): 00:16:34.196 | 1.00th=[ 416], 5.00th=[ 424], 10.00th=[ 433], 20.00th=[ 441], 00:16:34.196 | 30.00th=[ 457], 40.00th=[ 469], 50.00th=[ 482], 60.00th=[ 494], 00:16:34.196 | 70.00th=[ 510], 80.00th=[ 545], 90.00th=[ 570], 95.00th=[ 586], 00:16:34.196 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41681], 00:16:34.196 | 99.99th=[42730] 00:16:34.196 bw ( KiB/s): min= 96, max= 8600, per=54.19%, avg=4700.00, stdev=3704.64, samples=6 00:16:34.196 iops : min= 24, max= 2150, avg=1175.00, stdev=926.16, samples=6 00:16:34.196 lat (usec) : 500=64.21%, 750=34.21%, 1000=0.40% 00:16:34.196 lat (msec) : 2=0.03%, 50=1.13% 00:16:34.196 cpu : usr=1.48%, sys=2.11%, ctx=3538, majf=0, minf=1 00:16:34.196 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:34.196 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:34.196 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:34.196 issued rwts: total=3537,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:34.196 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:34.196 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1499161: Thu Apr 25 03:17:08 2024 00:16:34.196 read: IOPS=25, BW=101KiB/s (103kB/s)(376KiB/3727msec) 00:16:34.196 slat (usec): min=11, max=5714, avg=131.47, stdev=758.11 00:16:34.196 clat (usec): min=509, max=41417, avg=39272.56, stdev=8203.50 00:16:34.196 lat (usec): min=529, max=46863, avg=39405.30, stdev=8265.76 00:16:34.196 clat percentiles (usec): 00:16:34.196 | 1.00th=[ 510], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:16:34.196 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:16:34.196 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:16:34.196 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:16:34.196 | 99.99th=[41157] 
00:16:34.196 bw ( KiB/s): min= 96, max= 104, per=1.16%, avg=101.14, stdev= 3.80, samples=7 00:16:34.196 iops : min= 24, max= 26, avg=25.29, stdev= 0.95, samples=7 00:16:34.196 lat (usec) : 750=4.21% 00:16:34.196 lat (msec) : 50=94.74% 00:16:34.196 cpu : usr=0.11%, sys=0.00%, ctx=98, majf=0, minf=1 00:16:34.196 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:34.196 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:34.196 complete : 0=1.0%, 4=99.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:34.196 issued rwts: total=95,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:34.196 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:34.196 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1499162: Thu Apr 25 03:17:08 2024 00:16:34.196 read: IOPS=25, BW=99.0KiB/s (101kB/s)(320KiB/3231msec) 00:16:34.196 slat (usec): min=12, max=8809, avg=130.90, stdev=976.35 00:16:34.196 clat (usec): min=655, max=41287, avg=39968.57, stdev=6331.94 00:16:34.196 lat (usec): min=668, max=50044, avg=40100.72, stdev=6430.01 00:16:34.196 clat percentiles (usec): 00:16:34.196 | 1.00th=[ 652], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:16:34.196 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:16:34.196 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:16:34.196 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:16:34.196 | 99.99th=[41157] 00:16:34.196 bw ( KiB/s): min= 96, max= 104, per=1.15%, avg=100.00, stdev= 4.38, samples=6 00:16:34.196 iops : min= 24, max= 26, avg=25.00, stdev= 1.10, samples=6 00:16:34.196 lat (usec) : 750=2.47% 00:16:34.196 lat (msec) : 50=96.30% 00:16:34.196 cpu : usr=0.09%, sys=0.00%, ctx=82, majf=0, minf=1 00:16:34.196 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:34.196 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:16:34.196 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:34.196 issued rwts: total=81,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:34.196 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:34.196 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1499163: Thu Apr 25 03:17:08 2024 00:16:34.196 read: IOPS=1508, BW=6031KiB/s (6176kB/s)(17.1MiB/2899msec) 00:16:34.196 slat (nsec): min=5549, max=67892, avg=15931.70, stdev=9629.42 00:16:34.196 clat (usec): min=542, max=2010, avg=640.96, stdev=55.80 00:16:34.196 lat (usec): min=549, max=2019, avg=656.89, stdev=60.79 00:16:34.196 clat percentiles (usec): 00:16:34.196 | 1.00th=[ 570], 5.00th=[ 586], 10.00th=[ 594], 20.00th=[ 603], 00:16:34.196 | 30.00th=[ 611], 40.00th=[ 619], 50.00th=[ 627], 60.00th=[ 635], 00:16:34.196 | 70.00th=[ 660], 80.00th=[ 676], 90.00th=[ 701], 95.00th=[ 742], 00:16:34.196 | 99.00th=[ 799], 99.50th=[ 832], 99.90th=[ 1074], 99.95th=[ 1205], 00:16:34.196 | 99.99th=[ 2008] 00:16:34.196 bw ( KiB/s): min= 5736, max= 6408, per=69.78%, avg=6052.80, stdev=280.15, samples=5 00:16:34.196 iops : min= 1434, max= 1602, avg=1513.20, stdev=70.04, samples=5 00:16:34.196 lat (usec) : 750=96.09%, 1000=3.71% 00:16:34.196 lat (msec) : 2=0.16%, 4=0.02% 00:16:34.196 cpu : usr=1.69%, sys=3.52%, ctx=4372, majf=0, minf=1 00:16:34.196 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:34.196 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:34.196 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:34.196 issued rwts: total=4372,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:34.196 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:34.196 00:16:34.196 Run status group 0 (all jobs): 00:16:34.196 READ: bw=8673KiB/s (8881kB/s), 99.0KiB/s-6031KiB/s (101kB/s-6176kB/s), io=31.6MiB (33.1MB), run=2899-3727msec 00:16:34.196 00:16:34.196 Disk stats 
(read/write): 00:16:34.196 nvme0n1: ios=3554/0, merge=0/0, ticks=3302/0, in_queue=3302, util=98.63% 00:16:34.196 nvme0n2: ios=91/0, merge=0/0, ticks=3570/0, in_queue=3570, util=96.36% 00:16:34.196 nvme0n3: ios=77/0, merge=0/0, ticks=3077/0, in_queue=3077, util=96.54% 00:16:34.196 nvme0n4: ios=4317/0, merge=0/0, ticks=2704/0, in_queue=2704, util=96.75% 00:16:34.454 03:17:08 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:34.454 03:17:08 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:16:34.713 03:17:08 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:34.713 03:17:08 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:16:34.971 03:17:09 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:34.971 03:17:09 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:16:35.230 03:17:09 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:35.230 03:17:09 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:16:35.230 03:17:09 -- target/fio.sh@69 -- # fio_status=0 00:16:35.230 03:17:09 -- target/fio.sh@70 -- # wait 1499067 00:16:35.230 03:17:09 -- target/fio.sh@70 -- # fio_status=4 00:16:35.230 03:17:09 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:35.488 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:35.488 03:17:09 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:35.488 03:17:09 -- common/autotest_common.sh@1205 -- # local i=0 00:16:35.488 03:17:09 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:16:35.488 
03:17:09 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:35.488 03:17:09 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:16:35.488 03:17:09 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:35.488 03:17:09 -- common/autotest_common.sh@1217 -- # return 0 00:16:35.488 03:17:09 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:16:35.488 03:17:09 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:16:35.488 nvmf hotplug test: fio failed as expected 00:16:35.488 03:17:09 -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:35.753 03:17:10 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:16:35.753 03:17:10 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:16:35.753 03:17:10 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:16:35.753 03:17:10 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:16:35.753 03:17:10 -- target/fio.sh@91 -- # nvmftestfini 00:16:35.753 03:17:10 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:35.753 03:17:10 -- nvmf/common.sh@117 -- # sync 00:16:35.753 03:17:10 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:35.753 03:17:10 -- nvmf/common.sh@120 -- # set +e 00:16:35.753 03:17:10 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:35.753 03:17:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:35.753 rmmod nvme_tcp 00:16:35.753 rmmod nvme_fabrics 00:16:35.753 rmmod nvme_keyring 00:16:35.753 03:17:10 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:35.753 03:17:10 -- nvmf/common.sh@124 -- # set -e 00:16:35.753 03:17:10 -- nvmf/common.sh@125 -- # return 0 00:16:35.753 03:17:10 -- nvmf/common.sh@478 -- # '[' -n 1497041 ']' 00:16:35.753 03:17:10 -- nvmf/common.sh@479 -- # killprocess 1497041 00:16:35.753 03:17:10 -- common/autotest_common.sh@936 -- # '[' -z 1497041 ']' 00:16:35.753 03:17:10 -- 
common/autotest_common.sh@940 -- # kill -0 1497041 00:16:35.753 03:17:10 -- common/autotest_common.sh@941 -- # uname 00:16:35.753 03:17:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:35.753 03:17:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1497041 00:16:35.753 03:17:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:35.753 03:17:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:35.753 03:17:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1497041' 00:16:35.753 killing process with pid 1497041 00:16:35.753 03:17:10 -- common/autotest_common.sh@955 -- # kill 1497041 00:16:35.753 03:17:10 -- common/autotest_common.sh@960 -- # wait 1497041 00:16:36.061 03:17:10 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:36.061 03:17:10 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:36.061 03:17:10 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:36.061 03:17:10 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:36.061 03:17:10 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:36.061 03:17:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:36.061 03:17:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:36.061 03:17:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:38.604 03:17:12 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:38.604 00:16:38.604 real 0m23.437s 00:16:38.604 user 1m21.549s 00:16:38.604 sys 0m6.271s 00:16:38.604 03:17:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:38.604 03:17:12 -- common/autotest_common.sh@10 -- # set +x 00:16:38.604 ************************************ 00:16:38.604 END TEST nvmf_fio_target 00:16:38.604 ************************************ 00:16:38.604 03:17:12 -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:38.604 
03:17:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:38.604 03:17:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:38.604 03:17:12 -- common/autotest_common.sh@10 -- # set +x 00:16:38.604 ************************************ 00:16:38.604 START TEST nvmf_bdevio 00:16:38.604 ************************************ 00:16:38.604 03:17:12 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:38.604 * Looking for test storage... 00:16:38.604 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:38.604 03:17:12 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:38.604 03:17:12 -- nvmf/common.sh@7 -- # uname -s 00:16:38.604 03:17:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:38.604 03:17:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:38.604 03:17:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:38.604 03:17:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:38.604 03:17:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:38.604 03:17:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:38.604 03:17:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:38.604 03:17:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:38.604 03:17:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:38.604 03:17:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:38.604 03:17:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:38.604 03:17:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:38.604 03:17:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:38.604 03:17:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:38.604 03:17:12 -- nvmf/common.sh@21 -- # 
NET_TYPE=phy 00:16:38.604 03:17:12 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:38.604 03:17:12 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:38.604 03:17:12 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:38.604 03:17:12 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:38.604 03:17:12 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:38.604 03:17:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.604 03:17:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.604 03:17:12 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.604 03:17:12 -- paths/export.sh@5 -- # export PATH 00:16:38.604 03:17:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.604 03:17:12 -- nvmf/common.sh@47 -- # : 0 00:16:38.604 03:17:12 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:38.604 03:17:12 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:38.604 03:17:12 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:38.604 03:17:12 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:38.604 03:17:12 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:38.604 03:17:12 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:38.605 03:17:12 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:38.605 03:17:12 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:38.605 03:17:12 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:38.605 03:17:12 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:38.605 03:17:12 -- target/bdevio.sh@14 -- # 
nvmftestinit 00:16:38.605 03:17:12 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:38.605 03:17:12 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:38.605 03:17:12 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:38.605 03:17:12 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:38.605 03:17:12 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:38.605 03:17:12 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:38.605 03:17:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:38.605 03:17:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:38.605 03:17:12 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:38.605 03:17:12 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:38.605 03:17:12 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:38.605 03:17:12 -- common/autotest_common.sh@10 -- # set +x 00:16:40.512 03:17:14 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:40.512 03:17:14 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:40.512 03:17:14 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:40.512 03:17:14 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:40.512 03:17:14 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:40.512 03:17:14 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:40.512 03:17:14 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:40.512 03:17:14 -- nvmf/common.sh@295 -- # net_devs=() 00:16:40.512 03:17:14 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:40.512 03:17:14 -- nvmf/common.sh@296 -- # e810=() 00:16:40.512 03:17:14 -- nvmf/common.sh@296 -- # local -ga e810 00:16:40.512 03:17:14 -- nvmf/common.sh@297 -- # x722=() 00:16:40.512 03:17:14 -- nvmf/common.sh@297 -- # local -ga x722 00:16:40.512 03:17:14 -- nvmf/common.sh@298 -- # mlx=() 00:16:40.512 03:17:14 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:40.512 03:17:14 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:40.512 03:17:14 -- 
nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:40.512 03:17:14 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:40.512 03:17:14 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:40.512 03:17:14 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:40.512 03:17:14 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:40.512 03:17:14 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:40.512 03:17:14 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:40.513 03:17:14 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:40.513 03:17:14 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:40.513 03:17:14 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:40.513 03:17:14 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:40.513 03:17:14 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:40.513 03:17:14 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:40.513 03:17:14 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:40.513 03:17:14 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:40.513 03:17:14 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:40.513 03:17:14 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:40.513 03:17:14 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:40.513 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:40.513 03:17:14 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:40.513 03:17:14 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:40.513 03:17:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:40.513 03:17:14 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:40.513 03:17:14 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:40.513 03:17:14 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:40.513 03:17:14 -- 
nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:40.513 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:40.513 03:17:14 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:40.513 03:17:14 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:40.513 03:17:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:40.513 03:17:14 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:40.513 03:17:14 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:40.513 03:17:14 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:40.513 03:17:14 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:40.513 03:17:14 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:40.513 03:17:14 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:40.513 03:17:14 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:40.513 03:17:14 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:40.513 03:17:14 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:40.513 03:17:14 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:40.513 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:40.513 03:17:14 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:40.513 03:17:14 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:40.513 03:17:14 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:40.513 03:17:14 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:40.513 03:17:14 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:40.513 03:17:14 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:40.513 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:40.513 03:17:14 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:40.513 03:17:14 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:40.513 03:17:14 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:40.513 03:17:14 -- nvmf/common.sh@405 -- # [[ yes == yes 
]] 00:16:40.513 03:17:14 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:40.513 03:17:14 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:40.513 03:17:14 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:40.513 03:17:14 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:40.513 03:17:14 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:40.513 03:17:14 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:40.513 03:17:14 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:40.513 03:17:14 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:40.513 03:17:14 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:40.513 03:17:14 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:40.513 03:17:14 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:40.513 03:17:14 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:40.513 03:17:14 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:40.513 03:17:14 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:40.513 03:17:14 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:40.513 03:17:14 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:40.513 03:17:14 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:40.513 03:17:14 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:40.513 03:17:14 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:40.513 03:17:14 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:40.513 03:17:14 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:40.513 03:17:14 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:40.513 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:40.513 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:16:40.513 00:16:40.513 --- 10.0.0.2 ping statistics --- 00:16:40.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:40.513 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:16:40.513 03:17:14 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:40.513 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:40.513 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:16:40.513 00:16:40.513 --- 10.0.0.1 ping statistics --- 00:16:40.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:40.513 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:16:40.513 03:17:14 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:40.513 03:17:14 -- nvmf/common.sh@411 -- # return 0 00:16:40.513 03:17:14 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:40.513 03:17:14 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:40.513 03:17:14 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:40.513 03:17:14 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:40.513 03:17:14 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:40.513 03:17:14 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:40.513 03:17:14 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:40.513 03:17:14 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:40.513 03:17:14 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:40.513 03:17:14 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:40.513 03:17:14 -- common/autotest_common.sh@10 -- # set +x 00:16:40.513 03:17:14 -- nvmf/common.sh@470 -- # nvmfpid=1501792 00:16:40.513 03:17:14 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:16:40.513 03:17:14 -- nvmf/common.sh@471 -- # waitforlisten 1501792 00:16:40.513 03:17:14 -- common/autotest_common.sh@817 
-- # '[' -z 1501792 ']' 00:16:40.513 03:17:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:40.513 03:17:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:40.513 03:17:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:40.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:40.513 03:17:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:40.513 03:17:14 -- common/autotest_common.sh@10 -- # set +x 00:16:40.513 [2024-04-25 03:17:14.898201] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:16:40.513 [2024-04-25 03:17:14.898275] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:40.513 EAL: No free 2048 kB hugepages reported on node 1 00:16:40.513 [2024-04-25 03:17:14.964738] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:40.772 [2024-04-25 03:17:15.079592] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:40.772 [2024-04-25 03:17:15.079659] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:40.772 [2024-04-25 03:17:15.079690] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:40.772 [2024-04-25 03:17:15.079702] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:40.772 [2024-04-25 03:17:15.079713] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:40.772 [2024-04-25 03:17:15.079814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:40.772 [2024-04-25 03:17:15.079856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:40.772 [2024-04-25 03:17:15.079902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:40.772 [2024-04-25 03:17:15.079905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:40.772 03:17:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:40.772 03:17:15 -- common/autotest_common.sh@850 -- # return 0 00:16:40.772 03:17:15 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:40.772 03:17:15 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:40.772 03:17:15 -- common/autotest_common.sh@10 -- # set +x 00:16:40.772 03:17:15 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:40.772 03:17:15 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:40.772 03:17:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:40.772 03:17:15 -- common/autotest_common.sh@10 -- # set +x 00:16:40.772 [2024-04-25 03:17:15.241311] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:40.772 03:17:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:40.772 03:17:15 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:40.772 03:17:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:40.772 03:17:15 -- common/autotest_common.sh@10 -- # set +x 00:16:41.030 Malloc0 00:16:41.030 03:17:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:41.031 03:17:15 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:41.031 03:17:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:41.031 03:17:15 -- common/autotest_common.sh@10 -- # set +x 00:16:41.031 03:17:15 -- common/autotest_common.sh@577 -- # [[ 0 
== 0 ]] 00:16:41.031 03:17:15 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:41.031 03:17:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:41.031 03:17:15 -- common/autotest_common.sh@10 -- # set +x 00:16:41.031 03:17:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:41.031 03:17:15 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:41.031 03:17:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:41.031 03:17:15 -- common/autotest_common.sh@10 -- # set +x 00:16:41.031 [2024-04-25 03:17:15.295446] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:41.031 03:17:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:41.031 03:17:15 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:16:41.031 03:17:15 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:41.031 03:17:15 -- nvmf/common.sh@521 -- # config=() 00:16:41.031 03:17:15 -- nvmf/common.sh@521 -- # local subsystem config 00:16:41.031 03:17:15 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:41.031 03:17:15 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:41.031 { 00:16:41.031 "params": { 00:16:41.031 "name": "Nvme$subsystem", 00:16:41.031 "trtype": "$TEST_TRANSPORT", 00:16:41.031 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:41.031 "adrfam": "ipv4", 00:16:41.031 "trsvcid": "$NVMF_PORT", 00:16:41.031 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:41.031 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:41.031 "hdgst": ${hdgst:-false}, 00:16:41.031 "ddgst": ${ddgst:-false} 00:16:41.031 }, 00:16:41.031 "method": "bdev_nvme_attach_controller" 00:16:41.031 } 00:16:41.031 EOF 00:16:41.031 )") 00:16:41.031 03:17:15 -- nvmf/common.sh@543 -- # cat 00:16:41.031 03:17:15 -- nvmf/common.sh@545 -- # jq . 
00:16:41.031 03:17:15 -- nvmf/common.sh@546 -- # IFS=, 00:16:41.031 03:17:15 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:16:41.031 "params": { 00:16:41.031 "name": "Nvme1", 00:16:41.031 "trtype": "tcp", 00:16:41.031 "traddr": "10.0.0.2", 00:16:41.031 "adrfam": "ipv4", 00:16:41.031 "trsvcid": "4420", 00:16:41.031 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:41.031 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:41.031 "hdgst": false, 00:16:41.031 "ddgst": false 00:16:41.031 }, 00:16:41.031 "method": "bdev_nvme_attach_controller" 00:16:41.031 }' 00:16:41.031 [2024-04-25 03:17:15.339601] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:16:41.031 [2024-04-25 03:17:15.339703] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1501823 ] 00:16:41.031 EAL: No free 2048 kB hugepages reported on node 1 00:16:41.031 [2024-04-25 03:17:15.400412] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:41.031 [2024-04-25 03:17:15.514079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:41.031 [2024-04-25 03:17:15.514129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:41.031 [2024-04-25 03:17:15.514132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:41.289 I/O targets: 00:16:41.289 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:41.289 00:16:41.289 00:16:41.289 CUnit - A unit testing framework for C - Version 2.1-3 00:16:41.289 http://cunit.sourceforge.net/ 00:16:41.289 00:16:41.289 00:16:41.289 Suite: bdevio tests on: Nvme1n1 00:16:41.289 Test: blockdev write read block ...passed 00:16:41.289 Test: blockdev write zeroes read block ...passed 00:16:41.289 Test: blockdev write zeroes read no split ...passed 00:16:41.547 Test: blockdev write zeroes read split ...passed 00:16:41.547 Test: blockdev write 
zeroes read split partial ...passed 00:16:41.547 Test: blockdev reset ...[2024-04-25 03:17:15.904551] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:41.547 [2024-04-25 03:17:15.904668] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b2a20 (9): Bad file descriptor 00:16:41.547 [2024-04-25 03:17:15.917519] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:41.547 passed 00:16:41.547 Test: blockdev write read 8 blocks ...passed 00:16:41.547 Test: blockdev write read size > 128k ...passed 00:16:41.547 Test: blockdev write read invalid size ...passed 00:16:41.547 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:41.547 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:41.547 Test: blockdev write read max offset ...passed 00:16:41.547 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:41.805 Test: blockdev writev readv 8 blocks ...passed 00:16:41.805 Test: blockdev writev readv 30 x 1block ...passed 00:16:41.805 Test: blockdev writev readv block ...passed 00:16:41.805 Test: blockdev writev readv size > 128k ...passed 00:16:41.805 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:41.805 Test: blockdev comparev and writev ...[2024-04-25 03:17:16.179034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:41.805 [2024-04-25 03:17:16.179071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:41.805 [2024-04-25 03:17:16.179095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:41.805 [2024-04-25 03:17:16.179112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED 
FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:41.805 [2024-04-25 03:17:16.179570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:41.805 [2024-04-25 03:17:16.179595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:41.806 [2024-04-25 03:17:16.179617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:41.806 [2024-04-25 03:17:16.179640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:41.806 [2024-04-25 03:17:16.180079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:41.806 [2024-04-25 03:17:16.180102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:41.806 [2024-04-25 03:17:16.180123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:41.806 [2024-04-25 03:17:16.180139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:41.806 [2024-04-25 03:17:16.180573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:41.806 [2024-04-25 03:17:16.180596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:41.806 [2024-04-25 03:17:16.180617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:41.806 [2024-04-25 03:17:16.180641] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:41.806 passed 00:16:41.806 Test: blockdev nvme passthru rw ...passed 00:16:41.806 Test: blockdev nvme passthru vendor specific ...[2024-04-25 03:17:16.264047] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:41.806 [2024-04-25 03:17:16.264073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:41.806 [2024-04-25 03:17:16.264297] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:41.806 [2024-04-25 03:17:16.264320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:41.806 [2024-04-25 03:17:16.264535] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:41.806 [2024-04-25 03:17:16.264558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:41.806 [2024-04-25 03:17:16.264787] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:41.806 [2024-04-25 03:17:16.264818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:41.806 passed 00:16:41.806 Test: blockdev nvme admin passthru ...passed 00:16:42.064 Test: blockdev copy ...passed 00:16:42.064 00:16:42.064 Run Summary: Type Total Ran Passed Failed Inactive 00:16:42.064 suites 1 1 n/a 0 0 00:16:42.064 tests 23 23 23 0 0 00:16:42.064 asserts 152 152 152 0 n/a 00:16:42.064 00:16:42.064 Elapsed time = 1.257 seconds 00:16:42.064 03:17:16 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:16:42.064 03:17:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:42.064 03:17:16 -- common/autotest_common.sh@10 -- # set +x 00:16:42.064 03:17:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:42.064 03:17:16 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:42.064 03:17:16 -- target/bdevio.sh@30 -- # nvmftestfini 00:16:42.064 03:17:16 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:42.064 03:17:16 -- nvmf/common.sh@117 -- # sync 00:16:42.064 03:17:16 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:42.064 03:17:16 -- nvmf/common.sh@120 -- # set +e 00:16:42.064 03:17:16 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:42.064 03:17:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:42.064 rmmod nvme_tcp 00:16:42.322 rmmod nvme_fabrics 00:16:42.322 rmmod nvme_keyring 00:16:42.322 03:17:16 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:42.322 03:17:16 -- nvmf/common.sh@124 -- # set -e 00:16:42.322 03:17:16 -- nvmf/common.sh@125 -- # return 0 00:16:42.322 03:17:16 -- nvmf/common.sh@478 -- # '[' -n 1501792 ']' 00:16:42.322 03:17:16 -- nvmf/common.sh@479 -- # killprocess 1501792 00:16:42.322 03:17:16 -- common/autotest_common.sh@936 -- # '[' -z 1501792 ']' 00:16:42.322 03:17:16 -- common/autotest_common.sh@940 -- # kill -0 1501792 00:16:42.322 03:17:16 -- common/autotest_common.sh@941 -- # uname 00:16:42.322 03:17:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:42.322 03:17:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1501792 00:16:42.322 03:17:16 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:16:42.322 03:17:16 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:16:42.322 03:17:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1501792' 00:16:42.322 killing process with pid 1501792 00:16:42.322 03:17:16 -- common/autotest_common.sh@955 -- # kill 1501792 00:16:42.322 03:17:16 -- 
common/autotest_common.sh@960 -- # wait 1501792 00:16:42.580 03:17:16 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:42.580 03:17:16 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:42.580 03:17:16 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:42.580 03:17:16 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:42.580 03:17:16 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:42.580 03:17:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:42.580 03:17:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:42.580 03:17:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:44.483 03:17:18 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:44.483 00:16:44.483 real 0m6.323s 00:16:44.483 user 0m10.016s 00:16:44.483 sys 0m2.069s 00:16:44.483 03:17:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:44.483 03:17:18 -- common/autotest_common.sh@10 -- # set +x 00:16:44.483 ************************************ 00:16:44.483 END TEST nvmf_bdevio 00:16:44.483 ************************************ 00:16:44.742 03:17:18 -- nvmf/nvmf.sh@58 -- # '[' tcp = tcp ']' 00:16:44.742 03:17:18 -- nvmf/nvmf.sh@59 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:44.742 03:17:18 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:16:44.742 03:17:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:44.742 03:17:18 -- common/autotest_common.sh@10 -- # set +x 00:16:44.742 ************************************ 00:16:44.742 START TEST nvmf_bdevio_no_huge 00:16:44.742 ************************************ 00:16:44.742 03:17:19 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:44.742 * Looking for test storage... 
00:16:44.742 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:44.742 03:17:19 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:44.742 03:17:19 -- nvmf/common.sh@7 -- # uname -s 00:16:44.742 03:17:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:44.742 03:17:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:44.742 03:17:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:44.742 03:17:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:44.742 03:17:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:44.742 03:17:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:44.743 03:17:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:44.743 03:17:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:44.743 03:17:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:44.743 03:17:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:44.743 03:17:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:44.743 03:17:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:44.743 03:17:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:44.743 03:17:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:44.743 03:17:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:44.743 03:17:19 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:44.743 03:17:19 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:44.743 03:17:19 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:44.743 03:17:19 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:44.743 03:17:19 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:44.743 03:17:19 -- paths/export.sh@2 -- 
# PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.743 03:17:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.743 03:17:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.743 03:17:19 -- paths/export.sh@5 -- # export PATH 00:16:44.743 03:17:19 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.743 03:17:19 -- nvmf/common.sh@47 -- # : 0 00:16:44.743 03:17:19 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:44.743 03:17:19 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:44.743 03:17:19 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:44.743 03:17:19 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:44.743 03:17:19 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:44.743 03:17:19 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:44.743 03:17:19 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:44.743 03:17:19 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:44.743 03:17:19 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:44.743 03:17:19 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:44.743 03:17:19 -- target/bdevio.sh@14 -- # nvmftestinit 00:16:44.743 03:17:19 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:44.743 03:17:19 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:44.743 03:17:19 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:44.743 03:17:19 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:44.743 03:17:19 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:44.743 03:17:19 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:44.743 03:17:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:44.743 03:17:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:44.743 03:17:19 -- 
nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:44.743 03:17:19 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:44.743 03:17:19 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:44.743 03:17:19 -- common/autotest_common.sh@10 -- # set +x 00:16:46.642 03:17:21 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:46.642 03:17:21 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:46.642 03:17:21 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:46.642 03:17:21 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:46.642 03:17:21 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:46.642 03:17:21 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:46.642 03:17:21 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:46.642 03:17:21 -- nvmf/common.sh@295 -- # net_devs=() 00:16:46.642 03:17:21 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:46.642 03:17:21 -- nvmf/common.sh@296 -- # e810=() 00:16:46.642 03:17:21 -- nvmf/common.sh@296 -- # local -ga e810 00:16:46.642 03:17:21 -- nvmf/common.sh@297 -- # x722=() 00:16:46.642 03:17:21 -- nvmf/common.sh@297 -- # local -ga x722 00:16:46.642 03:17:21 -- nvmf/common.sh@298 -- # mlx=() 00:16:46.642 03:17:21 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:46.642 03:17:21 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:46.642 03:17:21 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:46.642 03:17:21 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:46.642 03:17:21 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:46.642 03:17:21 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:46.642 03:17:21 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:46.642 03:17:21 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:46.642 03:17:21 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:46.642 03:17:21 -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:46.642 03:17:21 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:46.642 03:17:21 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:46.642 03:17:21 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:46.642 03:17:21 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:46.642 03:17:21 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:46.642 03:17:21 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:46.642 03:17:21 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:46.642 03:17:21 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:46.642 03:17:21 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:46.642 03:17:21 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:46.642 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:46.642 03:17:21 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:46.642 03:17:21 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:46.642 03:17:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:46.642 03:17:21 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:46.642 03:17:21 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:46.642 03:17:21 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:46.642 03:17:21 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:46.642 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:46.642 03:17:21 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:46.642 03:17:21 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:46.642 03:17:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:46.642 03:17:21 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:46.642 03:17:21 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:46.642 03:17:21 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:46.642 03:17:21 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:46.642 03:17:21 -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:46.642 03:17:21 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:46.642 03:17:21 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:46.642 03:17:21 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:46.642 03:17:21 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:46.642 03:17:21 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:46.642 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:46.642 03:17:21 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:46.642 03:17:21 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:46.642 03:17:21 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:46.642 03:17:21 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:46.642 03:17:21 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:46.642 03:17:21 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:46.642 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:46.642 03:17:21 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:46.642 03:17:21 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:46.642 03:17:21 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:46.642 03:17:21 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:46.642 03:17:21 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:46.642 03:17:21 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:46.642 03:17:21 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:46.642 03:17:21 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:46.642 03:17:21 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:46.642 03:17:21 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:46.642 03:17:21 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:46.642 03:17:21 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:46.642 03:17:21 -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:46.642 03:17:21 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:46.642 03:17:21 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:46.642 03:17:21 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:46.642 03:17:21 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:46.642 03:17:21 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:46.642 03:17:21 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:46.900 03:17:21 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:46.900 03:17:21 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:46.900 03:17:21 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:46.900 03:17:21 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:46.900 03:17:21 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:46.900 03:17:21 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:46.900 03:17:21 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:46.900 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:46.900 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.126 ms 00:16:46.900 00:16:46.900 --- 10.0.0.2 ping statistics --- 00:16:46.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:46.900 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:16:46.900 03:17:21 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:46.900 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:46.900 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:16:46.900 00:16:46.900 --- 10.0.0.1 ping statistics --- 00:16:46.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:46.900 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:16:46.900 03:17:21 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:46.900 03:17:21 -- nvmf/common.sh@411 -- # return 0 00:16:46.900 03:17:21 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:46.900 03:17:21 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:46.900 03:17:21 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:46.900 03:17:21 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:46.900 03:17:21 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:46.900 03:17:21 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:46.900 03:17:21 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:46.900 03:17:21 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:46.900 03:17:21 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:46.900 03:17:21 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:46.900 03:17:21 -- common/autotest_common.sh@10 -- # set +x 00:16:46.900 03:17:21 -- nvmf/common.sh@470 -- # nvmfpid=1503987 00:16:46.900 03:17:21 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:16:46.900 03:17:21 -- nvmf/common.sh@471 -- # waitforlisten 1503987 00:16:46.900 03:17:21 -- common/autotest_common.sh@817 -- # '[' -z 1503987 ']' 00:16:46.900 03:17:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:46.900 03:17:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:46.900 03:17:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:46.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:46.900 03:17:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:46.900 03:17:21 -- common/autotest_common.sh@10 -- # set +x 00:16:46.900 [2024-04-25 03:17:21.299180] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:16:46.900 [2024-04-25 03:17:21.299254] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:16:46.900 [2024-04-25 03:17:21.371006] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:47.157 [2024-04-25 03:17:21.490071] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:47.157 [2024-04-25 03:17:21.490126] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:47.157 [2024-04-25 03:17:21.490142] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:47.157 [2024-04-25 03:17:21.490155] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:47.157 [2024-04-25 03:17:21.490167] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:47.157 [2024-04-25 03:17:21.490242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:47.157 [2024-04-25 03:17:21.490296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:47.157 [2024-04-25 03:17:21.490350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:47.157 [2024-04-25 03:17:21.490353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:48.095 03:17:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:48.095 03:17:22 -- common/autotest_common.sh@850 -- # return 0 00:16:48.095 03:17:22 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:48.095 03:17:22 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:48.095 03:17:22 -- common/autotest_common.sh@10 -- # set +x 00:16:48.095 03:17:22 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:48.095 03:17:22 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:48.095 03:17:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:48.095 03:17:22 -- common/autotest_common.sh@10 -- # set +x 00:16:48.095 [2024-04-25 03:17:22.306439] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:48.095 03:17:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:48.095 03:17:22 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:48.095 03:17:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:48.095 03:17:22 -- common/autotest_common.sh@10 -- # set +x 00:16:48.095 Malloc0 00:16:48.095 03:17:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:48.095 03:17:22 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:48.095 03:17:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:48.095 03:17:22 -- common/autotest_common.sh@10 -- # set +x 00:16:48.095 03:17:22 -- common/autotest_common.sh@577 -- # [[ 0 
== 0 ]] 00:16:48.095 03:17:22 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:48.095 03:17:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:48.095 03:17:22 -- common/autotest_common.sh@10 -- # set +x 00:16:48.095 03:17:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:48.095 03:17:22 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:48.095 03:17:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:48.095 03:17:22 -- common/autotest_common.sh@10 -- # set +x 00:16:48.095 [2024-04-25 03:17:22.344684] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:48.095 03:17:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:48.095 03:17:22 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:16:48.095 03:17:22 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:48.095 03:17:22 -- nvmf/common.sh@521 -- # config=() 00:16:48.095 03:17:22 -- nvmf/common.sh@521 -- # local subsystem config 00:16:48.095 03:17:22 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:48.095 03:17:22 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:48.095 { 00:16:48.095 "params": { 00:16:48.095 "name": "Nvme$subsystem", 00:16:48.095 "trtype": "$TEST_TRANSPORT", 00:16:48.095 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:48.095 "adrfam": "ipv4", 00:16:48.095 "trsvcid": "$NVMF_PORT", 00:16:48.095 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:48.095 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:48.095 "hdgst": ${hdgst:-false}, 00:16:48.095 "ddgst": ${ddgst:-false} 00:16:48.095 }, 00:16:48.095 "method": "bdev_nvme_attach_controller" 00:16:48.095 } 00:16:48.095 EOF 00:16:48.095 )") 00:16:48.095 03:17:22 -- nvmf/common.sh@543 -- # cat 00:16:48.095 03:17:22 -- nvmf/common.sh@545 -- # jq 
. 00:16:48.095 03:17:22 -- nvmf/common.sh@546 -- # IFS=, 00:16:48.095 03:17:22 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:16:48.095 "params": { 00:16:48.095 "name": "Nvme1", 00:16:48.095 "trtype": "tcp", 00:16:48.095 "traddr": "10.0.0.2", 00:16:48.095 "adrfam": "ipv4", 00:16:48.095 "trsvcid": "4420", 00:16:48.095 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:48.095 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:48.095 "hdgst": false, 00:16:48.095 "ddgst": false 00:16:48.095 }, 00:16:48.095 "method": "bdev_nvme_attach_controller" 00:16:48.095 }' 00:16:48.095 [2024-04-25 03:17:22.388068] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:16:48.095 [2024-04-25 03:17:22.388142] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1504104 ] 00:16:48.095 [2024-04-25 03:17:22.457355] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:48.095 [2024-04-25 03:17:22.571382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:48.095 [2024-04-25 03:17:22.571432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:48.095 [2024-04-25 03:17:22.571436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:48.352 I/O targets: 00:16:48.352 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:48.352 00:16:48.352 00:16:48.352 CUnit - A unit testing framework for C - Version 2.1-3 00:16:48.352 http://cunit.sourceforge.net/ 00:16:48.352 00:16:48.352 00:16:48.352 Suite: bdevio tests on: Nvme1n1 00:16:48.353 Test: blockdev write read block ...passed 00:16:48.353 Test: blockdev write zeroes read block ...passed 00:16:48.353 Test: blockdev write zeroes read no split ...passed 00:16:48.610 Test: blockdev write zeroes read split ...passed 00:16:48.610 Test: blockdev write zeroes read split partial ...passed 00:16:48.610 Test: 
blockdev reset ...[2024-04-25 03:17:22.950123] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:48.610 [2024-04-25 03:17:22.950230] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c40f80 (9): Bad file descriptor 00:16:48.611 [2024-04-25 03:17:23.019110] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:48.611 passed 00:16:48.611 Test: blockdev write read 8 blocks ...passed 00:16:48.611 Test: blockdev write read size > 128k ...passed 00:16:48.611 Test: blockdev write read invalid size ...passed 00:16:48.611 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:48.611 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:48.611 Test: blockdev write read max offset ...passed 00:16:48.868 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:48.868 Test: blockdev writev readv 8 blocks ...passed 00:16:48.868 Test: blockdev writev readv 30 x 1block ...passed 00:16:48.868 Test: blockdev writev readv block ...passed 00:16:48.868 Test: blockdev writev readv size > 128k ...passed 00:16:48.868 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:48.868 Test: blockdev comparev and writev ...[2024-04-25 03:17:23.282061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:48.868 [2024-04-25 03:17:23.282103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:48.868 [2024-04-25 03:17:23.282127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:48.868 [2024-04-25 03:17:23.282144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:16:48.868 [2024-04-25 03:17:23.282568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:48.868 [2024-04-25 03:17:23.282592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:48.868 [2024-04-25 03:17:23.282614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:48.868 [2024-04-25 03:17:23.282637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:48.868 [2024-04-25 03:17:23.283063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:48.868 [2024-04-25 03:17:23.283086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:48.868 [2024-04-25 03:17:23.283107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:48.868 [2024-04-25 03:17:23.283124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:48.868 [2024-04-25 03:17:23.283558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:48.868 [2024-04-25 03:17:23.283582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:48.869 [2024-04-25 03:17:23.283602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:48.869 [2024-04-25 03:17:23.283618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:48.869 passed 00:16:48.869 Test: blockdev nvme passthru rw ...passed 00:16:48.869 Test: blockdev nvme passthru vendor specific ...[2024-04-25 03:17:23.368068] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:48.869 [2024-04-25 03:17:23.368096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:48.869 [2024-04-25 03:17:23.368385] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:48.869 [2024-04-25 03:17:23.368408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:48.869 [2024-04-25 03:17:23.368675] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:48.869 [2024-04-25 03:17:23.368700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:49.127 [2024-04-25 03:17:23.368958] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:49.127 [2024-04-25 03:17:23.368980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:49.127 passed 00:16:49.127 Test: blockdev nvme admin passthru ...passed 00:16:49.127 Test: blockdev copy ...passed 00:16:49.127 00:16:49.127 Run Summary: Type Total Ran Passed Failed Inactive 00:16:49.127 suites 1 1 n/a 0 0 00:16:49.127 tests 23 23 23 0 0 00:16:49.127 asserts 152 152 152 0 n/a 00:16:49.127 00:16:49.127 Elapsed time = 1.385 seconds 00:16:49.385 03:17:23 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:49.385 03:17:23 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:16:49.385 03:17:23 -- common/autotest_common.sh@10 -- # set +x 00:16:49.385 03:17:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:49.385 03:17:23 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:49.385 03:17:23 -- target/bdevio.sh@30 -- # nvmftestfini 00:16:49.385 03:17:23 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:49.385 03:17:23 -- nvmf/common.sh@117 -- # sync 00:16:49.385 03:17:23 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:49.385 03:17:23 -- nvmf/common.sh@120 -- # set +e 00:16:49.385 03:17:23 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:49.385 03:17:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:49.385 rmmod nvme_tcp 00:16:49.385 rmmod nvme_fabrics 00:16:49.385 rmmod nvme_keyring 00:16:49.385 03:17:23 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:49.385 03:17:23 -- nvmf/common.sh@124 -- # set -e 00:16:49.385 03:17:23 -- nvmf/common.sh@125 -- # return 0 00:16:49.385 03:17:23 -- nvmf/common.sh@478 -- # '[' -n 1503987 ']' 00:16:49.385 03:17:23 -- nvmf/common.sh@479 -- # killprocess 1503987 00:16:49.385 03:17:23 -- common/autotest_common.sh@936 -- # '[' -z 1503987 ']' 00:16:49.385 03:17:23 -- common/autotest_common.sh@940 -- # kill -0 1503987 00:16:49.385 03:17:23 -- common/autotest_common.sh@941 -- # uname 00:16:49.385 03:17:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:49.385 03:17:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1503987 00:16:49.644 03:17:23 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:16:49.644 03:17:23 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:16:49.644 03:17:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1503987' 00:16:49.644 killing process with pid 1503987 00:16:49.644 03:17:23 -- common/autotest_common.sh@955 -- # kill 1503987 00:16:49.644 03:17:23 -- common/autotest_common.sh@960 -- # wait 1503987 00:16:49.902 
03:17:24 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:49.902 03:17:24 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:49.902 03:17:24 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:49.902 03:17:24 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:49.902 03:17:24 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:49.902 03:17:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:49.902 03:17:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:49.902 03:17:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:52.492 03:17:26 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:52.492 00:16:52.492 real 0m7.282s 00:16:52.492 user 0m14.038s 00:16:52.492 sys 0m2.549s 00:16:52.492 03:17:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:52.492 03:17:26 -- common/autotest_common.sh@10 -- # set +x 00:16:52.492 ************************************ 00:16:52.492 END TEST nvmf_bdevio_no_huge 00:16:52.492 ************************************ 00:16:52.492 03:17:26 -- nvmf/nvmf.sh@60 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:52.492 03:17:26 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:52.492 03:17:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:52.492 03:17:26 -- common/autotest_common.sh@10 -- # set +x 00:16:52.492 ************************************ 00:16:52.492 START TEST nvmf_tls 00:16:52.492 ************************************ 00:16:52.492 03:17:26 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:52.492 * Looking for test storage... 
00:16:52.492 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:52.492 03:17:26 -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:52.492 03:17:26 -- nvmf/common.sh@7 -- # uname -s 00:16:52.492 03:17:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:52.492 03:17:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:52.492 03:17:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:52.492 03:17:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:52.492 03:17:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:52.492 03:17:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:52.492 03:17:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:52.492 03:17:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:52.492 03:17:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:52.492 03:17:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:52.492 03:17:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:52.492 03:17:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:52.492 03:17:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:52.492 03:17:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:52.492 03:17:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:52.492 03:17:26 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:52.492 03:17:26 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:52.492 03:17:26 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:52.492 03:17:26 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:52.492 03:17:26 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:52.492 03:17:26 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.492 03:17:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.492 03:17:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.492 03:17:26 -- paths/export.sh@5 -- # export PATH 00:16:52.492 03:17:26 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.492 03:17:26 -- nvmf/common.sh@47 -- # : 0 00:16:52.492 03:17:26 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:52.492 03:17:26 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:52.492 03:17:26 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:52.492 03:17:26 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:52.492 03:17:26 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:52.492 03:17:26 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:52.492 03:17:26 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:52.492 03:17:26 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:52.492 03:17:26 -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:52.492 03:17:26 -- target/tls.sh@62 -- # nvmftestinit 00:16:52.492 03:17:26 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:52.492 03:17:26 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:52.492 03:17:26 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:52.492 03:17:26 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:52.492 03:17:26 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:52.492 03:17:26 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:52.492 03:17:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:52.492 03:17:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:52.492 03:17:26 -- nvmf/common.sh@403 -- # [[ phy != virt 
]] 00:16:52.492 03:17:26 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:52.492 03:17:26 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:52.492 03:17:26 -- common/autotest_common.sh@10 -- # set +x 00:16:54.391 03:17:28 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:54.391 03:17:28 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:54.391 03:17:28 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:54.391 03:17:28 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:54.391 03:17:28 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:54.391 03:17:28 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:54.391 03:17:28 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:54.391 03:17:28 -- nvmf/common.sh@295 -- # net_devs=() 00:16:54.391 03:17:28 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:54.391 03:17:28 -- nvmf/common.sh@296 -- # e810=() 00:16:54.391 03:17:28 -- nvmf/common.sh@296 -- # local -ga e810 00:16:54.391 03:17:28 -- nvmf/common.sh@297 -- # x722=() 00:16:54.391 03:17:28 -- nvmf/common.sh@297 -- # local -ga x722 00:16:54.391 03:17:28 -- nvmf/common.sh@298 -- # mlx=() 00:16:54.391 03:17:28 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:54.391 03:17:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:54.391 03:17:28 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:54.391 03:17:28 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:54.391 03:17:28 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:54.391 03:17:28 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:54.391 03:17:28 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:54.391 03:17:28 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:54.391 03:17:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:54.391 03:17:28 -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:54.391 03:17:28 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:54.391 03:17:28 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:54.391 03:17:28 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:54.391 03:17:28 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:54.391 03:17:28 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:54.391 03:17:28 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:54.391 03:17:28 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:54.391 03:17:28 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:54.391 03:17:28 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:54.391 03:17:28 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:54.391 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:54.391 03:17:28 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:54.391 03:17:28 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:54.391 03:17:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:54.391 03:17:28 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:54.391 03:17:28 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:54.391 03:17:28 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:54.391 03:17:28 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:54.391 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:54.391 03:17:28 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:54.391 03:17:28 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:54.391 03:17:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:54.391 03:17:28 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:54.391 03:17:28 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:54.391 03:17:28 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:54.391 03:17:28 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:54.391 03:17:28 -- nvmf/common.sh@372 -- # [[ tcp == 
rdma ]] 00:16:54.391 03:17:28 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:54.391 03:17:28 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:54.391 03:17:28 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:54.391 03:17:28 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:54.391 03:17:28 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:54.391 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:54.391 03:17:28 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:54.391 03:17:28 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:54.391 03:17:28 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:54.392 03:17:28 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:54.392 03:17:28 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:54.392 03:17:28 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:54.392 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:54.392 03:17:28 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:54.392 03:17:28 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:54.392 03:17:28 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:54.392 03:17:28 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:54.392 03:17:28 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:54.392 03:17:28 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:54.392 03:17:28 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:54.392 03:17:28 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:54.392 03:17:28 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:54.392 03:17:28 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:54.392 03:17:28 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:54.392 03:17:28 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:54.392 03:17:28 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 
00:16:54.392 03:17:28 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:54.392 03:17:28 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:54.392 03:17:28 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:54.392 03:17:28 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:54.392 03:17:28 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:54.392 03:17:28 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:54.392 03:17:28 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:54.392 03:17:28 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:54.392 03:17:28 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:54.392 03:17:28 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:54.392 03:17:28 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:54.392 03:17:28 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:54.392 03:17:28 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:54.392 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:54.392 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:16:54.392 00:16:54.392 --- 10.0.0.2 ping statistics --- 00:16:54.392 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:54.392 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:16:54.392 03:17:28 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:54.392 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:54.392 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:16:54.392 00:16:54.392 --- 10.0.0.1 ping statistics --- 00:16:54.392 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:54.392 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:16:54.392 03:17:28 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:54.392 03:17:28 -- nvmf/common.sh@411 -- # return 0 00:16:54.392 03:17:28 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:54.392 03:17:28 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:54.392 03:17:28 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:54.392 03:17:28 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:54.392 03:17:28 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:54.392 03:17:28 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:54.392 03:17:28 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:54.392 03:17:28 -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:16:54.392 03:17:28 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:54.392 03:17:28 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:54.392 03:17:28 -- common/autotest_common.sh@10 -- # set +x 00:16:54.392 03:17:28 -- nvmf/common.sh@470 -- # nvmfpid=1506254 00:16:54.392 03:17:28 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:16:54.392 03:17:28 -- nvmf/common.sh@471 -- # waitforlisten 1506254 00:16:54.392 03:17:28 -- common/autotest_common.sh@817 -- # '[' -z 1506254 ']' 00:16:54.392 03:17:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:54.392 03:17:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:54.392 03:17:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:54.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:54.392 03:17:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:54.392 03:17:28 -- common/autotest_common.sh@10 -- # set +x 00:16:54.392 [2024-04-25 03:17:28.804522] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:16:54.392 [2024-04-25 03:17:28.804610] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:54.392 EAL: No free 2048 kB hugepages reported on node 1 00:16:54.392 [2024-04-25 03:17:28.869468] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:54.651 [2024-04-25 03:17:28.978670] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:54.651 [2024-04-25 03:17:28.978734] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:54.651 [2024-04-25 03:17:28.978762] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:54.651 [2024-04-25 03:17:28.978774] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:54.651 [2024-04-25 03:17:28.978783] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:54.651 [2024-04-25 03:17:28.978812] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:54.651 03:17:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:54.651 03:17:29 -- common/autotest_common.sh@850 -- # return 0 00:16:54.651 03:17:29 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:54.651 03:17:29 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:54.651 03:17:29 -- common/autotest_common.sh@10 -- # set +x 00:16:54.651 03:17:29 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:54.651 03:17:29 -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:16:54.651 03:17:29 -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:16:54.910 true 00:16:54.910 03:17:29 -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:54.910 03:17:29 -- target/tls.sh@73 -- # jq -r .tls_version 00:16:55.168 03:17:29 -- target/tls.sh@73 -- # version=0 00:16:55.168 03:17:29 -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:16:55.168 03:17:29 -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:55.427 03:17:29 -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:55.427 03:17:29 -- target/tls.sh@81 -- # jq -r .tls_version 00:16:55.686 03:17:30 -- target/tls.sh@81 -- # version=13 00:16:55.686 03:17:30 -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:16:55.686 03:17:30 -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:16:55.944 03:17:30 -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:55.944 03:17:30 -- target/tls.sh@89 -- # jq -r .tls_version 
00:16:56.203 03:17:30 -- target/tls.sh@89 -- # version=7 00:16:56.203 03:17:30 -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:16:56.203 03:17:30 -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:56.203 03:17:30 -- target/tls.sh@96 -- # jq -r .enable_ktls 00:16:56.461 03:17:30 -- target/tls.sh@96 -- # ktls=false 00:16:56.461 03:17:30 -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:16:56.461 03:17:30 -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:16:56.719 03:17:31 -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:56.719 03:17:31 -- target/tls.sh@104 -- # jq -r .enable_ktls 00:16:56.977 03:17:31 -- target/tls.sh@104 -- # ktls=true 00:16:56.977 03:17:31 -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:16:56.977 03:17:31 -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:16:57.235 03:17:31 -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:57.235 03:17:31 -- target/tls.sh@112 -- # jq -r .enable_ktls 00:16:57.494 03:17:31 -- target/tls.sh@112 -- # ktls=false 00:16:57.494 03:17:31 -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:16:57.494 03:17:31 -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:16:57.494 03:17:31 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:16:57.494 03:17:31 -- nvmf/common.sh@691 -- # local prefix key digest 00:16:57.494 03:17:31 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:16:57.494 03:17:31 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:16:57.494 03:17:31 -- nvmf/common.sh@693 -- # digest=1 00:16:57.494 03:17:31 -- nvmf/common.sh@694 -- # 
python - 00:16:57.494 03:17:31 -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:57.494 03:17:31 -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:16:57.494 03:17:31 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:16:57.494 03:17:31 -- nvmf/common.sh@691 -- # local prefix key digest 00:16:57.494 03:17:31 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:16:57.494 03:17:31 -- nvmf/common.sh@693 -- # key=ffeeddccbbaa99887766554433221100 00:16:57.494 03:17:31 -- nvmf/common.sh@693 -- # digest=1 00:16:57.494 03:17:31 -- nvmf/common.sh@694 -- # python - 00:16:57.494 03:17:31 -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:57.494 03:17:31 -- target/tls.sh@121 -- # mktemp 00:16:57.494 03:17:31 -- target/tls.sh@121 -- # key_path=/tmp/tmp.ZD9rHB5R25 00:16:57.494 03:17:31 -- target/tls.sh@122 -- # mktemp 00:16:57.494 03:17:31 -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.YkFNsSux2U 00:16:57.494 03:17:31 -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:57.494 03:17:31 -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:57.494 03:17:31 -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.ZD9rHB5R25 00:16:57.494 03:17:31 -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.YkFNsSux2U 00:16:57.494 03:17:31 -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:57.753 03:17:32 -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:16:58.011 03:17:32 -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.ZD9rHB5R25 00:16:58.011 03:17:32 -- target/tls.sh@49 -- # local key=/tmp/tmp.ZD9rHB5R25 00:16:58.011 03:17:32 -- target/tls.sh@51 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:58.270 [2024-04-25 03:17:32.694329] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:58.270 03:17:32 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:58.527 03:17:32 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:58.784 [2024-04-25 03:17:33.215708] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:58.784 [2024-04-25 03:17:33.215940] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:58.784 03:17:33 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:59.042 malloc0 00:16:59.042 03:17:33 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:59.300 03:17:33 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ZD9rHB5R25 00:16:59.557 [2024-04-25 03:17:34.002215] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:16:59.557 03:17:34 -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.ZD9rHB5R25 00:16:59.557 EAL: No free 2048 kB hugepages reported on node 1 00:17:11.754 
Initializing NVMe Controllers 00:17:11.754 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:11.754 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:11.754 Initialization complete. Launching workers. 00:17:11.754 ======================================================== 00:17:11.754 Latency(us) 00:17:11.754 Device Information : IOPS MiB/s Average min max 00:17:11.754 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7612.29 29.74 8409.89 1022.64 10126.29 00:17:11.754 ======================================================== 00:17:11.754 Total : 7612.29 29.74 8409.89 1022.64 10126.29 00:17:11.754 00:17:11.754 03:17:44 -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZD9rHB5R25 00:17:11.754 03:17:44 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:11.754 03:17:44 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:11.754 03:17:44 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:11.754 03:17:44 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ZD9rHB5R25' 00:17:11.754 03:17:44 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:11.754 03:17:44 -- target/tls.sh@28 -- # bdevperf_pid=1508143 00:17:11.754 03:17:44 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:11.754 03:17:44 -- target/tls.sh@31 -- # waitforlisten 1508143 /var/tmp/bdevperf.sock 00:17:11.754 03:17:44 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:11.754 03:17:44 -- common/autotest_common.sh@817 -- # '[' -z 1508143 ']' 00:17:11.754 03:17:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:11.754 03:17:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:11.754 03:17:44 -- common/autotest_common.sh@824 -- 
# echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:11.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:11.754 03:17:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:11.754 03:17:44 -- common/autotest_common.sh@10 -- # set +x 00:17:11.754 [2024-04-25 03:17:44.167428] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:17:11.754 [2024-04-25 03:17:44.167502] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1508143 ] 00:17:11.754 EAL: No free 2048 kB hugepages reported on node 1 00:17:11.754 [2024-04-25 03:17:44.224997] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.754 [2024-04-25 03:17:44.328973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:11.754 03:17:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:11.754 03:17:44 -- common/autotest_common.sh@850 -- # return 0 00:17:11.754 03:17:44 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ZD9rHB5R25 00:17:11.754 [2024-04-25 03:17:44.715085] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:11.754 [2024-04-25 03:17:44.715211] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:11.754 TLSTESTn1 00:17:11.754 03:17:44 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:11.754 
Running I/O for 10 seconds... 00:17:21.770 00:17:21.770 Latency(us) 00:17:21.770 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:21.770 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:21.770 Verification LBA range: start 0x0 length 0x2000 00:17:21.770 TLSTESTn1 : 10.08 1224.12 4.78 0.00 0.00 104224.55 10582.85 139033.41 00:17:21.770 =================================================================================================================== 00:17:21.770 Total : 1224.12 4.78 0.00 0.00 104224.55 10582.85 139033.41 00:17:21.770 0 00:17:21.770 03:17:55 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:21.770 03:17:55 -- target/tls.sh@45 -- # killprocess 1508143 00:17:21.770 03:17:55 -- common/autotest_common.sh@936 -- # '[' -z 1508143 ']' 00:17:21.770 03:17:55 -- common/autotest_common.sh@940 -- # kill -0 1508143 00:17:21.770 03:17:55 -- common/autotest_common.sh@941 -- # uname 00:17:21.770 03:17:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:21.770 03:17:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1508143 00:17:21.770 03:17:55 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:21.770 03:17:55 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:21.770 03:17:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1508143' 00:17:21.770 killing process with pid 1508143 00:17:21.770 03:17:55 -- common/autotest_common.sh@955 -- # kill 1508143 00:17:21.770 Received shutdown signal, test time was about 10.000000 seconds 00:17:21.770 00:17:21.770 Latency(us) 00:17:21.770 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:21.770 =================================================================================================================== 00:17:21.770 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:21.770 [2024-04-25 03:17:55.067726] app.c: 
937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:21.770 03:17:55 -- common/autotest_common.sh@960 -- # wait 1508143 00:17:21.770 03:17:55 -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.YkFNsSux2U 00:17:21.770 03:17:55 -- common/autotest_common.sh@638 -- # local es=0 00:17:21.770 03:17:55 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.YkFNsSux2U 00:17:21.770 03:17:55 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:17:21.770 03:17:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:21.770 03:17:55 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:17:21.770 03:17:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:21.770 03:17:55 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.YkFNsSux2U 00:17:21.770 03:17:55 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:21.770 03:17:55 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:21.770 03:17:55 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:21.770 03:17:55 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.YkFNsSux2U' 00:17:21.770 03:17:55 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:21.770 03:17:55 -- target/tls.sh@28 -- # bdevperf_pid=1509349 00:17:21.770 03:17:55 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:21.770 03:17:55 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:21.770 03:17:55 -- target/tls.sh@31 -- # waitforlisten 1509349 /var/tmp/bdevperf.sock 00:17:21.770 03:17:55 -- common/autotest_common.sh@817 -- # '[' -z 1509349 ']' 00:17:21.770 
03:17:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:21.770 03:17:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:21.770 03:17:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:21.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:21.770 03:17:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:21.770 03:17:55 -- common/autotest_common.sh@10 -- # set +x 00:17:21.770 [2024-04-25 03:17:55.371209] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:17:21.770 [2024-04-25 03:17:55.371295] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1509349 ] 00:17:21.770 EAL: No free 2048 kB hugepages reported on node 1 00:17:21.770 [2024-04-25 03:17:55.439874] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:21.770 [2024-04-25 03:17:55.547057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:21.770 03:17:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:21.771 03:17:55 -- common/autotest_common.sh@850 -- # return 0 00:17:21.771 03:17:55 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.YkFNsSux2U 00:17:21.771 [2024-04-25 03:17:55.928024] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:21.771 [2024-04-25 03:17:55.928183] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed 
in v24.09 00:17:21.771 [2024-04-25 03:17:55.937428] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:21.771 [2024-04-25 03:17:55.938191] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa5c40 (107): Transport endpoint is not connected 00:17:21.771 [2024-04-25 03:17:55.939167] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa5c40 (9): Bad file descriptor 00:17:21.771 [2024-04-25 03:17:55.940168] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:21.771 [2024-04-25 03:17:55.940189] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:21.771 [2024-04-25 03:17:55.940223] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:21.771 request: 00:17:21.771 { 00:17:21.771 "name": "TLSTEST", 00:17:21.771 "trtype": "tcp", 00:17:21.771 "traddr": "10.0.0.2", 00:17:21.771 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:21.771 "adrfam": "ipv4", 00:17:21.771 "trsvcid": "4420", 00:17:21.771 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:21.771 "psk": "/tmp/tmp.YkFNsSux2U", 00:17:21.771 "method": "bdev_nvme_attach_controller", 00:17:21.771 "req_id": 1 00:17:21.771 } 00:17:21.771 Got JSON-RPC error response 00:17:21.771 response: 00:17:21.771 { 00:17:21.771 "code": -32602, 00:17:21.771 "message": "Invalid parameters" 00:17:21.771 } 00:17:21.771 03:17:55 -- target/tls.sh@36 -- # killprocess 1509349 00:17:21.771 03:17:55 -- common/autotest_common.sh@936 -- # '[' -z 1509349 ']' 00:17:21.771 03:17:55 -- common/autotest_common.sh@940 -- # kill -0 1509349 00:17:21.771 03:17:55 -- common/autotest_common.sh@941 -- # uname 00:17:21.771 03:17:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:21.771 03:17:55 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1509349 00:17:21.771 03:17:55 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:21.771 03:17:55 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:21.771 03:17:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1509349' 00:17:21.771 killing process with pid 1509349 00:17:21.771 03:17:55 -- common/autotest_common.sh@955 -- # kill 1509349 00:17:21.771 Received shutdown signal, test time was about 10.000000 seconds 00:17:21.771 00:17:21.771 Latency(us) 00:17:21.771 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:21.771 =================================================================================================================== 00:17:21.771 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:21.771 [2024-04-25 03:17:55.993817] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:21.771 03:17:55 -- common/autotest_common.sh@960 -- # wait 1509349 00:17:21.771 03:17:56 -- target/tls.sh@37 -- # return 1 00:17:21.771 03:17:56 -- common/autotest_common.sh@641 -- # es=1 00:17:21.771 03:17:56 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:21.771 03:17:56 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:21.771 03:17:56 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:21.771 03:17:56 -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ZD9rHB5R25 00:17:21.771 03:17:56 -- common/autotest_common.sh@638 -- # local es=0 00:17:21.771 03:17:56 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ZD9rHB5R25 00:17:21.771 03:17:56 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:17:21.771 03:17:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 
00:17:21.771 03:17:56 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:17:21.771 03:17:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:21.771 03:17:56 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ZD9rHB5R25 00:17:21.771 03:17:56 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:21.771 03:17:56 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:21.771 03:17:56 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:17:21.771 03:17:56 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ZD9rHB5R25' 00:17:21.771 03:17:56 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:21.771 03:17:56 -- target/tls.sh@28 -- # bdevperf_pid=1509482 00:17:21.771 03:17:56 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:21.771 03:17:56 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:21.771 03:17:56 -- target/tls.sh@31 -- # waitforlisten 1509482 /var/tmp/bdevperf.sock 00:17:21.771 03:17:56 -- common/autotest_common.sh@817 -- # '[' -z 1509482 ']' 00:17:21.771 03:17:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:21.771 03:17:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:21.771 03:17:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:21.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:21.771 03:17:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:21.771 03:17:56 -- common/autotest_common.sh@10 -- # set +x 00:17:22.030 [2024-04-25 03:17:56.295754] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 
00:17:22.030 [2024-04-25 03:17:56.295834] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1509482 ] 00:17:22.030 EAL: No free 2048 kB hugepages reported on node 1 00:17:22.030 [2024-04-25 03:17:56.353288] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.030 [2024-04-25 03:17:56.454314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:22.288 03:17:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:22.288 03:17:56 -- common/autotest_common.sh@850 -- # return 0 00:17:22.288 03:17:56 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.ZD9rHB5R25 00:17:22.288 [2024-04-25 03:17:56.778768] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:22.288 [2024-04-25 03:17:56.778902] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:22.547 [2024-04-25 03:17:56.789569] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:22.547 [2024-04-25 03:17:56.789614] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:22.547 [2024-04-25 03:17:56.789684] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:22.547 [2024-04-25 03:17:56.789925] 
nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x873c40 (107): Transport endpoint is not connected 00:17:22.547 [2024-04-25 03:17:56.790932] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x873c40 (9): Bad file descriptor 00:17:22.547 [2024-04-25 03:17:56.791914] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:22.547 [2024-04-25 03:17:56.791934] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:22.547 [2024-04-25 03:17:56.791964] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:22.547 request: 00:17:22.547 { 00:17:22.547 "name": "TLSTEST", 00:17:22.547 "trtype": "tcp", 00:17:22.547 "traddr": "10.0.0.2", 00:17:22.547 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:22.547 "adrfam": "ipv4", 00:17:22.547 "trsvcid": "4420", 00:17:22.547 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:22.547 "psk": "/tmp/tmp.ZD9rHB5R25", 00:17:22.547 "method": "bdev_nvme_attach_controller", 00:17:22.547 "req_id": 1 00:17:22.547 } 00:17:22.547 Got JSON-RPC error response 00:17:22.547 response: 00:17:22.547 { 00:17:22.547 "code": -32602, 00:17:22.547 "message": "Invalid parameters" 00:17:22.547 } 00:17:22.547 03:17:56 -- target/tls.sh@36 -- # killprocess 1509482 00:17:22.547 03:17:56 -- common/autotest_common.sh@936 -- # '[' -z 1509482 ']' 00:17:22.547 03:17:56 -- common/autotest_common.sh@940 -- # kill -0 1509482 00:17:22.547 03:17:56 -- common/autotest_common.sh@941 -- # uname 00:17:22.547 03:17:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:22.547 03:17:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1509482 00:17:22.547 03:17:56 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:22.547 03:17:56 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:22.547 03:17:56 -- common/autotest_common.sh@954 -- # 
echo 'killing process with pid 1509482' 00:17:22.547 killing process with pid 1509482 00:17:22.547 03:17:56 -- common/autotest_common.sh@955 -- # kill 1509482 00:17:22.547 Received shutdown signal, test time was about 10.000000 seconds 00:17:22.547 00:17:22.547 Latency(us) 00:17:22.547 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:22.547 =================================================================================================================== 00:17:22.547 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:22.547 [2024-04-25 03:17:56.842148] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:22.547 03:17:56 -- common/autotest_common.sh@960 -- # wait 1509482 00:17:22.805 03:17:57 -- target/tls.sh@37 -- # return 1 00:17:22.805 03:17:57 -- common/autotest_common.sh@641 -- # es=1 00:17:22.805 03:17:57 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:22.805 03:17:57 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:22.805 03:17:57 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:22.805 03:17:57 -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZD9rHB5R25 00:17:22.805 03:17:57 -- common/autotest_common.sh@638 -- # local es=0 00:17:22.805 03:17:57 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZD9rHB5R25 00:17:22.805 03:17:57 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:17:22.805 03:17:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:22.805 03:17:57 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:17:22.805 03:17:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:22.805 03:17:57 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.ZD9rHB5R25 00:17:22.805 03:17:57 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:22.805 03:17:57 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:17:22.805 03:17:57 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:22.805 03:17:57 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ZD9rHB5R25' 00:17:22.805 03:17:57 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:22.805 03:17:57 -- target/tls.sh@28 -- # bdevperf_pid=1509622 00:17:22.805 03:17:57 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:22.805 03:17:57 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:22.805 03:17:57 -- target/tls.sh@31 -- # waitforlisten 1509622 /var/tmp/bdevperf.sock 00:17:22.805 03:17:57 -- common/autotest_common.sh@817 -- # '[' -z 1509622 ']' 00:17:22.805 03:17:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:22.805 03:17:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:22.805 03:17:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:22.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:22.805 03:17:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:22.805 03:17:57 -- common/autotest_common.sh@10 -- # set +x 00:17:22.805 [2024-04-25 03:17:57.144162] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 
00:17:22.805 [2024-04-25 03:17:57.144239] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1509622 ] 00:17:22.805 EAL: No free 2048 kB hugepages reported on node 1 00:17:22.805 [2024-04-25 03:17:57.202483] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:23.063 [2024-04-25 03:17:57.309527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:23.063 03:17:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:23.063 03:17:57 -- common/autotest_common.sh@850 -- # return 0 00:17:23.063 03:17:57 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ZD9rHB5R25 00:17:23.321 [2024-04-25 03:17:57.679319] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:23.321 [2024-04-25 03:17:57.679443] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:23.321 [2024-04-25 03:17:57.684872] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:23.321 [2024-04-25 03:17:57.684925] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:23.321 [2024-04-25 03:17:57.684964] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:23.321 [2024-04-25 03:17:57.685507] 
nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x195fc40 (107): Transport endpoint is not connected 00:17:23.321 [2024-04-25 03:17:57.686495] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x195fc40 (9): Bad file descriptor 00:17:23.321 [2024-04-25 03:17:57.687493] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:17:23.321 [2024-04-25 03:17:57.687514] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:23.321 [2024-04-25 03:17:57.687555] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:17:23.321 request: 00:17:23.321 { 00:17:23.321 "name": "TLSTEST", 00:17:23.321 "trtype": "tcp", 00:17:23.321 "traddr": "10.0.0.2", 00:17:23.321 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:23.321 "adrfam": "ipv4", 00:17:23.321 "trsvcid": "4420", 00:17:23.321 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:23.321 "psk": "/tmp/tmp.ZD9rHB5R25", 00:17:23.321 "method": "bdev_nvme_attach_controller", 00:17:23.321 "req_id": 1 00:17:23.321 } 00:17:23.321 Got JSON-RPC error response 00:17:23.321 response: 00:17:23.321 { 00:17:23.321 "code": -32602, 00:17:23.321 "message": "Invalid parameters" 00:17:23.321 } 00:17:23.321 03:17:57 -- target/tls.sh@36 -- # killprocess 1509622 00:17:23.321 03:17:57 -- common/autotest_common.sh@936 -- # '[' -z 1509622 ']' 00:17:23.321 03:17:57 -- common/autotest_common.sh@940 -- # kill -0 1509622 00:17:23.321 03:17:57 -- common/autotest_common.sh@941 -- # uname 00:17:23.321 03:17:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:23.321 03:17:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1509622 00:17:23.321 03:17:57 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:23.321 03:17:57 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:23.321 03:17:57 -- common/autotest_common.sh@954 -- # 
echo 'killing process with pid 1509622' 00:17:23.321 killing process with pid 1509622 00:17:23.321 03:17:57 -- common/autotest_common.sh@955 -- # kill 1509622 00:17:23.321 Received shutdown signal, test time was about 10.000000 seconds 00:17:23.321 00:17:23.321 Latency(us) 00:17:23.321 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:23.321 =================================================================================================================== 00:17:23.321 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:23.321 [2024-04-25 03:17:57.736086] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:23.321 03:17:57 -- common/autotest_common.sh@960 -- # wait 1509622 00:17:23.579 03:17:57 -- target/tls.sh@37 -- # return 1 00:17:23.579 03:17:57 -- common/autotest_common.sh@641 -- # es=1 00:17:23.579 03:17:57 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:23.579 03:17:57 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:23.579 03:17:57 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:23.579 03:17:57 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:23.579 03:17:57 -- common/autotest_common.sh@638 -- # local es=0 00:17:23.579 03:17:57 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:23.579 03:17:57 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:17:23.579 03:17:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:23.580 03:17:57 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:17:23.580 03:17:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:23.580 03:17:57 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:23.580 03:17:57 -- 
target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:23.580 03:17:57 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:23.580 03:17:57 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:23.580 03:17:57 -- target/tls.sh@23 -- # psk= 00:17:23.580 03:17:57 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:23.580 03:17:57 -- target/tls.sh@28 -- # bdevperf_pid=1509758 00:17:23.580 03:17:57 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:23.580 03:17:57 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:23.580 03:17:57 -- target/tls.sh@31 -- # waitforlisten 1509758 /var/tmp/bdevperf.sock 00:17:23.580 03:17:57 -- common/autotest_common.sh@817 -- # '[' -z 1509758 ']' 00:17:23.580 03:17:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:23.580 03:17:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:23.580 03:17:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:23.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:23.580 03:17:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:23.580 03:17:57 -- common/autotest_common.sh@10 -- # set +x 00:17:23.580 [2024-04-25 03:17:58.034438] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 
00:17:23.580 [2024-04-25 03:17:58.034521] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1509758 ] 00:17:23.580 EAL: No free 2048 kB hugepages reported on node 1 00:17:23.838 [2024-04-25 03:17:58.092374] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:23.838 [2024-04-25 03:17:58.195921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:23.838 03:17:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:23.838 03:17:58 -- common/autotest_common.sh@850 -- # return 0 00:17:23.838 03:17:58 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:24.096 [2024-04-25 03:17:58.574194] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:24.096 [2024-04-25 03:17:58.576133] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f3f690 (9): Bad file descriptor 00:17:24.096 [2024-04-25 03:17:58.577132] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:24.096 [2024-04-25 03:17:58.577152] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:24.096 [2024-04-25 03:17:58.577185] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:17:24.096 request: 00:17:24.096 { 00:17:24.096 "name": "TLSTEST", 00:17:24.096 "trtype": "tcp", 00:17:24.096 "traddr": "10.0.0.2", 00:17:24.096 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:24.096 "adrfam": "ipv4", 00:17:24.096 "trsvcid": "4420", 00:17:24.096 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:24.096 "method": "bdev_nvme_attach_controller", 00:17:24.096 "req_id": 1 00:17:24.096 } 00:17:24.096 Got JSON-RPC error response 00:17:24.096 response: 00:17:24.096 { 00:17:24.096 "code": -32602, 00:17:24.096 "message": "Invalid parameters" 00:17:24.096 } 00:17:24.096 03:17:58 -- target/tls.sh@36 -- # killprocess 1509758 00:17:24.096 03:17:58 -- common/autotest_common.sh@936 -- # '[' -z 1509758 ']' 00:17:24.096 03:17:58 -- common/autotest_common.sh@940 -- # kill -0 1509758 00:17:24.355 03:17:58 -- common/autotest_common.sh@941 -- # uname 00:17:24.355 03:17:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:24.355 03:17:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1509758 00:17:24.355 03:17:58 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:24.355 03:17:58 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:24.355 03:17:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1509758' 00:17:24.355 killing process with pid 1509758 00:17:24.355 03:17:58 -- common/autotest_common.sh@955 -- # kill 1509758 00:17:24.355 Received shutdown signal, test time was about 10.000000 seconds 00:17:24.355 00:17:24.355 Latency(us) 00:17:24.355 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:24.355 =================================================================================================================== 00:17:24.355 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:24.355 03:17:58 -- common/autotest_common.sh@960 -- # wait 1509758 00:17:24.613 03:17:58 -- target/tls.sh@37 -- # return 1 00:17:24.613 03:17:58 -- 
common/autotest_common.sh@641 -- # es=1 00:17:24.613 03:17:58 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:24.613 03:17:58 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:24.613 03:17:58 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:24.613 03:17:58 -- target/tls.sh@158 -- # killprocess 1506254 00:17:24.613 03:17:58 -- common/autotest_common.sh@936 -- # '[' -z 1506254 ']' 00:17:24.613 03:17:58 -- common/autotest_common.sh@940 -- # kill -0 1506254 00:17:24.613 03:17:58 -- common/autotest_common.sh@941 -- # uname 00:17:24.613 03:17:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:24.613 03:17:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1506254 00:17:24.613 03:17:58 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:24.613 03:17:58 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:24.613 03:17:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1506254' 00:17:24.613 killing process with pid 1506254 00:17:24.613 03:17:58 -- common/autotest_common.sh@955 -- # kill 1506254 00:17:24.613 [2024-04-25 03:17:58.890016] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:24.613 03:17:58 -- common/autotest_common.sh@960 -- # wait 1506254 00:17:24.871 03:17:59 -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:17:24.871 03:17:59 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:17:24.872 03:17:59 -- nvmf/common.sh@691 -- # local prefix key digest 00:17:24.872 03:17:59 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:17:24.872 03:17:59 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:17:24.872 03:17:59 -- nvmf/common.sh@693 -- # digest=2 00:17:24.872 03:17:59 -- nvmf/common.sh@694 -- # python - 00:17:24.872 03:17:59 -- 
target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:24.872 03:17:59 -- target/tls.sh@160 -- # mktemp 00:17:24.872 03:17:59 -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.zgtd0avrUE 00:17:24.872 03:17:59 -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:24.872 03:17:59 -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.zgtd0avrUE 00:17:24.872 03:17:59 -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:17:24.872 03:17:59 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:24.872 03:17:59 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:24.872 03:17:59 -- common/autotest_common.sh@10 -- # set +x 00:17:24.872 03:17:59 -- nvmf/common.sh@470 -- # nvmfpid=1509910 00:17:24.872 03:17:59 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:24.872 03:17:59 -- nvmf/common.sh@471 -- # waitforlisten 1509910 00:17:24.872 03:17:59 -- common/autotest_common.sh@817 -- # '[' -z 1509910 ']' 00:17:24.872 03:17:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:24.872 03:17:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:24.872 03:17:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:24.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:24.872 03:17:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:24.872 03:17:59 -- common/autotest_common.sh@10 -- # set +x 00:17:24.872 [2024-04-25 03:17:59.264322] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 
00:17:24.872 [2024-04-25 03:17:59.264419] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:24.872 EAL: No free 2048 kB hugepages reported on node 1 00:17:24.872 [2024-04-25 03:17:59.332072] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:25.130 [2024-04-25 03:17:59.444346] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:25.130 [2024-04-25 03:17:59.444426] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:25.130 [2024-04-25 03:17:59.444442] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:25.130 [2024-04-25 03:17:59.444456] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:25.130 [2024-04-25 03:17:59.444469] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:25.130 [2024-04-25 03:17:59.444505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:26.062 03:18:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:26.062 03:18:00 -- common/autotest_common.sh@850 -- # return 0 00:17:26.062 03:18:00 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:26.062 03:18:00 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:26.062 03:18:00 -- common/autotest_common.sh@10 -- # set +x 00:17:26.062 03:18:00 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:26.062 03:18:00 -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.zgtd0avrUE 00:17:26.062 03:18:00 -- target/tls.sh@49 -- # local key=/tmp/tmp.zgtd0avrUE 00:17:26.062 03:18:00 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:26.062 [2024-04-25 03:18:00.464989] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:26.062 03:18:00 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:26.320 03:18:00 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:26.577 [2024-04-25 03:18:00.946275] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:26.577 [2024-04-25 03:18:00.946523] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:26.577 03:18:00 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:26.835 malloc0 00:17:26.835 03:18:01 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 
00:17:27.092 03:18:01 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.zgtd0avrUE 00:17:27.350 [2024-04-25 03:18:01.703886] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:27.350 03:18:01 -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zgtd0avrUE 00:17:27.350 03:18:01 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:27.350 03:18:01 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:27.350 03:18:01 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:27.350 03:18:01 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.zgtd0avrUE' 00:17:27.350 03:18:01 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:27.350 03:18:01 -- target/tls.sh@28 -- # bdevperf_pid=1510307 00:17:27.350 03:18:01 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:27.350 03:18:01 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:27.350 03:18:01 -- target/tls.sh@31 -- # waitforlisten 1510307 /var/tmp/bdevperf.sock 00:17:27.350 03:18:01 -- common/autotest_common.sh@817 -- # '[' -z 1510307 ']' 00:17:27.350 03:18:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:27.350 03:18:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:27.350 03:18:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:27.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:17:27.350 03:18:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:27.350 03:18:01 -- common/autotest_common.sh@10 -- # set +x 00:17:27.350 [2024-04-25 03:18:01.765807] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:17:27.350 [2024-04-25 03:18:01.765877] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1510307 ] 00:17:27.350 EAL: No free 2048 kB hugepages reported on node 1 00:17:27.350 [2024-04-25 03:18:01.829733] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:27.607 [2024-04-25 03:18:01.934906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:27.607 03:18:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:27.607 03:18:02 -- common/autotest_common.sh@850 -- # return 0 00:17:27.607 03:18:02 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.zgtd0avrUE 00:17:27.865 [2024-04-25 03:18:02.273678] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:27.865 [2024-04-25 03:18:02.273800] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:27.865 TLSTESTn1 00:17:28.122 03:18:02 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:28.122 Running I/O for 10 seconds... 
00:17:40.314 00:17:40.314 Latency(us) 00:17:40.314 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:40.314 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:40.314 Verification LBA range: start 0x0 length 0x2000 00:17:40.314 TLSTESTn1 : 10.12 855.92 3.34 0.00 0.00 148908.90 6310.87 193404.02 00:17:40.314 =================================================================================================================== 00:17:40.314 Total : 855.92 3.34 0.00 0.00 148908.90 6310.87 193404.02 00:17:40.314 0 00:17:40.314 03:18:12 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:40.314 03:18:12 -- target/tls.sh@45 -- # killprocess 1510307 00:17:40.314 03:18:12 -- common/autotest_common.sh@936 -- # '[' -z 1510307 ']' 00:17:40.314 03:18:12 -- common/autotest_common.sh@940 -- # kill -0 1510307 00:17:40.314 03:18:12 -- common/autotest_common.sh@941 -- # uname 00:17:40.314 03:18:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:40.314 03:18:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1510307 00:17:40.314 03:18:12 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:40.314 03:18:12 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:40.314 03:18:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1510307' 00:17:40.314 killing process with pid 1510307 00:17:40.314 03:18:12 -- common/autotest_common.sh@955 -- # kill 1510307 00:17:40.314 Received shutdown signal, test time was about 10.000000 seconds 00:17:40.314 00:17:40.314 Latency(us) 00:17:40.314 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:40.314 =================================================================================================================== 00:17:40.314 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:40.314 [2024-04-25 03:18:12.663716] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: 
deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:40.314 03:18:12 -- common/autotest_common.sh@960 -- # wait 1510307 00:17:40.314 03:18:12 -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.zgtd0avrUE 00:17:40.314 03:18:12 -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zgtd0avrUE 00:17:40.314 03:18:12 -- common/autotest_common.sh@638 -- # local es=0 00:17:40.314 03:18:12 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zgtd0avrUE 00:17:40.314 03:18:12 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:17:40.314 03:18:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:40.314 03:18:12 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:17:40.314 03:18:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:40.314 03:18:12 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zgtd0avrUE 00:17:40.314 03:18:12 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:40.314 03:18:12 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:40.314 03:18:12 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:40.314 03:18:12 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.zgtd0avrUE' 00:17:40.314 03:18:12 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:40.314 03:18:12 -- target/tls.sh@28 -- # bdevperf_pid=1512134 00:17:40.314 03:18:12 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:40.314 03:18:12 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:40.314 03:18:12 -- target/tls.sh@31 -- # waitforlisten 1512134 /var/tmp/bdevperf.sock 00:17:40.314 03:18:12 -- common/autotest_common.sh@817 -- # '[' -z 
1512134 ']' 00:17:40.314 03:18:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:40.314 03:18:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:40.314 03:18:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:40.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:40.314 03:18:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:40.314 03:18:12 -- common/autotest_common.sh@10 -- # set +x 00:17:40.314 [2024-04-25 03:18:12.973178] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:17:40.314 [2024-04-25 03:18:12.973258] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1512134 ] 00:17:40.314 EAL: No free 2048 kB hugepages reported on node 1 00:17:40.314 [2024-04-25 03:18:13.033797] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:40.314 [2024-04-25 03:18:13.138983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:40.314 03:18:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:40.314 03:18:13 -- common/autotest_common.sh@850 -- # return 0 00:17:40.314 03:18:13 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.zgtd0avrUE 00:17:40.314 [2024-04-25 03:18:13.460064] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:40.314 [2024-04-25 03:18:13.460139] bdev_nvme.c:6067:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:40.314 [2024-04-25 
03:18:13.460154] bdev_nvme.c:6176:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.zgtd0avrUE 00:17:40.314 request: 00:17:40.314 { 00:17:40.314 "name": "TLSTEST", 00:17:40.314 "trtype": "tcp", 00:17:40.314 "traddr": "10.0.0.2", 00:17:40.314 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:40.314 "adrfam": "ipv4", 00:17:40.314 "trsvcid": "4420", 00:17:40.314 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:40.314 "psk": "/tmp/tmp.zgtd0avrUE", 00:17:40.314 "method": "bdev_nvme_attach_controller", 00:17:40.314 "req_id": 1 00:17:40.314 } 00:17:40.314 Got JSON-RPC error response 00:17:40.314 response: 00:17:40.314 { 00:17:40.314 "code": -1, 00:17:40.314 "message": "Operation not permitted" 00:17:40.314 } 00:17:40.314 03:18:13 -- target/tls.sh@36 -- # killprocess 1512134 00:17:40.314 03:18:13 -- common/autotest_common.sh@936 -- # '[' -z 1512134 ']' 00:17:40.314 03:18:13 -- common/autotest_common.sh@940 -- # kill -0 1512134 00:17:40.314 03:18:13 -- common/autotest_common.sh@941 -- # uname 00:17:40.314 03:18:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:40.314 03:18:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1512134 00:17:40.314 03:18:13 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:40.314 03:18:13 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:40.314 03:18:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1512134' 00:17:40.314 killing process with pid 1512134 00:17:40.314 03:18:13 -- common/autotest_common.sh@955 -- # kill 1512134 00:17:40.314 Received shutdown signal, test time was about 10.000000 seconds 00:17:40.314 00:17:40.314 Latency(us) 00:17:40.314 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:40.314 =================================================================================================================== 00:17:40.314 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:40.314 03:18:13 -- 
common/autotest_common.sh@960 -- # wait 1512134 00:17:40.314 03:18:13 -- target/tls.sh@37 -- # return 1 00:17:40.314 03:18:13 -- common/autotest_common.sh@641 -- # es=1 00:17:40.314 03:18:13 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:40.314 03:18:13 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:40.314 03:18:13 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:40.314 03:18:13 -- target/tls.sh@174 -- # killprocess 1509910 00:17:40.314 03:18:13 -- common/autotest_common.sh@936 -- # '[' -z 1509910 ']' 00:17:40.314 03:18:13 -- common/autotest_common.sh@940 -- # kill -0 1509910 00:17:40.315 03:18:13 -- common/autotest_common.sh@941 -- # uname 00:17:40.315 03:18:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:40.315 03:18:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1509910 00:17:40.315 03:18:13 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:40.315 03:18:13 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:40.315 03:18:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1509910' 00:17:40.315 killing process with pid 1509910 00:17:40.315 03:18:13 -- common/autotest_common.sh@955 -- # kill 1509910 00:17:40.315 [2024-04-25 03:18:13.792847] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:40.315 03:18:13 -- common/autotest_common.sh@960 -- # wait 1509910 00:17:40.315 03:18:14 -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:17:40.315 03:18:14 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:40.315 03:18:14 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:40.315 03:18:14 -- common/autotest_common.sh@10 -- # set +x 00:17:40.315 03:18:14 -- nvmf/common.sh@470 -- # nvmfpid=1512284 00:17:40.315 03:18:14 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF -m 0x2 00:17:40.315 03:18:14 -- nvmf/common.sh@471 -- # waitforlisten 1512284 00:17:40.315 03:18:14 -- common/autotest_common.sh@817 -- # '[' -z 1512284 ']' 00:17:40.315 03:18:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:40.315 03:18:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:40.315 03:18:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:40.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:40.315 03:18:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:40.315 03:18:14 -- common/autotest_common.sh@10 -- # set +x 00:17:40.315 [2024-04-25 03:18:14.141440] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:17:40.315 [2024-04-25 03:18:14.141523] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:40.315 EAL: No free 2048 kB hugepages reported on node 1 00:17:40.315 [2024-04-25 03:18:14.213772] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:40.315 [2024-04-25 03:18:14.322189] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:40.315 [2024-04-25 03:18:14.322258] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:40.315 [2024-04-25 03:18:14.322288] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:40.315 [2024-04-25 03:18:14.322300] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:40.315 [2024-04-25 03:18:14.322310] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:40.315 [2024-04-25 03:18:14.322346] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:40.881 03:18:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:40.881 03:18:15 -- common/autotest_common.sh@850 -- # return 0 00:17:40.881 03:18:15 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:40.881 03:18:15 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:40.881 03:18:15 -- common/autotest_common.sh@10 -- # set +x 00:17:40.881 03:18:15 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:40.881 03:18:15 -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.zgtd0avrUE 00:17:40.881 03:18:15 -- common/autotest_common.sh@638 -- # local es=0 00:17:40.881 03:18:15 -- common/autotest_common.sh@640 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.zgtd0avrUE 00:17:40.881 03:18:15 -- common/autotest_common.sh@626 -- # local arg=setup_nvmf_tgt 00:17:40.881 03:18:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:40.881 03:18:15 -- common/autotest_common.sh@630 -- # type -t setup_nvmf_tgt 00:17:40.881 03:18:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:40.881 03:18:15 -- common/autotest_common.sh@641 -- # setup_nvmf_tgt /tmp/tmp.zgtd0avrUE 00:17:40.881 03:18:15 -- target/tls.sh@49 -- # local key=/tmp/tmp.zgtd0avrUE 00:17:40.881 03:18:15 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:40.881 [2024-04-25 03:18:15.362964] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:41.164 03:18:15 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:41.164 03:18:15 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4420 -k 00:17:41.421 [2024-04-25 03:18:15.888345] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:41.421 [2024-04-25 03:18:15.888592] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:41.421 03:18:15 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:41.986 malloc0 00:17:41.986 03:18:16 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:41.986 03:18:16 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.zgtd0avrUE 00:17:42.242 [2024-04-25 03:18:16.687087] tcp.c:3562:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:42.242 [2024-04-25 03:18:16.687129] tcp.c:3648:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:17:42.242 [2024-04-25 03:18:16.687160] subsystem.c: 971:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:17:42.243 request: 00:17:42.243 { 00:17:42.243 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:42.243 "host": "nqn.2016-06.io.spdk:host1", 00:17:42.243 "psk": "/tmp/tmp.zgtd0avrUE", 00:17:42.243 "method": "nvmf_subsystem_add_host", 00:17:42.243 "req_id": 1 00:17:42.243 } 00:17:42.243 Got JSON-RPC error response 00:17:42.243 response: 00:17:42.243 { 00:17:42.243 "code": -32603, 00:17:42.243 "message": "Internal error" 00:17:42.243 } 00:17:42.243 03:18:16 -- common/autotest_common.sh@641 -- # es=1 00:17:42.243 03:18:16 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:42.243 03:18:16 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:42.243 03:18:16 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:42.243 03:18:16 -- target/tls.sh@180 -- # killprocess 1512284 
00:17:42.243 03:18:16 -- common/autotest_common.sh@936 -- # '[' -z 1512284 ']' 00:17:42.243 03:18:16 -- common/autotest_common.sh@940 -- # kill -0 1512284 00:17:42.243 03:18:16 -- common/autotest_common.sh@941 -- # uname 00:17:42.243 03:18:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:42.243 03:18:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1512284 00:17:42.243 03:18:16 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:42.243 03:18:16 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:42.243 03:18:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1512284' 00:17:42.243 killing process with pid 1512284 00:17:42.243 03:18:16 -- common/autotest_common.sh@955 -- # kill 1512284 00:17:42.243 03:18:16 -- common/autotest_common.sh@960 -- # wait 1512284 00:17:42.530 03:18:17 -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.zgtd0avrUE 00:17:42.530 03:18:17 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:17:42.530 03:18:17 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:42.530 03:18:17 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:42.530 03:18:17 -- common/autotest_common.sh@10 -- # set +x 00:17:42.530 03:18:17 -- nvmf/common.sh@470 -- # nvmfpid=1512707 00:17:42.530 03:18:17 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:42.530 03:18:17 -- nvmf/common.sh@471 -- # waitforlisten 1512707 00:17:42.530 03:18:17 -- common/autotest_common.sh@817 -- # '[' -z 1512707 ']' 00:17:42.530 03:18:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:42.530 03:18:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:42.530 03:18:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:42.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:42.530 03:18:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:42.530 03:18:17 -- common/autotest_common.sh@10 -- # set +x 00:17:42.789 [2024-04-25 03:18:17.049448] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:17:42.789 [2024-04-25 03:18:17.049534] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:42.789 EAL: No free 2048 kB hugepages reported on node 1 00:17:42.789 [2024-04-25 03:18:17.111431] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:42.789 [2024-04-25 03:18:17.217702] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:42.789 [2024-04-25 03:18:17.217760] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:42.789 [2024-04-25 03:18:17.217789] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:42.789 [2024-04-25 03:18:17.217800] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:42.789 [2024-04-25 03:18:17.217810] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:42.789 [2024-04-25 03:18:17.217843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:43.046 03:18:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:43.046 03:18:17 -- common/autotest_common.sh@850 -- # return 0 00:17:43.046 03:18:17 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:43.046 03:18:17 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:43.046 03:18:17 -- common/autotest_common.sh@10 -- # set +x 00:17:43.046 03:18:17 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:43.046 03:18:17 -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.zgtd0avrUE 00:17:43.046 03:18:17 -- target/tls.sh@49 -- # local key=/tmp/tmp.zgtd0avrUE 00:17:43.046 03:18:17 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:43.304 [2024-04-25 03:18:17.616436] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:43.304 03:18:17 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:43.562 03:18:17 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:43.818 [2024-04-25 03:18:18.141871] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:43.819 [2024-04-25 03:18:18.142113] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:43.819 03:18:18 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:44.082 malloc0 00:17:44.083 03:18:18 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 
00:17:44.348 03:18:18 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.zgtd0avrUE 00:17:44.605 [2024-04-25 03:18:18.942912] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:44.605 03:18:18 -- target/tls.sh@188 -- # bdevperf_pid=1512877 00:17:44.605 03:18:18 -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:44.605 03:18:18 -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:44.605 03:18:18 -- target/tls.sh@191 -- # waitforlisten 1512877 /var/tmp/bdevperf.sock 00:17:44.605 03:18:18 -- common/autotest_common.sh@817 -- # '[' -z 1512877 ']' 00:17:44.605 03:18:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:44.605 03:18:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:44.605 03:18:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:44.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:44.605 03:18:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:44.605 03:18:18 -- common/autotest_common.sh@10 -- # set +x 00:17:44.605 [2024-04-25 03:18:19.000753] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 
00:17:44.605 [2024-04-25 03:18:19.000827] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1512877 ] 00:17:44.605 EAL: No free 2048 kB hugepages reported on node 1 00:17:44.605 [2024-04-25 03:18:19.057292] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.863 [2024-04-25 03:18:19.163303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:44.863 03:18:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:44.863 03:18:19 -- common/autotest_common.sh@850 -- # return 0 00:17:44.863 03:18:19 -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.zgtd0avrUE 00:17:45.120 [2024-04-25 03:18:19.541713] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:45.120 [2024-04-25 03:18:19.541850] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:45.120 TLSTESTn1 00:17:45.375 03:18:19 -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:17:45.633 03:18:19 -- target/tls.sh@196 -- # tgtconf='{ 00:17:45.633 "subsystems": [ 00:17:45.633 { 00:17:45.633 "subsystem": "keyring", 00:17:45.633 "config": [] 00:17:45.633 }, 00:17:45.633 { 00:17:45.633 "subsystem": "iobuf", 00:17:45.633 "config": [ 00:17:45.633 { 00:17:45.633 "method": "iobuf_set_options", 00:17:45.633 "params": { 00:17:45.633 "small_pool_count": 8192, 00:17:45.633 "large_pool_count": 1024, 00:17:45.633 "small_bufsize": 8192, 00:17:45.633 "large_bufsize": 135168 00:17:45.633 } 00:17:45.633 } 
00:17:45.633 ] 00:17:45.633 }, 00:17:45.633 { 00:17:45.633 "subsystem": "sock", 00:17:45.633 "config": [ 00:17:45.633 { 00:17:45.633 "method": "sock_impl_set_options", 00:17:45.633 "params": { 00:17:45.633 "impl_name": "posix", 00:17:45.633 "recv_buf_size": 2097152, 00:17:45.633 "send_buf_size": 2097152, 00:17:45.633 "enable_recv_pipe": true, 00:17:45.633 "enable_quickack": false, 00:17:45.633 "enable_placement_id": 0, 00:17:45.633 "enable_zerocopy_send_server": true, 00:17:45.633 "enable_zerocopy_send_client": false, 00:17:45.633 "zerocopy_threshold": 0, 00:17:45.633 "tls_version": 0, 00:17:45.633 "enable_ktls": false 00:17:45.633 } 00:17:45.633 }, 00:17:45.633 { 00:17:45.633 "method": "sock_impl_set_options", 00:17:45.633 "params": { 00:17:45.633 "impl_name": "ssl", 00:17:45.633 "recv_buf_size": 4096, 00:17:45.633 "send_buf_size": 4096, 00:17:45.633 "enable_recv_pipe": true, 00:17:45.633 "enable_quickack": false, 00:17:45.633 "enable_placement_id": 0, 00:17:45.633 "enable_zerocopy_send_server": true, 00:17:45.633 "enable_zerocopy_send_client": false, 00:17:45.633 "zerocopy_threshold": 0, 00:17:45.633 "tls_version": 0, 00:17:45.633 "enable_ktls": false 00:17:45.634 } 00:17:45.634 } 00:17:45.634 ] 00:17:45.634 }, 00:17:45.634 { 00:17:45.634 "subsystem": "vmd", 00:17:45.634 "config": [] 00:17:45.634 }, 00:17:45.634 { 00:17:45.634 "subsystem": "accel", 00:17:45.634 "config": [ 00:17:45.634 { 00:17:45.634 "method": "accel_set_options", 00:17:45.634 "params": { 00:17:45.634 "small_cache_size": 128, 00:17:45.634 "large_cache_size": 16, 00:17:45.634 "task_count": 2048, 00:17:45.634 "sequence_count": 2048, 00:17:45.634 "buf_count": 2048 00:17:45.634 } 00:17:45.634 } 00:17:45.634 ] 00:17:45.634 }, 00:17:45.634 { 00:17:45.634 "subsystem": "bdev", 00:17:45.634 "config": [ 00:17:45.634 { 00:17:45.634 "method": "bdev_set_options", 00:17:45.634 "params": { 00:17:45.634 "bdev_io_pool_size": 65535, 00:17:45.634 "bdev_io_cache_size": 256, 00:17:45.634 "bdev_auto_examine": true, 
00:17:45.634 "iobuf_small_cache_size": 128, 00:17:45.634 "iobuf_large_cache_size": 16 00:17:45.634 } 00:17:45.634 }, 00:17:45.634 { 00:17:45.634 "method": "bdev_raid_set_options", 00:17:45.634 "params": { 00:17:45.634 "process_window_size_kb": 1024 00:17:45.634 } 00:17:45.634 }, 00:17:45.634 { 00:17:45.634 "method": "bdev_iscsi_set_options", 00:17:45.634 "params": { 00:17:45.634 "timeout_sec": 30 00:17:45.634 } 00:17:45.634 }, 00:17:45.634 { 00:17:45.634 "method": "bdev_nvme_set_options", 00:17:45.634 "params": { 00:17:45.634 "action_on_timeout": "none", 00:17:45.634 "timeout_us": 0, 00:17:45.634 "timeout_admin_us": 0, 00:17:45.634 "keep_alive_timeout_ms": 10000, 00:17:45.634 "arbitration_burst": 0, 00:17:45.634 "low_priority_weight": 0, 00:17:45.634 "medium_priority_weight": 0, 00:17:45.634 "high_priority_weight": 0, 00:17:45.634 "nvme_adminq_poll_period_us": 10000, 00:17:45.634 "nvme_ioq_poll_period_us": 0, 00:17:45.634 "io_queue_requests": 0, 00:17:45.634 "delay_cmd_submit": true, 00:17:45.634 "transport_retry_count": 4, 00:17:45.634 "bdev_retry_count": 3, 00:17:45.634 "transport_ack_timeout": 0, 00:17:45.634 "ctrlr_loss_timeout_sec": 0, 00:17:45.634 "reconnect_delay_sec": 0, 00:17:45.634 "fast_io_fail_timeout_sec": 0, 00:17:45.634 "disable_auto_failback": false, 00:17:45.634 "generate_uuids": false, 00:17:45.634 "transport_tos": 0, 00:17:45.634 "nvme_error_stat": false, 00:17:45.634 "rdma_srq_size": 0, 00:17:45.634 "io_path_stat": false, 00:17:45.634 "allow_accel_sequence": false, 00:17:45.634 "rdma_max_cq_size": 0, 00:17:45.634 "rdma_cm_event_timeout_ms": 0, 00:17:45.634 "dhchap_digests": [ 00:17:45.634 "sha256", 00:17:45.634 "sha384", 00:17:45.634 "sha512" 00:17:45.634 ], 00:17:45.634 "dhchap_dhgroups": [ 00:17:45.634 "null", 00:17:45.634 "ffdhe2048", 00:17:45.634 "ffdhe3072", 00:17:45.634 "ffdhe4096", 00:17:45.634 "ffdhe6144", 00:17:45.634 "ffdhe8192" 00:17:45.634 ] 00:17:45.634 } 00:17:45.634 }, 00:17:45.634 { 00:17:45.634 "method": "bdev_nvme_set_hotplug", 
00:17:45.634 "params": { 00:17:45.634 "period_us": 100000, 00:17:45.634 "enable": false 00:17:45.634 } 00:17:45.634 }, 00:17:45.634 { 00:17:45.634 "method": "bdev_malloc_create", 00:17:45.634 "params": { 00:17:45.634 "name": "malloc0", 00:17:45.634 "num_blocks": 8192, 00:17:45.634 "block_size": 4096, 00:17:45.634 "physical_block_size": 4096, 00:17:45.634 "uuid": "28591c55-086d-4203-8ecf-d58a3b352ae0", 00:17:45.634 "optimal_io_boundary": 0 00:17:45.634 } 00:17:45.634 }, 00:17:45.634 { 00:17:45.634 "method": "bdev_wait_for_examine" 00:17:45.634 } 00:17:45.634 ] 00:17:45.634 }, 00:17:45.634 { 00:17:45.634 "subsystem": "nbd", 00:17:45.634 "config": [] 00:17:45.634 }, 00:17:45.634 { 00:17:45.634 "subsystem": "scheduler", 00:17:45.634 "config": [ 00:17:45.634 { 00:17:45.634 "method": "framework_set_scheduler", 00:17:45.634 "params": { 00:17:45.634 "name": "static" 00:17:45.634 } 00:17:45.634 } 00:17:45.634 ] 00:17:45.634 }, 00:17:45.634 { 00:17:45.634 "subsystem": "nvmf", 00:17:45.634 "config": [ 00:17:45.634 { 00:17:45.634 "method": "nvmf_set_config", 00:17:45.634 "params": { 00:17:45.634 "discovery_filter": "match_any", 00:17:45.634 "admin_cmd_passthru": { 00:17:45.634 "identify_ctrlr": false 00:17:45.634 } 00:17:45.634 } 00:17:45.634 }, 00:17:45.634 { 00:17:45.634 "method": "nvmf_set_max_subsystems", 00:17:45.634 "params": { 00:17:45.634 "max_subsystems": 1024 00:17:45.634 } 00:17:45.634 }, 00:17:45.634 { 00:17:45.634 "method": "nvmf_set_crdt", 00:17:45.634 "params": { 00:17:45.634 "crdt1": 0, 00:17:45.634 "crdt2": 0, 00:17:45.634 "crdt3": 0 00:17:45.634 } 00:17:45.634 }, 00:17:45.634 { 00:17:45.634 "method": "nvmf_create_transport", 00:17:45.634 "params": { 00:17:45.634 "trtype": "TCP", 00:17:45.634 "max_queue_depth": 128, 00:17:45.634 "max_io_qpairs_per_ctrlr": 127, 00:17:45.634 "in_capsule_data_size": 4096, 00:17:45.634 "max_io_size": 131072, 00:17:45.634 "io_unit_size": 131072, 00:17:45.634 "max_aq_depth": 128, 00:17:45.634 "num_shared_buffers": 511, 00:17:45.634 
"buf_cache_size": 4294967295, 00:17:45.634 "dif_insert_or_strip": false, 00:17:45.634 "zcopy": false, 00:17:45.634 "c2h_success": false, 00:17:45.634 "sock_priority": 0, 00:17:45.634 "abort_timeout_sec": 1, 00:17:45.634 "ack_timeout": 0, 00:17:45.634 "data_wr_pool_size": 0 00:17:45.634 } 00:17:45.634 }, 00:17:45.634 { 00:17:45.634 "method": "nvmf_create_subsystem", 00:17:45.634 "params": { 00:17:45.634 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:45.634 "allow_any_host": false, 00:17:45.634 "serial_number": "SPDK00000000000001", 00:17:45.634 "model_number": "SPDK bdev Controller", 00:17:45.634 "max_namespaces": 10, 00:17:45.634 "min_cntlid": 1, 00:17:45.634 "max_cntlid": 65519, 00:17:45.634 "ana_reporting": false 00:17:45.634 } 00:17:45.634 }, 00:17:45.634 { 00:17:45.634 "method": "nvmf_subsystem_add_host", 00:17:45.634 "params": { 00:17:45.634 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:45.634 "host": "nqn.2016-06.io.spdk:host1", 00:17:45.634 "psk": "/tmp/tmp.zgtd0avrUE" 00:17:45.634 } 00:17:45.634 }, 00:17:45.634 { 00:17:45.634 "method": "nvmf_subsystem_add_ns", 00:17:45.634 "params": { 00:17:45.634 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:45.634 "namespace": { 00:17:45.634 "nsid": 1, 00:17:45.634 "bdev_name": "malloc0", 00:17:45.634 "nguid": "28591C55086D42038ECFD58A3B352AE0", 00:17:45.634 "uuid": "28591c55-086d-4203-8ecf-d58a3b352ae0", 00:17:45.634 "no_auto_visible": false 00:17:45.634 } 00:17:45.634 } 00:17:45.634 }, 00:17:45.634 { 00:17:45.634 "method": "nvmf_subsystem_add_listener", 00:17:45.634 "params": { 00:17:45.634 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:45.634 "listen_address": { 00:17:45.634 "trtype": "TCP", 00:17:45.634 "adrfam": "IPv4", 00:17:45.634 "traddr": "10.0.0.2", 00:17:45.634 "trsvcid": "4420" 00:17:45.634 }, 00:17:45.634 "secure_channel": true 00:17:45.634 } 00:17:45.634 } 00:17:45.634 ] 00:17:45.634 } 00:17:45.634 ] 00:17:45.634 }' 00:17:45.634 03:18:19 -- target/tls.sh@197 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:17:45.892 03:18:20 -- target/tls.sh@197 -- # bdevperfconf='{ 00:17:45.892 "subsystems": [ 00:17:45.892 { 00:17:45.892 "subsystem": "keyring", 00:17:45.892 "config": [] 00:17:45.892 }, 00:17:45.892 { 00:17:45.892 "subsystem": "iobuf", 00:17:45.892 "config": [ 00:17:45.892 { 00:17:45.892 "method": "iobuf_set_options", 00:17:45.892 "params": { 00:17:45.892 "small_pool_count": 8192, 00:17:45.892 "large_pool_count": 1024, 00:17:45.892 "small_bufsize": 8192, 00:17:45.892 "large_bufsize": 135168 00:17:45.892 } 00:17:45.892 } 00:17:45.892 ] 00:17:45.892 }, 00:17:45.892 { 00:17:45.892 "subsystem": "sock", 00:17:45.892 "config": [ 00:17:45.892 { 00:17:45.892 "method": "sock_impl_set_options", 00:17:45.892 "params": { 00:17:45.892 "impl_name": "posix", 00:17:45.892 "recv_buf_size": 2097152, 00:17:45.892 "send_buf_size": 2097152, 00:17:45.892 "enable_recv_pipe": true, 00:17:45.892 "enable_quickack": false, 00:17:45.893 "enable_placement_id": 0, 00:17:45.893 "enable_zerocopy_send_server": true, 00:17:45.893 "enable_zerocopy_send_client": false, 00:17:45.893 "zerocopy_threshold": 0, 00:17:45.893 "tls_version": 0, 00:17:45.893 "enable_ktls": false 00:17:45.893 } 00:17:45.893 }, 00:17:45.893 { 00:17:45.893 "method": "sock_impl_set_options", 00:17:45.893 "params": { 00:17:45.893 "impl_name": "ssl", 00:17:45.893 "recv_buf_size": 4096, 00:17:45.893 "send_buf_size": 4096, 00:17:45.893 "enable_recv_pipe": true, 00:17:45.893 "enable_quickack": false, 00:17:45.893 "enable_placement_id": 0, 00:17:45.893 "enable_zerocopy_send_server": true, 00:17:45.893 "enable_zerocopy_send_client": false, 00:17:45.893 "zerocopy_threshold": 0, 00:17:45.893 "tls_version": 0, 00:17:45.893 "enable_ktls": false 00:17:45.893 } 00:17:45.893 } 00:17:45.893 ] 00:17:45.893 }, 00:17:45.893 { 00:17:45.893 "subsystem": "vmd", 00:17:45.893 "config": [] 00:17:45.893 }, 00:17:45.893 { 00:17:45.893 "subsystem": 
"accel", 00:17:45.893 "config": [ 00:17:45.893 { 00:17:45.893 "method": "accel_set_options", 00:17:45.893 "params": { 00:17:45.893 "small_cache_size": 128, 00:17:45.893 "large_cache_size": 16, 00:17:45.893 "task_count": 2048, 00:17:45.893 "sequence_count": 2048, 00:17:45.893 "buf_count": 2048 00:17:45.893 } 00:17:45.893 } 00:17:45.893 ] 00:17:45.893 }, 00:17:45.893 { 00:17:45.893 "subsystem": "bdev", 00:17:45.893 "config": [ 00:17:45.893 { 00:17:45.893 "method": "bdev_set_options", 00:17:45.893 "params": { 00:17:45.893 "bdev_io_pool_size": 65535, 00:17:45.893 "bdev_io_cache_size": 256, 00:17:45.893 "bdev_auto_examine": true, 00:17:45.893 "iobuf_small_cache_size": 128, 00:17:45.893 "iobuf_large_cache_size": 16 00:17:45.893 } 00:17:45.893 }, 00:17:45.893 { 00:17:45.893 "method": "bdev_raid_set_options", 00:17:45.893 "params": { 00:17:45.893 "process_window_size_kb": 1024 00:17:45.893 } 00:17:45.893 }, 00:17:45.893 { 00:17:45.893 "method": "bdev_iscsi_set_options", 00:17:45.893 "params": { 00:17:45.893 "timeout_sec": 30 00:17:45.893 } 00:17:45.893 }, 00:17:45.893 { 00:17:45.893 "method": "bdev_nvme_set_options", 00:17:45.893 "params": { 00:17:45.893 "action_on_timeout": "none", 00:17:45.893 "timeout_us": 0, 00:17:45.893 "timeout_admin_us": 0, 00:17:45.893 "keep_alive_timeout_ms": 10000, 00:17:45.893 "arbitration_burst": 0, 00:17:45.893 "low_priority_weight": 0, 00:17:45.893 "medium_priority_weight": 0, 00:17:45.893 "high_priority_weight": 0, 00:17:45.893 "nvme_adminq_poll_period_us": 10000, 00:17:45.893 "nvme_ioq_poll_period_us": 0, 00:17:45.893 "io_queue_requests": 512, 00:17:45.893 "delay_cmd_submit": true, 00:17:45.893 "transport_retry_count": 4, 00:17:45.893 "bdev_retry_count": 3, 00:17:45.893 "transport_ack_timeout": 0, 00:17:45.893 "ctrlr_loss_timeout_sec": 0, 00:17:45.893 "reconnect_delay_sec": 0, 00:17:45.893 "fast_io_fail_timeout_sec": 0, 00:17:45.893 "disable_auto_failback": false, 00:17:45.893 "generate_uuids": false, 00:17:45.893 "transport_tos": 0, 
00:17:45.893 "nvme_error_stat": false, 00:17:45.893 "rdma_srq_size": 0, 00:17:45.893 "io_path_stat": false, 00:17:45.893 "allow_accel_sequence": false, 00:17:45.893 "rdma_max_cq_size": 0, 00:17:45.893 "rdma_cm_event_timeout_ms": 0, 00:17:45.893 "dhchap_digests": [ 00:17:45.893 "sha256", 00:17:45.893 "sha384", 00:17:45.893 "sha512" 00:17:45.893 ], 00:17:45.893 "dhchap_dhgroups": [ 00:17:45.893 "null", 00:17:45.893 "ffdhe2048", 00:17:45.893 "ffdhe3072", 00:17:45.893 "ffdhe4096", 00:17:45.893 "ffdhe6144", 00:17:45.893 "ffdhe8192" 00:17:45.893 ] 00:17:45.893 } 00:17:45.893 }, 00:17:45.893 { 00:17:45.893 "method": "bdev_nvme_attach_controller", 00:17:45.893 "params": { 00:17:45.893 "name": "TLSTEST", 00:17:45.893 "trtype": "TCP", 00:17:45.893 "adrfam": "IPv4", 00:17:45.893 "traddr": "10.0.0.2", 00:17:45.893 "trsvcid": "4420", 00:17:45.893 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:45.893 "prchk_reftag": false, 00:17:45.893 "prchk_guard": false, 00:17:45.893 "ctrlr_loss_timeout_sec": 0, 00:17:45.893 "reconnect_delay_sec": 0, 00:17:45.893 "fast_io_fail_timeout_sec": 0, 00:17:45.893 "psk": "/tmp/tmp.zgtd0avrUE", 00:17:45.893 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:45.893 "hdgst": false, 00:17:45.893 "ddgst": false 00:17:45.893 } 00:17:45.893 }, 00:17:45.893 { 00:17:45.893 "method": "bdev_nvme_set_hotplug", 00:17:45.893 "params": { 00:17:45.893 "period_us": 100000, 00:17:45.893 "enable": false 00:17:45.893 } 00:17:45.893 }, 00:17:45.893 { 00:17:45.893 "method": "bdev_wait_for_examine" 00:17:45.893 } 00:17:45.893 ] 00:17:45.893 }, 00:17:45.893 { 00:17:45.893 "subsystem": "nbd", 00:17:45.893 "config": [] 00:17:45.893 } 00:17:45.893 ] 00:17:45.893 }' 00:17:45.893 03:18:20 -- target/tls.sh@199 -- # killprocess 1512877 00:17:45.893 03:18:20 -- common/autotest_common.sh@936 -- # '[' -z 1512877 ']' 00:17:45.893 03:18:20 -- common/autotest_common.sh@940 -- # kill -0 1512877 00:17:45.893 03:18:20 -- common/autotest_common.sh@941 -- # uname 00:17:45.893 03:18:20 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:45.893 03:18:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1512877 00:17:45.893 03:18:20 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:45.893 03:18:20 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:45.893 03:18:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1512877' 00:17:45.893 killing process with pid 1512877 00:17:45.893 03:18:20 -- common/autotest_common.sh@955 -- # kill 1512877 00:17:45.893 Received shutdown signal, test time was about 10.000000 seconds 00:17:45.893 00:17:45.893 Latency(us) 00:17:45.893 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:45.893 =================================================================================================================== 00:17:45.893 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:45.893 [2024-04-25 03:18:20.315024] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:45.893 03:18:20 -- common/autotest_common.sh@960 -- # wait 1512877 00:17:46.151 03:18:20 -- target/tls.sh@200 -- # killprocess 1512707 00:17:46.151 03:18:20 -- common/autotest_common.sh@936 -- # '[' -z 1512707 ']' 00:17:46.151 03:18:20 -- common/autotest_common.sh@940 -- # kill -0 1512707 00:17:46.151 03:18:20 -- common/autotest_common.sh@941 -- # uname 00:17:46.151 03:18:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:46.151 03:18:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1512707 00:17:46.151 03:18:20 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:46.151 03:18:20 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:46.151 03:18:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1512707' 00:17:46.151 killing process with pid 1512707 00:17:46.151 03:18:20 
-- common/autotest_common.sh@955 -- # kill 1512707 00:17:46.151 [2024-04-25 03:18:20.609479] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:46.151 03:18:20 -- common/autotest_common.sh@960 -- # wait 1512707 00:17:46.410 03:18:20 -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:17:46.410 03:18:20 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:46.410 03:18:20 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:46.410 03:18:20 -- target/tls.sh@203 -- # echo '{ 00:17:46.410 "subsystems": [ 00:17:46.410 { 00:17:46.410 "subsystem": "keyring", 00:17:46.410 "config": [] 00:17:46.410 }, 00:17:46.410 { 00:17:46.410 "subsystem": "iobuf", 00:17:46.410 "config": [ 00:17:46.410 { 00:17:46.410 "method": "iobuf_set_options", 00:17:46.410 "params": { 00:17:46.410 "small_pool_count": 8192, 00:17:46.410 "large_pool_count": 1024, 00:17:46.410 "small_bufsize": 8192, 00:17:46.410 "large_bufsize": 135168 00:17:46.410 } 00:17:46.410 } 00:17:46.410 ] 00:17:46.410 }, 00:17:46.410 { 00:17:46.410 "subsystem": "sock", 00:17:46.410 "config": [ 00:17:46.410 { 00:17:46.410 "method": "sock_impl_set_options", 00:17:46.410 "params": { 00:17:46.410 "impl_name": "posix", 00:17:46.410 "recv_buf_size": 2097152, 00:17:46.410 "send_buf_size": 2097152, 00:17:46.410 "enable_recv_pipe": true, 00:17:46.410 "enable_quickack": false, 00:17:46.410 "enable_placement_id": 0, 00:17:46.410 "enable_zerocopy_send_server": true, 00:17:46.410 "enable_zerocopy_send_client": false, 00:17:46.410 "zerocopy_threshold": 0, 00:17:46.410 "tls_version": 0, 00:17:46.410 "enable_ktls": false 00:17:46.410 } 00:17:46.410 }, 00:17:46.410 { 00:17:46.410 "method": "sock_impl_set_options", 00:17:46.410 "params": { 00:17:46.410 "impl_name": "ssl", 00:17:46.410 "recv_buf_size": 4096, 00:17:46.410 "send_buf_size": 4096, 00:17:46.410 "enable_recv_pipe": true, 00:17:46.410 "enable_quickack": false, 00:17:46.410 
"enable_placement_id": 0, 00:17:46.410 "enable_zerocopy_send_server": true, 00:17:46.410 "enable_zerocopy_send_client": false, 00:17:46.410 "zerocopy_threshold": 0, 00:17:46.410 "tls_version": 0, 00:17:46.410 "enable_ktls": false 00:17:46.410 } 00:17:46.410 } 00:17:46.410 ] 00:17:46.410 }, 00:17:46.410 { 00:17:46.410 "subsystem": "vmd", 00:17:46.410 "config": [] 00:17:46.410 }, 00:17:46.410 { 00:17:46.410 "subsystem": "accel", 00:17:46.410 "config": [ 00:17:46.410 { 00:17:46.410 "method": "accel_set_options", 00:17:46.410 "params": { 00:17:46.410 "small_cache_size": 128, 00:17:46.410 "large_cache_size": 16, 00:17:46.410 "task_count": 2048, 00:17:46.410 "sequence_count": 2048, 00:17:46.410 "buf_count": 2048 00:17:46.410 } 00:17:46.410 } 00:17:46.410 ] 00:17:46.410 }, 00:17:46.410 { 00:17:46.410 "subsystem": "bdev", 00:17:46.410 "config": [ 00:17:46.410 { 00:17:46.410 "method": "bdev_set_options", 00:17:46.410 "params": { 00:17:46.410 "bdev_io_pool_size": 65535, 00:17:46.410 "bdev_io_cache_size": 256, 00:17:46.410 "bdev_auto_examine": true, 00:17:46.410 "iobuf_small_cache_size": 128, 00:17:46.410 "iobuf_large_cache_size": 16 00:17:46.410 } 00:17:46.410 }, 00:17:46.410 { 00:17:46.410 "method": "bdev_raid_set_options", 00:17:46.410 "params": { 00:17:46.410 "process_window_size_kb": 1024 00:17:46.410 } 00:17:46.410 }, 00:17:46.410 { 00:17:46.410 "method": "bdev_iscsi_set_options", 00:17:46.410 "params": { 00:17:46.410 "timeout_sec": 30 00:17:46.410 } 00:17:46.410 }, 00:17:46.410 { 00:17:46.410 "method": "bdev_nvme_set_options", 00:17:46.410 "params": { 00:17:46.410 "action_on_timeout": "none", 00:17:46.410 "timeout_us": 0, 00:17:46.410 "timeout_admin_us": 0, 00:17:46.410 "keep_alive_timeout_ms": 10000, 00:17:46.410 "arbitration_burst": 0, 00:17:46.410 "low_priority_weight": 0, 00:17:46.410 "medium_priority_weight": 0, 00:17:46.410 "high_priority_weight": 0, 00:17:46.410 "nvme_adminq_poll_period_us": 10000, 00:17:46.410 "nvme_ioq_poll_period_us": 0, 00:17:46.410 
"io_queue_requests": 0, 00:17:46.410 "delay_cmd_submit": true, 00:17:46.410 "transport_retry_count": 4, 00:17:46.410 "bdev_retry_count": 3, 00:17:46.410 "transport_ack_timeout": 0, 00:17:46.410 "ctrlr_loss_timeout_sec": 0, 00:17:46.410 "reconnect_delay_sec": 0, 00:17:46.410 "fast_io_fail_timeout_sec": 0, 00:17:46.410 "disable_auto_failback": false, 00:17:46.410 "generate_uuids": false, 00:17:46.410 "transport_tos": 0, 00:17:46.410 "nvme_error_stat": false, 00:17:46.410 "rdma_srq_size": 0, 00:17:46.410 "io_path_stat": false, 00:17:46.410 "allow_accel_sequence": false, 00:17:46.410 "rdma_max_cq_size": 0, 00:17:46.410 "rdma_cm_event_timeout_ms": 0, 00:17:46.410 "dhchap_digests": [ 00:17:46.410 "sha256", 00:17:46.410 "sha384", 00:17:46.410 "sha512" 00:17:46.410 ], 00:17:46.410 "dhchap_dhgroups": [ 00:17:46.410 "null", 00:17:46.410 "ffdhe2048", 00:17:46.410 "ffdhe3072", 00:17:46.410 "ffdhe4096", 00:17:46.410 "ffdhe6144", 00:17:46.410 "ffdhe8192" 00:17:46.410 ] 00:17:46.410 } 00:17:46.410 }, 00:17:46.410 { 00:17:46.410 "method": "bdev_nvme_set_hotplug", 00:17:46.410 "params": { 00:17:46.410 "period_us": 100000, 00:17:46.410 "enable": false 00:17:46.410 } 00:17:46.410 }, 00:17:46.410 { 00:17:46.410 "method": "bdev_malloc_create", 00:17:46.410 "params": { 00:17:46.410 "name": "malloc0", 00:17:46.410 "num_blocks": 8192, 00:17:46.410 "block_size": 4096, 00:17:46.410 "physical_block_size": 4096, 00:17:46.410 "uuid": "28591c55-086d-4203-8ecf-d58a3b352ae0", 00:17:46.410 "optimal_io_boundary": 0 00:17:46.410 } 00:17:46.410 }, 00:17:46.410 { 00:17:46.410 "method": "bdev_wait_for_examine" 00:17:46.410 } 00:17:46.410 ] 00:17:46.410 }, 00:17:46.410 { 00:17:46.410 "subsystem": "nbd", 00:17:46.410 "config": [] 00:17:46.410 }, 00:17:46.410 { 00:17:46.410 "subsystem": "scheduler", 00:17:46.410 "config": [ 00:17:46.410 { 00:17:46.410 "method": "framework_set_scheduler", 00:17:46.410 "params": { 00:17:46.410 "name": "static" 00:17:46.410 } 00:17:46.410 } 00:17:46.410 ] 00:17:46.410 }, 
00:17:46.410 { 00:17:46.410 "subsystem": "nvmf", 00:17:46.410 "config": [ 00:17:46.410 { 00:17:46.410 "method": "nvmf_set_config", 00:17:46.410 "params": { 00:17:46.410 "discovery_filter": "match_any", 00:17:46.410 "admin_cmd_passthru": { 00:17:46.410 "identify_ctrlr": false 00:17:46.410 } 00:17:46.410 } 00:17:46.410 }, 00:17:46.410 { 00:17:46.410 "method": "nvmf_set_max_subsystems", 00:17:46.410 "params": { 00:17:46.410 "max_subsystems": 1024 00:17:46.410 } 00:17:46.410 }, 00:17:46.410 { 00:17:46.410 "method": "nvmf_set_crdt", 00:17:46.410 "params": { 00:17:46.410 "crdt1": 0, 00:17:46.410 "crdt2": 0, 00:17:46.410 "crdt3": 0 00:17:46.410 } 00:17:46.410 }, 00:17:46.410 { 00:17:46.410 "method": "nvmf_create_transport", 00:17:46.410 "params": { 00:17:46.410 "trtype": "TCP", 00:17:46.410 "max_queue_depth": 128, 00:17:46.410 "max_io_qpairs_per_ctrlr": 127, 00:17:46.410 "in_capsule_data_size": 4096, 00:17:46.410 "max_io_size": 131072, 00:17:46.410 "io_unit_size": 131072, 00:17:46.410 "max_aq_depth": 128, 00:17:46.410 "num_shared_buffers": 511, 00:17:46.410 "buf_cache_size": 4294967295, 00:17:46.410 "dif_insert_or_strip": false, 00:17:46.410 "zcopy": false, 00:17:46.410 "c2h_success": false, 00:17:46.410 "sock_priority": 0, 00:17:46.410 "abort_timeout_sec": 1, 00:17:46.410 "ack_timeout": 0, 00:17:46.410 "data_wr_pool_size": 0 00:17:46.410 } 00:17:46.411 }, 00:17:46.411 { 00:17:46.411 "method": "nvmf_create_subsystem", 00:17:46.411 "params": { 00:17:46.411 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:46.411 "allow_any_host": false, 00:17:46.411 "serial_number": "SPDK00000000000001", 00:17:46.411 "model_number": "SPDK bdev Controller", 00:17:46.411 "max_namespaces": 10, 00:17:46.411 "min_cntlid": 1, 00:17:46.411 "max_cntlid": 65519, 00:17:46.411 "ana_reporting": false 00:17:46.411 } 00:17:46.411 }, 00:17:46.411 { 00:17:46.411 "method": "nvmf_subsystem_add_host", 00:17:46.411 "params": { 00:17:46.411 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:46.411 "host": 
"nqn.2016-06.io.spdk:host1", 00:17:46.411 "psk": "/tmp/tmp.zgtd0avrUE" 00:17:46.411 } 00:17:46.411 }, 00:17:46.411 { 00:17:46.411 "method": "nvmf_subsystem_add_ns", 00:17:46.411 "params": { 00:17:46.411 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:46.411 "namespace": { 00:17:46.411 "nsid": 1, 00:17:46.411 "bdev_name": "malloc0", 00:17:46.411 "nguid": "28591C55086D42038ECFD58A3B352AE0", 00:17:46.411 "uuid": "28591c55-086d-4203-8ecf-d58a3b352ae0", 00:17:46.411 "no_auto_visible": false 00:17:46.411 } 00:17:46.411 } 00:17:46.411 }, 00:17:46.411 { 00:17:46.411 "method": "nvmf_subsystem_add_listener", 00:17:46.411 "params": { 00:17:46.411 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:46.411 "listen_address": { 00:17:46.411 "trtype": "TCP", 00:17:46.411 "adrfam": "IPv4", 00:17:46.411 "traddr": "10.0.0.2", 00:17:46.411 "trsvcid": "4420" 00:17:46.411 }, 00:17:46.411 "secure_channel": true 00:17:46.411 } 00:17:46.411 } 00:17:46.411 ] 00:17:46.411 } 00:17:46.411 ] 00:17:46.411 }' 00:17:46.411 03:18:20 -- common/autotest_common.sh@10 -- # set +x 00:17:46.670 03:18:20 -- nvmf/common.sh@470 -- # nvmfpid=1513152 00:17:46.670 03:18:20 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:17:46.670 03:18:20 -- nvmf/common.sh@471 -- # waitforlisten 1513152 00:17:46.670 03:18:20 -- common/autotest_common.sh@817 -- # '[' -z 1513152 ']' 00:17:46.670 03:18:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:46.670 03:18:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:46.670 03:18:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:46.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:46.670 03:18:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:46.670 03:18:20 -- common/autotest_common.sh@10 -- # set +x 00:17:46.670 [2024-04-25 03:18:20.954998] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:17:46.670 [2024-04-25 03:18:20.955079] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:46.670 EAL: No free 2048 kB hugepages reported on node 1 00:17:46.670 [2024-04-25 03:18:21.019133] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:46.670 [2024-04-25 03:18:21.127126] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:46.670 [2024-04-25 03:18:21.127197] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:46.670 [2024-04-25 03:18:21.127228] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:46.670 [2024-04-25 03:18:21.127240] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:46.670 [2024-04-25 03:18:21.127251] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:46.670 [2024-04-25 03:18:21.127361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:46.928 [2024-04-25 03:18:21.359198] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:46.928 [2024-04-25 03:18:21.375133] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:46.928 [2024-04-25 03:18:21.391195] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:46.928 [2024-04-25 03:18:21.405804] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:47.494 03:18:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:47.494 03:18:21 -- common/autotest_common.sh@850 -- # return 0 00:17:47.494 03:18:21 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:47.494 03:18:21 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:47.494 03:18:21 -- common/autotest_common.sh@10 -- # set +x 00:17:47.494 03:18:21 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:47.494 03:18:21 -- target/tls.sh@207 -- # bdevperf_pid=1513301 00:17:47.494 03:18:21 -- target/tls.sh@208 -- # waitforlisten 1513301 /var/tmp/bdevperf.sock 00:17:47.494 03:18:21 -- common/autotest_common.sh@817 -- # '[' -z 1513301 ']' 00:17:47.494 03:18:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:47.494 03:18:21 -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:17:47.494 03:18:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:47.494 03:18:21 -- target/tls.sh@204 -- # echo '{ 00:17:47.494 "subsystems": [ 00:17:47.494 { 00:17:47.494 "subsystem": "keyring", 00:17:47.494 "config": [] 00:17:47.494 }, 00:17:47.494 { 00:17:47.494 "subsystem": "iobuf", 00:17:47.494 "config": [ 
00:17:47.494 { 00:17:47.494 "method": "iobuf_set_options", 00:17:47.494 "params": { 00:17:47.494 "small_pool_count": 8192, 00:17:47.494 "large_pool_count": 1024, 00:17:47.494 "small_bufsize": 8192, 00:17:47.494 "large_bufsize": 135168 00:17:47.494 } 00:17:47.494 } 00:17:47.494 ] 00:17:47.494 }, 00:17:47.494 { 00:17:47.494 "subsystem": "sock", 00:17:47.494 "config": [ 00:17:47.494 { 00:17:47.494 "method": "sock_impl_set_options", 00:17:47.494 "params": { 00:17:47.494 "impl_name": "posix", 00:17:47.494 "recv_buf_size": 2097152, 00:17:47.494 "send_buf_size": 2097152, 00:17:47.494 "enable_recv_pipe": true, 00:17:47.494 "enable_quickack": false, 00:17:47.494 "enable_placement_id": 0, 00:17:47.494 "enable_zerocopy_send_server": true, 00:17:47.494 "enable_zerocopy_send_client": false, 00:17:47.494 "zerocopy_threshold": 0, 00:17:47.494 "tls_version": 0, 00:17:47.494 "enable_ktls": false 00:17:47.494 } 00:17:47.494 }, 00:17:47.494 { 00:17:47.494 "method": "sock_impl_set_options", 00:17:47.494 "params": { 00:17:47.494 "impl_name": "ssl", 00:17:47.494 "recv_buf_size": 4096, 00:17:47.494 "send_buf_size": 4096, 00:17:47.494 "enable_recv_pipe": true, 00:17:47.494 "enable_quickack": false, 00:17:47.494 "enable_placement_id": 0, 00:17:47.494 "enable_zerocopy_send_server": true, 00:17:47.494 "enable_zerocopy_send_client": false, 00:17:47.494 "zerocopy_threshold": 0, 00:17:47.494 "tls_version": 0, 00:17:47.494 "enable_ktls": false 00:17:47.494 } 00:17:47.494 } 00:17:47.494 ] 00:17:47.494 }, 00:17:47.494 { 00:17:47.494 "subsystem": "vmd", 00:17:47.494 "config": [] 00:17:47.494 }, 00:17:47.494 { 00:17:47.494 "subsystem": "accel", 00:17:47.494 "config": [ 00:17:47.494 { 00:17:47.494 "method": "accel_set_options", 00:17:47.494 "params": { 00:17:47.494 "small_cache_size": 128, 00:17:47.494 "large_cache_size": 16, 00:17:47.494 "task_count": 2048, 00:17:47.494 "sequence_count": 2048, 00:17:47.494 "buf_count": 2048 00:17:47.494 } 00:17:47.494 } 00:17:47.494 ] 00:17:47.494 }, 00:17:47.494 { 
00:17:47.494 "subsystem": "bdev", 00:17:47.494 "config": [ 00:17:47.494 { 00:17:47.494 "method": "bdev_set_options", 00:17:47.494 "params": { 00:17:47.494 "bdev_io_pool_size": 65535, 00:17:47.494 "bdev_io_cache_size": 256, 00:17:47.494 "bdev_auto_examine": true, 00:17:47.494 "iobuf_small_cache_size": 128, 00:17:47.494 "iobuf_large_cache_size": 16 00:17:47.494 } 00:17:47.494 }, 00:17:47.494 { 00:17:47.494 "method": "bdev_raid_set_options", 00:17:47.494 "params": { 00:17:47.494 "process_window_size_kb": 1024 00:17:47.494 } 00:17:47.494 }, 00:17:47.494 { 00:17:47.494 "method": "bdev_iscsi_set_options", 00:17:47.494 "params": { 00:17:47.494 "timeout_sec": 30 00:17:47.494 } 00:17:47.494 }, 00:17:47.494 { 00:17:47.494 "method": "bdev_nvme_set_options", 00:17:47.494 "params": { 00:17:47.494 "action_on_timeout": "none", 00:17:47.494 "timeout_us": 0, 00:17:47.494 "timeout_admin_us": 0, 00:17:47.494 "keep_alive_timeout_ms": 10000, 00:17:47.494 "arbitration_burst": 0, 00:17:47.494 "low_priority_weight": 0, 00:17:47.494 "medium_priority_weight": 0, 00:17:47.494 "high_priority_weight": 0, 00:17:47.494 "nvme_adminq_poll_period_us": 10000, 00:17:47.494 "nvme_ioq_poll_period_us": 0, 00:17:47.494 "io_queue_requests": 512, 00:17:47.494 "delay_cmd_submit": true, 00:17:47.494 "transport_retry_count": 4, 00:17:47.494 "bdev_retry_count": 3, 00:17:47.494 "transport_ack_timeout": 0, 00:17:47.495 "ctrlr_loss_timeout_sec": 0, 00:17:47.495 "reconnect_delay_sec": 0, 00:17:47.495 "fast_io_fail_timeout_sec": 0, 00:17:47.495 "disable_auto_failback": false, 00:17:47.495 "generate_uuids": false, 00:17:47.495 "transport_tos": 0, 00:17:47.495 "nvme_error_stat": false, 00:17:47.495 "rdma_srq_size": 0, 00:17:47.495 "io_path_stat": false, 00:17:47.495 "allow_accel_sequence": false, 00:17:47.495 "rdma_max_cq_size": 0, 00:17:47.495 "rdma_cm_event_timeout_ms": 0, 00:17:47.495 "dhchap_digests": [ 00:17:47.495 "sha256", 00:17:47.495 "sha384", 00:17:47.495 "sha512" 00:17:47.495 ], 00:17:47.495 
"dhchap_dhgroups": [ 00:17:47.495 "null", 00:17:47.495 "ffdhe2048", 00:17:47.495 "ffdhe3072", 00:17:47.495 "ffdhe4096", 00:17:47.495 "ffdhe6144", 00:17:47.495 "ffdhe8192" 00:17:47.495 ] 00:17:47.495 } 00:17:47.495 }, 00:17:47.495 { 00:17:47.495 "method": "bdev_nvme_attach_controller", 00:17:47.495 "params": { 00:17:47.495 "name": "TLSTEST", 00:17:47.495 "trtype": "TCP", 00:17:47.495 "adrfam": "IPv4", 00:17:47.495 "traddr": "10.0.0.2", 00:17:47.495 "trsvcid": "4420", 00:17:47.495 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:47.495 "prchk_reftag": false, 00:17:47.495 "prchk_guard": false, 00:17:47.495 "ctrlr_loss_timeout_sec": 0, 00:17:47.495 "reconnect_delay_sec": 0, 00:17:47.495 "fast_io_fail_timeout_sec": 0, 00:17:47.495 "psk": "/tmp/tmp.zgtd0avrUE", 00:17:47.495 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:47.495 "hdgst": false, 00:17:47.495 "ddgst": false 00:17:47.495 } 00:17:47.495 }, 00:17:47.495 { 00:17:47.495 "method": "bdev_nvme_set_hotplug", 00:17:47.495 "params": { 00:17:47.495 "period_us": 100000, 00:17:47.495 "enable": false 00:17:47.495 } 00:17:47.495 }, 00:17:47.495 { 00:17:47.495 "method": "bdev_wait_for_examine" 00:17:47.495 } 00:17:47.495 ] 00:17:47.495 }, 00:17:47.495 { 00:17:47.495 "subsystem": "nbd", 00:17:47.495 "config": [] 00:17:47.495 } 00:17:47.495 ] 00:17:47.495 }' 00:17:47.495 03:18:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:47.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:47.495 03:18:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:47.495 03:18:21 -- common/autotest_common.sh@10 -- # set +x 00:17:47.753 [2024-04-25 03:18:22.019236] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 
00:17:47.753 [2024-04-25 03:18:22.019316] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1513301 ] 00:17:47.753 EAL: No free 2048 kB hugepages reported on node 1 00:17:47.753 [2024-04-25 03:18:22.076547] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.753 [2024-04-25 03:18:22.180266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:48.011 [2024-04-25 03:18:22.344726] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:48.011 [2024-04-25 03:18:22.344873] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:48.575 03:18:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:48.575 03:18:23 -- common/autotest_common.sh@850 -- # return 0 00:17:48.575 03:18:23 -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:48.836 Running I/O for 10 seconds... 
00:17:58.800 00:17:58.800 Latency(us) 00:17:58.800 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:58.800 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:58.800 Verification LBA range: start 0x0 length 0x2000 00:17:58.800 TLSTESTn1 : 10.07 1240.49 4.85 0.00 0.00 102878.94 6602.15 144470.47 00:17:58.800 =================================================================================================================== 00:17:58.800 Total : 1240.49 4.85 0.00 0.00 102878.94 6602.15 144470.47 00:17:58.800 0 00:17:58.800 03:18:33 -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:58.800 03:18:33 -- target/tls.sh@214 -- # killprocess 1513301 00:17:58.800 03:18:33 -- common/autotest_common.sh@936 -- # '[' -z 1513301 ']' 00:17:58.800 03:18:33 -- common/autotest_common.sh@940 -- # kill -0 1513301 00:17:58.800 03:18:33 -- common/autotest_common.sh@941 -- # uname 00:17:58.800 03:18:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:58.800 03:18:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1513301 00:17:58.800 03:18:33 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:58.800 03:18:33 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:58.800 03:18:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1513301' 00:17:58.800 killing process with pid 1513301 00:17:58.800 03:18:33 -- common/autotest_common.sh@955 -- # kill 1513301 00:17:58.800 Received shutdown signal, test time was about 10.000000 seconds 00:17:58.800 00:17:58.800 Latency(us) 00:17:58.800 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:58.800 =================================================================================================================== 00:17:58.800 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:58.800 [2024-04-25 03:18:33.243016] app.c: 937:log_deprecation_hits: *WARNING*: 
nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:58.800 03:18:33 -- common/autotest_common.sh@960 -- # wait 1513301 00:17:59.058 03:18:33 -- target/tls.sh@215 -- # killprocess 1513152 00:17:59.058 03:18:33 -- common/autotest_common.sh@936 -- # '[' -z 1513152 ']' 00:17:59.058 03:18:33 -- common/autotest_common.sh@940 -- # kill -0 1513152 00:17:59.058 03:18:33 -- common/autotest_common.sh@941 -- # uname 00:17:59.058 03:18:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:59.058 03:18:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1513152 00:17:59.058 03:18:33 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:59.058 03:18:33 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:59.058 03:18:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1513152' 00:17:59.058 killing process with pid 1513152 00:17:59.058 03:18:33 -- common/autotest_common.sh@955 -- # kill 1513152 00:17:59.058 [2024-04-25 03:18:33.520156] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:59.058 03:18:33 -- common/autotest_common.sh@960 -- # wait 1513152 00:17:59.622 03:18:33 -- target/tls.sh@218 -- # nvmfappstart 00:17:59.622 03:18:33 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:59.622 03:18:33 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:59.622 03:18:33 -- common/autotest_common.sh@10 -- # set +x 00:17:59.622 03:18:33 -- nvmf/common.sh@470 -- # nvmfpid=1514634 00:17:59.622 03:18:33 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:59.622 03:18:33 -- nvmf/common.sh@471 -- # waitforlisten 1514634 00:17:59.622 03:18:33 -- common/autotest_common.sh@817 -- # '[' -z 1514634 ']' 00:17:59.622 03:18:33 -- common/autotest_common.sh@821 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:17:59.622 03:18:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:59.622 03:18:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:59.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:59.622 03:18:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:59.622 03:18:33 -- common/autotest_common.sh@10 -- # set +x 00:17:59.622 [2024-04-25 03:18:33.866456] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:17:59.622 [2024-04-25 03:18:33.866553] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:59.622 EAL: No free 2048 kB hugepages reported on node 1 00:17:59.623 [2024-04-25 03:18:33.936528] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.623 [2024-04-25 03:18:34.054300] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:59.623 [2024-04-25 03:18:34.054368] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:59.623 [2024-04-25 03:18:34.054385] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:59.623 [2024-04-25 03:18:34.054399] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:59.623 [2024-04-25 03:18:34.054411] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:59.623 [2024-04-25 03:18:34.054444] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:59.880 03:18:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:59.880 03:18:34 -- common/autotest_common.sh@850 -- # return 0 00:17:59.880 03:18:34 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:59.880 03:18:34 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:59.880 03:18:34 -- common/autotest_common.sh@10 -- # set +x 00:17:59.880 03:18:34 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:59.880 03:18:34 -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.zgtd0avrUE 00:17:59.880 03:18:34 -- target/tls.sh@49 -- # local key=/tmp/tmp.zgtd0avrUE 00:17:59.880 03:18:34 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:00.137 [2024-04-25 03:18:34.460400] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:00.137 03:18:34 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:00.394 03:18:34 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:00.652 [2024-04-25 03:18:34.997860] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:00.652 [2024-04-25 03:18:34.998133] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:00.652 03:18:35 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:00.910 malloc0 00:18:00.910 03:18:35 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 
00:18:01.167 03:18:35 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.zgtd0avrUE 00:18:01.425 [2024-04-25 03:18:35.823933] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:01.425 03:18:35 -- target/tls.sh@222 -- # bdevperf_pid=1514913 00:18:01.425 03:18:35 -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:01.425 03:18:35 -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:01.425 03:18:35 -- target/tls.sh@225 -- # waitforlisten 1514913 /var/tmp/bdevperf.sock 00:18:01.425 03:18:35 -- common/autotest_common.sh@817 -- # '[' -z 1514913 ']' 00:18:01.425 03:18:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:01.425 03:18:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:01.425 03:18:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:01.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:01.425 03:18:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:01.425 03:18:35 -- common/autotest_common.sh@10 -- # set +x 00:18:01.425 [2024-04-25 03:18:35.887108] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 
00:18:01.425 [2024-04-25 03:18:35.887191] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1514913 ] 00:18:01.425 EAL: No free 2048 kB hugepages reported on node 1 00:18:01.684 [2024-04-25 03:18:35.950407] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.684 [2024-04-25 03:18:36.058386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:01.684 03:18:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:01.684 03:18:36 -- common/autotest_common.sh@850 -- # return 0 00:18:01.684 03:18:36 -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zgtd0avrUE 00:18:01.998 03:18:36 -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:02.281 [2024-04-25 03:18:36.634551] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:02.281 nvme0n1 00:18:02.281 03:18:36 -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:02.538 Running I/O for 1 seconds... 
00:18:03.473 00:18:03.473 Latency(us) 00:18:03.473 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:03.473 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:03.473 Verification LBA range: start 0x0 length 0x2000 00:18:03.473 nvme0n1 : 1.07 1189.96 4.65 0.00 0.00 104907.27 7573.05 149130.81 00:18:03.473 =================================================================================================================== 00:18:03.473 Total : 1189.96 4.65 0.00 0.00 104907.27 7573.05 149130.81 00:18:03.473 0 00:18:03.473 03:18:37 -- target/tls.sh@234 -- # killprocess 1514913 00:18:03.473 03:18:37 -- common/autotest_common.sh@936 -- # '[' -z 1514913 ']' 00:18:03.473 03:18:37 -- common/autotest_common.sh@940 -- # kill -0 1514913 00:18:03.473 03:18:37 -- common/autotest_common.sh@941 -- # uname 00:18:03.473 03:18:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:03.473 03:18:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1514913 00:18:03.473 03:18:37 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:03.473 03:18:37 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:03.473 03:18:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1514913' 00:18:03.473 killing process with pid 1514913 00:18:03.473 03:18:37 -- common/autotest_common.sh@955 -- # kill 1514913 00:18:03.473 Received shutdown signal, test time was about 1.000000 seconds 00:18:03.473 00:18:03.473 Latency(us) 00:18:03.473 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:03.473 =================================================================================================================== 00:18:03.473 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:03.473 03:18:37 -- common/autotest_common.sh@960 -- # wait 1514913 00:18:03.731 03:18:38 -- target/tls.sh@235 -- # killprocess 1514634 00:18:03.731 03:18:38 -- common/autotest_common.sh@936 -- # 
'[' -z 1514634 ']' 00:18:03.731 03:18:38 -- common/autotest_common.sh@940 -- # kill -0 1514634 00:18:03.731 03:18:38 -- common/autotest_common.sh@941 -- # uname 00:18:03.731 03:18:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:03.731 03:18:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1514634 00:18:03.990 03:18:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:03.990 03:18:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:03.990 03:18:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1514634' 00:18:03.990 killing process with pid 1514634 00:18:03.990 03:18:38 -- common/autotest_common.sh@955 -- # kill 1514634 00:18:03.990 [2024-04-25 03:18:38.236466] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:03.990 03:18:38 -- common/autotest_common.sh@960 -- # wait 1514634 00:18:04.248 03:18:38 -- target/tls.sh@238 -- # nvmfappstart 00:18:04.248 03:18:38 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:04.248 03:18:38 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:04.248 03:18:38 -- common/autotest_common.sh@10 -- # set +x 00:18:04.248 03:18:38 -- nvmf/common.sh@470 -- # nvmfpid=1515280 00:18:04.248 03:18:38 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:04.248 03:18:38 -- nvmf/common.sh@471 -- # waitforlisten 1515280 00:18:04.248 03:18:38 -- common/autotest_common.sh@817 -- # '[' -z 1515280 ']' 00:18:04.248 03:18:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:04.248 03:18:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:04.248 03:18:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:04.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:04.248 03:18:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:04.248 03:18:38 -- common/autotest_common.sh@10 -- # set +x 00:18:04.248 [2024-04-25 03:18:38.560329] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:18:04.248 [2024-04-25 03:18:38.560419] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:04.248 EAL: No free 2048 kB hugepages reported on node 1 00:18:04.249 [2024-04-25 03:18:38.630139] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:04.249 [2024-04-25 03:18:38.742664] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:04.249 [2024-04-25 03:18:38.742725] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:04.249 [2024-04-25 03:18:38.742742] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:04.249 [2024-04-25 03:18:38.742764] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:04.249 [2024-04-25 03:18:38.742776] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:04.249 [2024-04-25 03:18:38.742807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:05.181 03:18:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:05.181 03:18:39 -- common/autotest_common.sh@850 -- # return 0 00:18:05.181 03:18:39 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:05.182 03:18:39 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:05.182 03:18:39 -- common/autotest_common.sh@10 -- # set +x 00:18:05.182 03:18:39 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:05.182 03:18:39 -- target/tls.sh@239 -- # rpc_cmd 00:18:05.182 03:18:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:05.182 03:18:39 -- common/autotest_common.sh@10 -- # set +x 00:18:05.182 [2024-04-25 03:18:39.558162] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:05.182 malloc0 00:18:05.182 [2024-04-25 03:18:39.589321] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:05.182 [2024-04-25 03:18:39.589567] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:05.182 03:18:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:05.182 03:18:39 -- target/tls.sh@252 -- # bdevperf_pid=1515403 00:18:05.182 03:18:39 -- target/tls.sh@254 -- # waitforlisten 1515403 /var/tmp/bdevperf.sock 00:18:05.182 03:18:39 -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:05.182 03:18:39 -- common/autotest_common.sh@817 -- # '[' -z 1515403 ']' 00:18:05.182 03:18:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:05.182 03:18:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:05.182 03:18:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:18:05.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:05.182 03:18:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:05.182 03:18:39 -- common/autotest_common.sh@10 -- # set +x 00:18:05.182 [2024-04-25 03:18:39.663521] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:18:05.182 [2024-04-25 03:18:39.663598] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1515403 ] 00:18:05.439 EAL: No free 2048 kB hugepages reported on node 1 00:18:05.439 [2024-04-25 03:18:39.725579] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.439 [2024-04-25 03:18:39.831665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:05.696 03:18:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:05.696 03:18:39 -- common/autotest_common.sh@850 -- # return 0 00:18:05.696 03:18:39 -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zgtd0avrUE 00:18:05.954 03:18:40 -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:06.212 [2024-04-25 03:18:40.510151] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:06.212 nvme0n1 00:18:06.212 03:18:40 -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:06.212 Running I/O for 1 seconds... 
00:18:07.591 00:18:07.591 Latency(us) 00:18:07.591 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:07.591 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:07.591 Verification LBA range: start 0x0 length 0x2000 00:18:07.591 nvme0n1 : 1.06 926.59 3.62 0.00 0.00 135985.83 7670.14 145247.19 00:18:07.591 =================================================================================================================== 00:18:07.591 Total : 926.59 3.62 0.00 0.00 135985.83 7670.14 145247.19 00:18:07.591 0 00:18:07.591 03:18:41 -- target/tls.sh@263 -- # rpc_cmd save_config 00:18:07.591 03:18:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:07.591 03:18:41 -- common/autotest_common.sh@10 -- # set +x 00:18:07.591 03:18:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:07.591 03:18:41 -- target/tls.sh@263 -- # tgtcfg='{ 00:18:07.591 "subsystems": [ 00:18:07.591 { 00:18:07.591 "subsystem": "keyring", 00:18:07.591 "config": [ 00:18:07.591 { 00:18:07.591 "method": "keyring_file_add_key", 00:18:07.591 "params": { 00:18:07.591 "name": "key0", 00:18:07.591 "path": "/tmp/tmp.zgtd0avrUE" 00:18:07.591 } 00:18:07.591 } 00:18:07.591 ] 00:18:07.591 }, 00:18:07.591 { 00:18:07.591 "subsystem": "iobuf", 00:18:07.591 "config": [ 00:18:07.591 { 00:18:07.591 "method": "iobuf_set_options", 00:18:07.591 "params": { 00:18:07.591 "small_pool_count": 8192, 00:18:07.591 "large_pool_count": 1024, 00:18:07.591 "small_bufsize": 8192, 00:18:07.591 "large_bufsize": 135168 00:18:07.591 } 00:18:07.591 } 00:18:07.591 ] 00:18:07.591 }, 00:18:07.591 { 00:18:07.591 "subsystem": "sock", 00:18:07.591 "config": [ 00:18:07.591 { 00:18:07.591 "method": "sock_impl_set_options", 00:18:07.591 "params": { 00:18:07.591 "impl_name": "posix", 00:18:07.591 "recv_buf_size": 2097152, 00:18:07.591 "send_buf_size": 2097152, 00:18:07.591 "enable_recv_pipe": true, 00:18:07.591 "enable_quickack": false, 00:18:07.591 "enable_placement_id": 0, 
00:18:07.591 "enable_zerocopy_send_server": true, 00:18:07.591 "enable_zerocopy_send_client": false, 00:18:07.591 "zerocopy_threshold": 0, 00:18:07.591 "tls_version": 0, 00:18:07.591 "enable_ktls": false 00:18:07.591 } 00:18:07.591 }, 00:18:07.591 { 00:18:07.591 "method": "sock_impl_set_options", 00:18:07.591 "params": { 00:18:07.591 "impl_name": "ssl", 00:18:07.591 "recv_buf_size": 4096, 00:18:07.591 "send_buf_size": 4096, 00:18:07.591 "enable_recv_pipe": true, 00:18:07.591 "enable_quickack": false, 00:18:07.591 "enable_placement_id": 0, 00:18:07.591 "enable_zerocopy_send_server": true, 00:18:07.591 "enable_zerocopy_send_client": false, 00:18:07.591 "zerocopy_threshold": 0, 00:18:07.591 "tls_version": 0, 00:18:07.591 "enable_ktls": false 00:18:07.591 } 00:18:07.591 } 00:18:07.591 ] 00:18:07.591 }, 00:18:07.591 { 00:18:07.591 "subsystem": "vmd", 00:18:07.591 "config": [] 00:18:07.591 }, 00:18:07.591 { 00:18:07.591 "subsystem": "accel", 00:18:07.591 "config": [ 00:18:07.591 { 00:18:07.591 "method": "accel_set_options", 00:18:07.591 "params": { 00:18:07.591 "small_cache_size": 128, 00:18:07.591 "large_cache_size": 16, 00:18:07.591 "task_count": 2048, 00:18:07.591 "sequence_count": 2048, 00:18:07.591 "buf_count": 2048 00:18:07.591 } 00:18:07.591 } 00:18:07.591 ] 00:18:07.591 }, 00:18:07.591 { 00:18:07.591 "subsystem": "bdev", 00:18:07.591 "config": [ 00:18:07.591 { 00:18:07.591 "method": "bdev_set_options", 00:18:07.591 "params": { 00:18:07.591 "bdev_io_pool_size": 65535, 00:18:07.591 "bdev_io_cache_size": 256, 00:18:07.591 "bdev_auto_examine": true, 00:18:07.591 "iobuf_small_cache_size": 128, 00:18:07.591 "iobuf_large_cache_size": 16 00:18:07.591 } 00:18:07.591 }, 00:18:07.591 { 00:18:07.591 "method": "bdev_raid_set_options", 00:18:07.591 "params": { 00:18:07.591 "process_window_size_kb": 1024 00:18:07.591 } 00:18:07.591 }, 00:18:07.591 { 00:18:07.591 "method": "bdev_iscsi_set_options", 00:18:07.591 "params": { 00:18:07.591 "timeout_sec": 30 00:18:07.591 } 
00:18:07.591 }, 00:18:07.591 { 00:18:07.591 "method": "bdev_nvme_set_options", 00:18:07.591 "params": { 00:18:07.591 "action_on_timeout": "none", 00:18:07.591 "timeout_us": 0, 00:18:07.591 "timeout_admin_us": 0, 00:18:07.591 "keep_alive_timeout_ms": 10000, 00:18:07.591 "arbitration_burst": 0, 00:18:07.591 "low_priority_weight": 0, 00:18:07.591 "medium_priority_weight": 0, 00:18:07.591 "high_priority_weight": 0, 00:18:07.591 "nvme_adminq_poll_period_us": 10000, 00:18:07.591 "nvme_ioq_poll_period_us": 0, 00:18:07.591 "io_queue_requests": 0, 00:18:07.591 "delay_cmd_submit": true, 00:18:07.591 "transport_retry_count": 4, 00:18:07.591 "bdev_retry_count": 3, 00:18:07.591 "transport_ack_timeout": 0, 00:18:07.591 "ctrlr_loss_timeout_sec": 0, 00:18:07.591 "reconnect_delay_sec": 0, 00:18:07.591 "fast_io_fail_timeout_sec": 0, 00:18:07.591 "disable_auto_failback": false, 00:18:07.591 "generate_uuids": false, 00:18:07.591 "transport_tos": 0, 00:18:07.591 "nvme_error_stat": false, 00:18:07.591 "rdma_srq_size": 0, 00:18:07.591 "io_path_stat": false, 00:18:07.591 "allow_accel_sequence": false, 00:18:07.591 "rdma_max_cq_size": 0, 00:18:07.591 "rdma_cm_event_timeout_ms": 0, 00:18:07.591 "dhchap_digests": [ 00:18:07.591 "sha256", 00:18:07.591 "sha384", 00:18:07.591 "sha512" 00:18:07.591 ], 00:18:07.591 "dhchap_dhgroups": [ 00:18:07.591 "null", 00:18:07.591 "ffdhe2048", 00:18:07.591 "ffdhe3072", 00:18:07.591 "ffdhe4096", 00:18:07.591 "ffdhe6144", 00:18:07.591 "ffdhe8192" 00:18:07.591 ] 00:18:07.591 } 00:18:07.591 }, 00:18:07.591 { 00:18:07.591 "method": "bdev_nvme_set_hotplug", 00:18:07.591 "params": { 00:18:07.591 "period_us": 100000, 00:18:07.591 "enable": false 00:18:07.591 } 00:18:07.591 }, 00:18:07.591 { 00:18:07.591 "method": "bdev_malloc_create", 00:18:07.591 "params": { 00:18:07.591 "name": "malloc0", 00:18:07.591 "num_blocks": 8192, 00:18:07.591 "block_size": 4096, 00:18:07.591 "physical_block_size": 4096, 00:18:07.591 "uuid": "c7eaba06-5f04-426b-805d-e45c5ecbe813", 
00:18:07.591 "optimal_io_boundary": 0 00:18:07.591 } 00:18:07.591 }, 00:18:07.591 { 00:18:07.591 "method": "bdev_wait_for_examine" 00:18:07.591 } 00:18:07.591 ] 00:18:07.591 }, 00:18:07.591 { 00:18:07.591 "subsystem": "nbd", 00:18:07.591 "config": [] 00:18:07.591 }, 00:18:07.591 { 00:18:07.591 "subsystem": "scheduler", 00:18:07.591 "config": [ 00:18:07.591 { 00:18:07.591 "method": "framework_set_scheduler", 00:18:07.591 "params": { 00:18:07.591 "name": "static" 00:18:07.592 } 00:18:07.592 } 00:18:07.592 ] 00:18:07.592 }, 00:18:07.592 { 00:18:07.592 "subsystem": "nvmf", 00:18:07.592 "config": [ 00:18:07.592 { 00:18:07.592 "method": "nvmf_set_config", 00:18:07.592 "params": { 00:18:07.592 "discovery_filter": "match_any", 00:18:07.592 "admin_cmd_passthru": { 00:18:07.592 "identify_ctrlr": false 00:18:07.592 } 00:18:07.592 } 00:18:07.592 }, 00:18:07.592 { 00:18:07.592 "method": "nvmf_set_max_subsystems", 00:18:07.592 "params": { 00:18:07.592 "max_subsystems": 1024 00:18:07.592 } 00:18:07.592 }, 00:18:07.592 { 00:18:07.592 "method": "nvmf_set_crdt", 00:18:07.592 "params": { 00:18:07.592 "crdt1": 0, 00:18:07.592 "crdt2": 0, 00:18:07.592 "crdt3": 0 00:18:07.592 } 00:18:07.592 }, 00:18:07.592 { 00:18:07.592 "method": "nvmf_create_transport", 00:18:07.592 "params": { 00:18:07.592 "trtype": "TCP", 00:18:07.592 "max_queue_depth": 128, 00:18:07.592 "max_io_qpairs_per_ctrlr": 127, 00:18:07.592 "in_capsule_data_size": 4096, 00:18:07.592 "max_io_size": 131072, 00:18:07.592 "io_unit_size": 131072, 00:18:07.592 "max_aq_depth": 128, 00:18:07.592 "num_shared_buffers": 511, 00:18:07.592 "buf_cache_size": 4294967295, 00:18:07.592 "dif_insert_or_strip": false, 00:18:07.592 "zcopy": false, 00:18:07.592 "c2h_success": false, 00:18:07.592 "sock_priority": 0, 00:18:07.592 "abort_timeout_sec": 1, 00:18:07.592 "ack_timeout": 0, 00:18:07.592 "data_wr_pool_size": 0 00:18:07.592 } 00:18:07.592 }, 00:18:07.592 { 00:18:07.592 "method": "nvmf_create_subsystem", 00:18:07.592 "params": { 00:18:07.592 
"nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:07.592 "allow_any_host": false, 00:18:07.592 "serial_number": "00000000000000000000", 00:18:07.592 "model_number": "SPDK bdev Controller", 00:18:07.592 "max_namespaces": 32, 00:18:07.592 "min_cntlid": 1, 00:18:07.592 "max_cntlid": 65519, 00:18:07.592 "ana_reporting": false 00:18:07.592 } 00:18:07.592 }, 00:18:07.592 { 00:18:07.592 "method": "nvmf_subsystem_add_host", 00:18:07.592 "params": { 00:18:07.592 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:07.592 "host": "nqn.2016-06.io.spdk:host1", 00:18:07.592 "psk": "key0" 00:18:07.592 } 00:18:07.592 }, 00:18:07.592 { 00:18:07.592 "method": "nvmf_subsystem_add_ns", 00:18:07.592 "params": { 00:18:07.592 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:07.592 "namespace": { 00:18:07.592 "nsid": 1, 00:18:07.592 "bdev_name": "malloc0", 00:18:07.592 "nguid": "C7EABA065F04426B805DE45C5ECBE813", 00:18:07.592 "uuid": "c7eaba06-5f04-426b-805d-e45c5ecbe813", 00:18:07.592 "no_auto_visible": false 00:18:07.592 } 00:18:07.592 } 00:18:07.592 }, 00:18:07.592 { 00:18:07.592 "method": "nvmf_subsystem_add_listener", 00:18:07.592 "params": { 00:18:07.592 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:07.592 "listen_address": { 00:18:07.592 "trtype": "TCP", 00:18:07.592 "adrfam": "IPv4", 00:18:07.592 "traddr": "10.0.0.2", 00:18:07.592 "trsvcid": "4420" 00:18:07.592 }, 00:18:07.592 "secure_channel": true 00:18:07.592 } 00:18:07.592 } 00:18:07.592 ] 00:18:07.592 } 00:18:07.592 ] 00:18:07.592 }' 00:18:07.592 03:18:41 -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:07.850 03:18:42 -- target/tls.sh@264 -- # bperfcfg='{ 00:18:07.850 "subsystems": [ 00:18:07.850 { 00:18:07.850 "subsystem": "keyring", 00:18:07.850 "config": [ 00:18:07.850 { 00:18:07.850 "method": "keyring_file_add_key", 00:18:07.850 "params": { 00:18:07.850 "name": "key0", 00:18:07.850 "path": "/tmp/tmp.zgtd0avrUE" 00:18:07.850 } 00:18:07.850 } 00:18:07.850 ] 
00:18:07.850 }, 00:18:07.850 { 00:18:07.850 "subsystem": "iobuf", 00:18:07.850 "config": [ 00:18:07.850 { 00:18:07.850 "method": "iobuf_set_options", 00:18:07.850 "params": { 00:18:07.850 "small_pool_count": 8192, 00:18:07.850 "large_pool_count": 1024, 00:18:07.850 "small_bufsize": 8192, 00:18:07.850 "large_bufsize": 135168 00:18:07.850 } 00:18:07.850 } 00:18:07.850 ] 00:18:07.850 }, 00:18:07.850 { 00:18:07.850 "subsystem": "sock", 00:18:07.850 "config": [ 00:18:07.850 { 00:18:07.850 "method": "sock_impl_set_options", 00:18:07.850 "params": { 00:18:07.850 "impl_name": "posix", 00:18:07.850 "recv_buf_size": 2097152, 00:18:07.850 "send_buf_size": 2097152, 00:18:07.850 "enable_recv_pipe": true, 00:18:07.850 "enable_quickack": false, 00:18:07.850 "enable_placement_id": 0, 00:18:07.850 "enable_zerocopy_send_server": true, 00:18:07.850 "enable_zerocopy_send_client": false, 00:18:07.850 "zerocopy_threshold": 0, 00:18:07.850 "tls_version": 0, 00:18:07.850 "enable_ktls": false 00:18:07.850 } 00:18:07.850 }, 00:18:07.850 { 00:18:07.850 "method": "sock_impl_set_options", 00:18:07.850 "params": { 00:18:07.850 "impl_name": "ssl", 00:18:07.850 "recv_buf_size": 4096, 00:18:07.850 "send_buf_size": 4096, 00:18:07.850 "enable_recv_pipe": true, 00:18:07.850 "enable_quickack": false, 00:18:07.850 "enable_placement_id": 0, 00:18:07.850 "enable_zerocopy_send_server": true, 00:18:07.850 "enable_zerocopy_send_client": false, 00:18:07.850 "zerocopy_threshold": 0, 00:18:07.850 "tls_version": 0, 00:18:07.850 "enable_ktls": false 00:18:07.850 } 00:18:07.850 } 00:18:07.850 ] 00:18:07.850 }, 00:18:07.850 { 00:18:07.850 "subsystem": "vmd", 00:18:07.850 "config": [] 00:18:07.850 }, 00:18:07.850 { 00:18:07.850 "subsystem": "accel", 00:18:07.850 "config": [ 00:18:07.850 { 00:18:07.850 "method": "accel_set_options", 00:18:07.850 "params": { 00:18:07.850 "small_cache_size": 128, 00:18:07.850 "large_cache_size": 16, 00:18:07.850 "task_count": 2048, 00:18:07.850 "sequence_count": 2048, 00:18:07.850 
"buf_count": 2048 00:18:07.850 } 00:18:07.850 } 00:18:07.850 ] 00:18:07.850 }, 00:18:07.850 { 00:18:07.850 "subsystem": "bdev", 00:18:07.850 "config": [ 00:18:07.850 { 00:18:07.850 "method": "bdev_set_options", 00:18:07.850 "params": { 00:18:07.850 "bdev_io_pool_size": 65535, 00:18:07.850 "bdev_io_cache_size": 256, 00:18:07.850 "bdev_auto_examine": true, 00:18:07.850 "iobuf_small_cache_size": 128, 00:18:07.850 "iobuf_large_cache_size": 16 00:18:07.850 } 00:18:07.850 }, 00:18:07.850 { 00:18:07.850 "method": "bdev_raid_set_options", 00:18:07.850 "params": { 00:18:07.850 "process_window_size_kb": 1024 00:18:07.850 } 00:18:07.850 }, 00:18:07.850 { 00:18:07.850 "method": "bdev_iscsi_set_options", 00:18:07.850 "params": { 00:18:07.850 "timeout_sec": 30 00:18:07.850 } 00:18:07.850 }, 00:18:07.850 { 00:18:07.850 "method": "bdev_nvme_set_options", 00:18:07.850 "params": { 00:18:07.850 "action_on_timeout": "none", 00:18:07.850 "timeout_us": 0, 00:18:07.850 "timeout_admin_us": 0, 00:18:07.850 "keep_alive_timeout_ms": 10000, 00:18:07.850 "arbitration_burst": 0, 00:18:07.850 "low_priority_weight": 0, 00:18:07.850 "medium_priority_weight": 0, 00:18:07.850 "high_priority_weight": 0, 00:18:07.850 "nvme_adminq_poll_period_us": 10000, 00:18:07.850 "nvme_ioq_poll_period_us": 0, 00:18:07.850 "io_queue_requests": 512, 00:18:07.850 "delay_cmd_submit": true, 00:18:07.850 "transport_retry_count": 4, 00:18:07.850 "bdev_retry_count": 3, 00:18:07.850 "transport_ack_timeout": 0, 00:18:07.850 "ctrlr_loss_timeout_sec": 0, 00:18:07.850 "reconnect_delay_sec": 0, 00:18:07.850 "fast_io_fail_timeout_sec": 0, 00:18:07.850 "disable_auto_failback": false, 00:18:07.850 "generate_uuids": false, 00:18:07.850 "transport_tos": 0, 00:18:07.850 "nvme_error_stat": false, 00:18:07.850 "rdma_srq_size": 0, 00:18:07.850 "io_path_stat": false, 00:18:07.850 "allow_accel_sequence": false, 00:18:07.850 "rdma_max_cq_size": 0, 00:18:07.850 "rdma_cm_event_timeout_ms": 0, 00:18:07.850 "dhchap_digests": [ 00:18:07.850 
"sha256", 00:18:07.850 "sha384", 00:18:07.850 "sha512" 00:18:07.850 ], 00:18:07.850 "dhchap_dhgroups": [ 00:18:07.850 "null", 00:18:07.850 "ffdhe2048", 00:18:07.850 "ffdhe3072", 00:18:07.850 "ffdhe4096", 00:18:07.850 "ffdhe6144", 00:18:07.850 "ffdhe8192" 00:18:07.850 ] 00:18:07.850 } 00:18:07.850 }, 00:18:07.850 { 00:18:07.850 "method": "bdev_nvme_attach_controller", 00:18:07.850 "params": { 00:18:07.850 "name": "nvme0", 00:18:07.850 "trtype": "TCP", 00:18:07.850 "adrfam": "IPv4", 00:18:07.850 "traddr": "10.0.0.2", 00:18:07.850 "trsvcid": "4420", 00:18:07.850 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:07.850 "prchk_reftag": false, 00:18:07.850 "prchk_guard": false, 00:18:07.850 "ctrlr_loss_timeout_sec": 0, 00:18:07.850 "reconnect_delay_sec": 0, 00:18:07.850 "fast_io_fail_timeout_sec": 0, 00:18:07.850 "psk": "key0", 00:18:07.850 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:07.850 "hdgst": false, 00:18:07.851 "ddgst": false 00:18:07.851 } 00:18:07.851 }, 00:18:07.851 { 00:18:07.851 "method": "bdev_nvme_set_hotplug", 00:18:07.851 "params": { 00:18:07.851 "period_us": 100000, 00:18:07.851 "enable": false 00:18:07.851 } 00:18:07.851 }, 00:18:07.851 { 00:18:07.851 "method": "bdev_enable_histogram", 00:18:07.851 "params": { 00:18:07.851 "name": "nvme0n1", 00:18:07.851 "enable": true 00:18:07.851 } 00:18:07.851 }, 00:18:07.851 { 00:18:07.851 "method": "bdev_wait_for_examine" 00:18:07.851 } 00:18:07.851 ] 00:18:07.851 }, 00:18:07.851 { 00:18:07.851 "subsystem": "nbd", 00:18:07.851 "config": [] 00:18:07.851 } 00:18:07.851 ] 00:18:07.851 }' 00:18:07.851 03:18:42 -- target/tls.sh@266 -- # killprocess 1515403 00:18:07.851 03:18:42 -- common/autotest_common.sh@936 -- # '[' -z 1515403 ']' 00:18:07.851 03:18:42 -- common/autotest_common.sh@940 -- # kill -0 1515403 00:18:07.851 03:18:42 -- common/autotest_common.sh@941 -- # uname 00:18:07.851 03:18:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:07.851 03:18:42 -- common/autotest_common.sh@942 -- # ps 
--no-headers -o comm= 1515403 00:18:07.851 03:18:42 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:07.851 03:18:42 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:07.851 03:18:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1515403' 00:18:07.851 killing process with pid 1515403 00:18:07.851 03:18:42 -- common/autotest_common.sh@955 -- # kill 1515403 00:18:07.851 Received shutdown signal, test time was about 1.000000 seconds 00:18:07.851 00:18:07.851 Latency(us) 00:18:07.851 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:07.851 =================================================================================================================== 00:18:07.851 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:07.851 03:18:42 -- common/autotest_common.sh@960 -- # wait 1515403 00:18:08.108 03:18:42 -- target/tls.sh@267 -- # killprocess 1515280 00:18:08.108 03:18:42 -- common/autotest_common.sh@936 -- # '[' -z 1515280 ']' 00:18:08.108 03:18:42 -- common/autotest_common.sh@940 -- # kill -0 1515280 00:18:08.108 03:18:42 -- common/autotest_common.sh@941 -- # uname 00:18:08.108 03:18:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:08.108 03:18:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1515280 00:18:08.108 03:18:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:08.108 03:18:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:08.108 03:18:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1515280' 00:18:08.108 killing process with pid 1515280 00:18:08.108 03:18:42 -- common/autotest_common.sh@955 -- # kill 1515280 00:18:08.108 03:18:42 -- common/autotest_common.sh@960 -- # wait 1515280 00:18:08.676 03:18:42 -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:18:08.676 03:18:42 -- target/tls.sh@269 -- # echo '{ 00:18:08.676 "subsystems": [ 00:18:08.676 { 00:18:08.676 "subsystem": 
"keyring", 00:18:08.676 "config": [ 00:18:08.676 { 00:18:08.676 "method": "keyring_file_add_key", 00:18:08.676 "params": { 00:18:08.676 "name": "key0", 00:18:08.676 "path": "/tmp/tmp.zgtd0avrUE" 00:18:08.676 } 00:18:08.676 } 00:18:08.676 ] 00:18:08.676 }, 00:18:08.676 { 00:18:08.676 "subsystem": "iobuf", 00:18:08.676 "config": [ 00:18:08.676 { 00:18:08.676 "method": "iobuf_set_options", 00:18:08.676 "params": { 00:18:08.676 "small_pool_count": 8192, 00:18:08.676 "large_pool_count": 1024, 00:18:08.676 "small_bufsize": 8192, 00:18:08.676 "large_bufsize": 135168 00:18:08.676 } 00:18:08.676 } 00:18:08.676 ] 00:18:08.676 }, 00:18:08.676 { 00:18:08.676 "subsystem": "sock", 00:18:08.676 "config": [ 00:18:08.676 { 00:18:08.676 "method": "sock_impl_set_options", 00:18:08.676 "params": { 00:18:08.676 "impl_name": "posix", 00:18:08.676 "recv_buf_size": 2097152, 00:18:08.676 "send_buf_size": 2097152, 00:18:08.676 "enable_recv_pipe": true, 00:18:08.676 "enable_quickack": false, 00:18:08.676 "enable_placement_id": 0, 00:18:08.676 "enable_zerocopy_send_server": true, 00:18:08.676 "enable_zerocopy_send_client": false, 00:18:08.676 "zerocopy_threshold": 0, 00:18:08.676 "tls_version": 0, 00:18:08.676 "enable_ktls": false 00:18:08.676 } 00:18:08.676 }, 00:18:08.676 { 00:18:08.676 "method": "sock_impl_set_options", 00:18:08.676 "params": { 00:18:08.676 "impl_name": "ssl", 00:18:08.676 "recv_buf_size": 4096, 00:18:08.676 "send_buf_size": 4096, 00:18:08.676 "enable_recv_pipe": true, 00:18:08.676 "enable_quickack": false, 00:18:08.676 "enable_placement_id": 0, 00:18:08.676 "enable_zerocopy_send_server": true, 00:18:08.676 "enable_zerocopy_send_client": false, 00:18:08.676 "zerocopy_threshold": 0, 00:18:08.676 "tls_version": 0, 00:18:08.676 "enable_ktls": false 00:18:08.676 } 00:18:08.676 } 00:18:08.676 ] 00:18:08.676 }, 00:18:08.676 { 00:18:08.676 "subsystem": "vmd", 00:18:08.676 "config": [] 00:18:08.676 }, 00:18:08.676 { 00:18:08.676 "subsystem": "accel", 00:18:08.676 "config": [ 
00:18:08.676 { 00:18:08.676 "method": "accel_set_options", 00:18:08.676 "params": { 00:18:08.676 "small_cache_size": 128, 00:18:08.676 "large_cache_size": 16, 00:18:08.676 "task_count": 2048, 00:18:08.676 "sequence_count": 2048, 00:18:08.676 "buf_count": 2048 00:18:08.676 } 00:18:08.676 } 00:18:08.676 ] 00:18:08.676 }, 00:18:08.676 { 00:18:08.676 "subsystem": "bdev", 00:18:08.676 "config": [ 00:18:08.676 { 00:18:08.676 "method": "bdev_set_options", 00:18:08.676 "params": { 00:18:08.676 "bdev_io_pool_size": 65535, 00:18:08.676 "bdev_io_cache_size": 256, 00:18:08.676 "bdev_auto_examine": true, 00:18:08.676 "iobuf_small_cache_size": 128, 00:18:08.676 "iobuf_large_cache_size": 16 00:18:08.676 } 00:18:08.676 }, 00:18:08.676 { 00:18:08.676 "method": "bdev_raid_set_options", 00:18:08.676 "params": { 00:18:08.676 "process_window_size_kb": 1024 00:18:08.676 } 00:18:08.676 }, 00:18:08.676 { 00:18:08.676 "method": "bdev_iscsi_set_options", 00:18:08.676 "params": { 00:18:08.676 "timeout_sec": 30 00:18:08.676 } 00:18:08.676 }, 00:18:08.676 { 00:18:08.676 "method": "bdev_nvme_set_options", 00:18:08.676 "params": { 00:18:08.676 "action_on_timeout": "none", 00:18:08.676 "timeout_us": 0, 00:18:08.676 "timeout_admin_us": 0, 00:18:08.676 "keep_alive_timeout_ms": 10000, 00:18:08.676 "arbitration_burst": 0, 00:18:08.676 "low_priority_weight": 0, 00:18:08.676 "medium_priority_weight": 0, 00:18:08.676 "high_priority_weight": 0, 00:18:08.676 "nvme_adminq_poll_period_us": 10000, 00:18:08.676 "nvme_ioq_poll_period_us": 0, 00:18:08.676 "io_queue_requests": 0, 00:18:08.676 "delay_cmd_submit": true, 00:18:08.676 "transport_retry_count": 4, 00:18:08.676 "bdev_retry_count": 3, 00:18:08.676 "transport_ack_timeout": 0, 00:18:08.676 "ctrlr_loss_timeout_sec": 0, 00:18:08.676 "reconnect_delay_sec": 0, 00:18:08.676 "fast_io_fail_timeout_sec": 0, 00:18:08.676 "disable_auto_failback": false, 00:18:08.676 "generate_uuids": false, 00:18:08.676 "transport_tos": 0, 00:18:08.676 "nvme_error_stat": false, 
00:18:08.676 "rdma_srq_size": 0, 00:18:08.676 "io_path_stat": false, 00:18:08.676 "allow_accel_sequence": false, 00:18:08.676 "rdma_max_cq_size": 0, 00:18:08.676 "rdma_cm_event_timeout_ms": 0, 00:18:08.676 "dhchap_digests": [ 00:18:08.676 "sha256", 00:18:08.676 "sha384", 00:18:08.676 "sha512" 00:18:08.676 ], 00:18:08.676 "dhchap_dhgroups": [ 00:18:08.676 "null", 00:18:08.676 "ffdhe2048", 00:18:08.676 "ffdhe3072", 00:18:08.676 "ffdhe4096", 00:18:08.676 "ffdhe6144", 00:18:08.676 "ffdhe8192" 00:18:08.676 ] 00:18:08.676 } 00:18:08.676 }, 00:18:08.676 { 00:18:08.676 "method": "bdev_nvme_set_hotplug", 00:18:08.676 "params": { 00:18:08.676 "period_us": 100000, 00:18:08.676 "enable": false 00:18:08.676 } 00:18:08.676 }, 00:18:08.676 { 00:18:08.676 "method": "bdev_malloc_create", 00:18:08.676 "params": { 00:18:08.676 "name": "malloc0", 00:18:08.676 "num_blocks": 8192, 00:18:08.676 "block_size": 4096, 00:18:08.676 "physical_block_size": 4096, 00:18:08.676 "uuid": "c7eaba06-5f04-426b-805d-e45c5ecbe813", 00:18:08.676 "optimal_io_boundary": 0 00:18:08.676 } 00:18:08.676 }, 00:18:08.676 { 00:18:08.676 "method": "bdev_wait_for_examine" 00:18:08.676 } 00:18:08.676 ] 00:18:08.676 }, 00:18:08.676 { 00:18:08.676 "subsystem": "nbd", 00:18:08.676 "config": [] 00:18:08.676 }, 00:18:08.676 { 00:18:08.676 "subsystem": "scheduler", 00:18:08.676 "config": [ 00:18:08.676 { 00:18:08.676 "method": "framework_set_scheduler", 00:18:08.676 "params": { 00:18:08.676 "name": "static" 00:18:08.676 } 00:18:08.676 } 00:18:08.676 ] 00:18:08.676 }, 00:18:08.676 { 00:18:08.676 "subsystem": "nvmf", 00:18:08.676 "config": [ 00:18:08.676 { 00:18:08.676 "method": "nvmf_set_config", 00:18:08.676 "params": { 00:18:08.676 "discovery_filter": "match_any", 00:18:08.676 "admin_cmd_passthru": { 00:18:08.676 "identify_ctrlr": false 00:18:08.676 } 00:18:08.676 } 00:18:08.676 }, 00:18:08.676 { 00:18:08.676 "method": "nvmf_set_max_subsystems", 00:18:08.676 "params": { 00:18:08.676 "max_subsystems": 1024 00:18:08.676 } 
00:18:08.676 }, 00:18:08.676 { 00:18:08.676 "method": "nvmf_set_crdt", 00:18:08.676 "params": { 00:18:08.676 "crdt1": 0, 00:18:08.676 "crdt2": 0, 00:18:08.676 "crdt3": 0 00:18:08.676 } 00:18:08.676 }, 00:18:08.676 { 00:18:08.676 "method": "nvmf_create_transport", 00:18:08.676 "params": { 00:18:08.676 "trtype": "TCP", 00:18:08.676 "max_queue_depth": 128, 00:18:08.676 "max_io_qpairs_per_ctrlr": 127, 00:18:08.676 "in_capsule_data_size": 4096, 00:18:08.676 "max_io_size": 131072, 00:18:08.676 "io_unit_size": 131072, 00:18:08.676 "max_aq_depth": 128, 00:18:08.676 "num_shared_buffers": 511, 00:18:08.676 "buf_cache_size": 4294967295, 00:18:08.676 "dif_insert_or_strip": false, 00:18:08.676 "zcopy": false, 00:18:08.676 "c2h_success": false, 00:18:08.676 "sock_priority": 0, 00:18:08.676 "abort_timeout_sec": 1, 00:18:08.676 "ack_timeout": 0, 00:18:08.676 "data_wr_pool_size": 0 00:18:08.676 } 00:18:08.676 }, 00:18:08.676 { 00:18:08.676 "method": "nvmf_create_subsystem", 00:18:08.676 "params": { 00:18:08.676 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:08.676 "allow_any_host": false, 00:18:08.676 "serial_number": "00000000000000000000", 00:18:08.676 "model_number": "SPDK bdev Controller", 00:18:08.676 "max_namespaces": 32, 00:18:08.676 "min_cntlid": 1, 00:18:08.676 "max_cntlid": 65519, 00:18:08.676 "ana_reporting": false 00:18:08.676 } 00:18:08.676 }, 00:18:08.676 { 00:18:08.676 "method": "nvmf_subsystem_add_host", 00:18:08.676 "params": { 00:18:08.676 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:08.676 "host": "nqn.2016-06.io.spdk:host1", 00:18:08.676 "psk": "key0" 00:18:08.676 } 00:18:08.676 }, 00:18:08.676 { 00:18:08.676 "method": "nvmf_subsystem_add_ns", 00:18:08.676 "params": { 00:18:08.676 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:08.676 "namespace": { 00:18:08.676 "nsid": 1, 00:18:08.676 "bdev_name": "malloc0", 00:18:08.676 "nguid": "C7EABA065F04426B805DE45C5ECBE813", 00:18:08.676 "uuid": "c7eaba06-5f04-426b-805d-e45c5ecbe813", 00:18:08.676 "no_auto_visible": false 
00:18:08.676 } 00:18:08.676 } 00:18:08.676 }, 00:18:08.676 { 00:18:08.676 "method": "nvmf_subsystem_add_listener", 00:18:08.676 "params": { 00:18:08.676 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:08.676 "listen_address": { 00:18:08.676 "trtype": "TCP", 00:18:08.676 "adrfam": "IPv4", 00:18:08.676 "traddr": "10.0.0.2", 00:18:08.676 "trsvcid": "4420" 00:18:08.676 }, 00:18:08.676 "secure_channel": true 00:18:08.676 } 00:18:08.676 } 00:18:08.676 ] 00:18:08.676 } 00:18:08.676 ] 00:18:08.676 }' 00:18:08.676 03:18:42 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:08.676 03:18:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:08.676 03:18:42 -- common/autotest_common.sh@10 -- # set +x 00:18:08.676 03:18:42 -- nvmf/common.sh@470 -- # nvmfpid=1515766 00:18:08.676 03:18:42 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:18:08.676 03:18:42 -- nvmf/common.sh@471 -- # waitforlisten 1515766 00:18:08.676 03:18:42 -- common/autotest_common.sh@817 -- # '[' -z 1515766 ']' 00:18:08.676 03:18:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:08.676 03:18:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:08.676 03:18:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:08.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:08.676 03:18:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:08.676 03:18:42 -- common/autotest_common.sh@10 -- # set +x 00:18:08.676 [2024-04-25 03:18:42.919121] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 
00:18:08.676 [2024-04-25 03:18:42.919207] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:08.676 EAL: No free 2048 kB hugepages reported on node 1 00:18:08.676 [2024-04-25 03:18:42.994714] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:08.676 [2024-04-25 03:18:43.112165] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:08.676 [2024-04-25 03:18:43.112214] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:08.676 [2024-04-25 03:18:43.112230] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:08.676 [2024-04-25 03:18:43.112243] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:08.676 [2024-04-25 03:18:43.112254] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:08.676 [2024-04-25 03:18:43.112334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:08.934 [2024-04-25 03:18:43.343034] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:08.934 [2024-04-25 03:18:43.375060] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:08.934 [2024-04-25 03:18:43.384849] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:09.500 03:18:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:09.500 03:18:43 -- common/autotest_common.sh@850 -- # return 0 00:18:09.500 03:18:43 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:09.500 03:18:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:09.500 03:18:43 -- common/autotest_common.sh@10 -- # set +x 00:18:09.500 03:18:43 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:09.500 03:18:43 -- target/tls.sh@272 -- # bdevperf_pid=1515914 00:18:09.500 03:18:43 -- target/tls.sh@273 -- # waitforlisten 1515914 /var/tmp/bdevperf.sock 00:18:09.500 03:18:43 -- common/autotest_common.sh@817 -- # '[' -z 1515914 ']' 00:18:09.500 03:18:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:09.500 03:18:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:09.500 03:18:43 -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:18:09.500 03:18:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:09.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:18:09.500 03:18:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:09.500 03:18:43 -- target/tls.sh@270 -- # echo '{ 00:18:09.500 "subsystems": [ 00:18:09.500 { 00:18:09.500 "subsystem": "keyring", 00:18:09.500 "config": [ 00:18:09.500 { 00:18:09.500 "method": "keyring_file_add_key", 00:18:09.500 "params": { 00:18:09.500 "name": "key0", 00:18:09.500 "path": "/tmp/tmp.zgtd0avrUE" 00:18:09.500 } 00:18:09.500 } 00:18:09.500 ] 00:18:09.500 }, 00:18:09.500 { 00:18:09.500 "subsystem": "iobuf", 00:18:09.500 "config": [ 00:18:09.500 { 00:18:09.500 "method": "iobuf_set_options", 00:18:09.500 "params": { 00:18:09.500 "small_pool_count": 8192, 00:18:09.500 "large_pool_count": 1024, 00:18:09.500 "small_bufsize": 8192, 00:18:09.500 "large_bufsize": 135168 00:18:09.500 } 00:18:09.500 } 00:18:09.500 ] 00:18:09.500 }, 00:18:09.500 { 00:18:09.500 "subsystem": "sock", 00:18:09.500 "config": [ 00:18:09.500 { 00:18:09.500 "method": "sock_impl_set_options", 00:18:09.500 "params": { 00:18:09.500 "impl_name": "posix", 00:18:09.500 "recv_buf_size": 2097152, 00:18:09.500 "send_buf_size": 2097152, 00:18:09.500 "enable_recv_pipe": true, 00:18:09.500 "enable_quickack": false, 00:18:09.500 "enable_placement_id": 0, 00:18:09.500 "enable_zerocopy_send_server": true, 00:18:09.500 "enable_zerocopy_send_client": false, 00:18:09.500 "zerocopy_threshold": 0, 00:18:09.500 "tls_version": 0, 00:18:09.500 "enable_ktls": false 00:18:09.500 } 00:18:09.500 }, 00:18:09.500 { 00:18:09.500 "method": "sock_impl_set_options", 00:18:09.500 "params": { 00:18:09.500 "impl_name": "ssl", 00:18:09.500 "recv_buf_size": 4096, 00:18:09.500 "send_buf_size": 4096, 00:18:09.500 "enable_recv_pipe": true, 00:18:09.500 "enable_quickack": false, 00:18:09.500 "enable_placement_id": 0, 00:18:09.500 "enable_zerocopy_send_server": true, 00:18:09.500 "enable_zerocopy_send_client": false, 00:18:09.500 "zerocopy_threshold": 0, 00:18:09.500 "tls_version": 0, 00:18:09.500 "enable_ktls": false 00:18:09.500 } 00:18:09.500 } 
00:18:09.500 ] 00:18:09.500 }, 00:18:09.500 { 00:18:09.500 "subsystem": "vmd", 00:18:09.500 "config": [] 00:18:09.500 }, 00:18:09.500 { 00:18:09.500 "subsystem": "accel", 00:18:09.500 "config": [ 00:18:09.500 { 00:18:09.500 "method": "accel_set_options", 00:18:09.500 "params": { 00:18:09.500 "small_cache_size": 128, 00:18:09.500 "large_cache_size": 16, 00:18:09.500 "task_count": 2048, 00:18:09.500 "sequence_count": 2048, 00:18:09.500 "buf_count": 2048 00:18:09.500 } 00:18:09.500 } 00:18:09.500 ] 00:18:09.500 }, 00:18:09.500 { 00:18:09.500 "subsystem": "bdev", 00:18:09.500 "config": [ 00:18:09.500 { 00:18:09.500 "method": "bdev_set_options", 00:18:09.500 "params": { 00:18:09.500 "bdev_io_pool_size": 65535, 00:18:09.500 "bdev_io_cache_size": 256, 00:18:09.500 "bdev_auto_examine": true, 00:18:09.500 "iobuf_small_cache_size": 128, 00:18:09.500 "iobuf_large_cache_size": 16 00:18:09.500 } 00:18:09.500 }, 00:18:09.500 { 00:18:09.500 "method": "bdev_raid_set_options", 00:18:09.500 "params": { 00:18:09.500 "process_window_size_kb": 1024 00:18:09.500 } 00:18:09.500 }, 00:18:09.500 { 00:18:09.500 "method": "bdev_iscsi_set_options", 00:18:09.500 "params": { 00:18:09.500 "timeout_sec": 30 00:18:09.500 } 00:18:09.500 }, 00:18:09.500 { 00:18:09.500 "method": "bdev_nvme_set_options", 00:18:09.500 "params": { 00:18:09.500 "action_on_timeout": "none", 00:18:09.500 "timeout_us": 0, 00:18:09.500 "timeout_admin_us": 0, 00:18:09.500 "keep_alive_timeout_ms": 10000, 00:18:09.500 "arbitration_burst": 0, 00:18:09.500 "low_priority_weight": 0, 00:18:09.500 "medium_priority_weight": 0, 00:18:09.500 "high_priority_weight": 0, 00:18:09.500 "nvme_adminq_poll_period_us": 10000, 00:18:09.500 "nvme_ioq_poll_period_us": 0, 00:18:09.501 "io_queue_requests": 512, 00:18:09.501 "delay_cmd_submit": true, 00:18:09.501 "transport_retry_count": 4, 00:18:09.501 "bdev_retry_count": 3, 00:18:09.501 "transport_ack_timeout": 0, 00:18:09.501 "ctrlr_loss_timeout_sec": 0, 00:18:09.501 "reconnect_delay_sec": 0, 
00:18:09.501 "fast_io_fail_timeout_sec": 0, 00:18:09.501 "disable_auto_failback": false, 00:18:09.501 "generate_uuids": false, 00:18:09.501 "transport_tos": 0, 00:18:09.501 "nvme_error_stat": false, 00:18:09.501 "rdma_srq_size": 0, 00:18:09.501 "io_path_stat": false, 00:18:09.501 "allow_accel_sequence": false, 00:18:09.501 "rdma_max_cq_size": 0, 00:18:09.501 "rdma_cm_event_timeout_ms": 0, 00:18:09.501 "dhchap_digests": [ 00:18:09.501 "sha256", 00:18:09.501 "sha384", 00:18:09.501 "sha512" 00:18:09.501 ], 00:18:09.501 "dhchap_dhgroups": [ 00:18:09.501 "null", 00:18:09.501 "ffdhe2048", 00:18:09.501 "ffdhe3072", 00:18:09.501 "ffdhe4096", 00:18:09.501 "ffdhe6144", 00:18:09.501 "ffdhe8192" 00:18:09.501 ] 00:18:09.501 } 00:18:09.501 }, 00:18:09.501 { 00:18:09.501 "method": "bdev_nvme_attach_controller", 00:18:09.501 "params": { 00:18:09.501 "name": "nvme0", 00:18:09.501 "trtype": "TCP", 00:18:09.501 "adrfam": "IPv4", 00:18:09.501 "traddr": "10.0.0.2", 00:18:09.501 "trsvcid": "4420", 00:18:09.501 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:09.501 "prchk_reftag": false, 00:18:09.501 "prchk_guard": false, 00:18:09.501 "ctrlr_loss_timeout_sec": 0, 00:18:09.501 "reconnect_delay_sec": 0, 00:18:09.501 "fast_io_fail_timeout_sec": 0, 00:18:09.501 "psk": "key0", 00:18:09.501 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:09.501 "hdgst": false, 00:18:09.501 "ddgst": false 00:18:09.501 } 00:18:09.501 }, 00:18:09.501 { 00:18:09.501 "method": "bdev_nvme_set_hotplug", 00:18:09.501 "params": { 00:18:09.501 "period_us": 100000, 00:18:09.501 "enable": false 00:18:09.501 } 00:18:09.501 }, 00:18:09.501 { 00:18:09.501 "method": "bdev_enable_histogram", 00:18:09.501 "params": { 00:18:09.501 "name": "nvme0n1", 00:18:09.501 "enable": true 00:18:09.501 } 00:18:09.501 }, 00:18:09.501 { 00:18:09.501 "method": "bdev_wait_for_examine" 00:18:09.501 } 00:18:09.501 ] 00:18:09.501 }, 00:18:09.501 { 00:18:09.501 "subsystem": "nbd", 00:18:09.501 "config": [] 00:18:09.501 } 00:18:09.501 ] 00:18:09.501 }' 
00:18:09.501 03:18:43 -- common/autotest_common.sh@10 -- # set +x 00:18:09.759 [2024-04-25 03:18:44.019471] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:18:09.759 [2024-04-25 03:18:44.019557] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1515914 ] 00:18:09.759 EAL: No free 2048 kB hugepages reported on node 1 00:18:09.759 [2024-04-25 03:18:44.088270] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.759 [2024-04-25 03:18:44.197301] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:10.018 [2024-04-25 03:18:44.376365] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:10.584 03:18:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:10.584 03:18:45 -- common/autotest_common.sh@850 -- # return 0 00:18:10.584 03:18:45 -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:10.584 03:18:45 -- target/tls.sh@275 -- # jq -r '.[].name' 00:18:10.842 03:18:45 -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.842 03:18:45 -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:11.099 Running I/O for 1 seconds... 
00:18:12.033 00:18:12.033 Latency(us) 00:18:12.033 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:12.033 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:12.033 Verification LBA range: start 0x0 length 0x2000 00:18:12.033 nvme0n1 : 1.08 1256.55 4.91 0.00 0.00 98744.61 6553.60 147577.36 00:18:12.033 =================================================================================================================== 00:18:12.033 Total : 1256.55 4.91 0.00 0.00 98744.61 6553.60 147577.36 00:18:12.033 0 00:18:12.033 03:18:46 -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:18:12.033 03:18:46 -- target/tls.sh@279 -- # cleanup 00:18:12.033 03:18:46 -- target/tls.sh@15 -- # process_shm --id 0 00:18:12.033 03:18:46 -- common/autotest_common.sh@794 -- # type=--id 00:18:12.033 03:18:46 -- common/autotest_common.sh@795 -- # id=0 00:18:12.033 03:18:46 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:18:12.033 03:18:46 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:12.033 03:18:46 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:18:12.033 03:18:46 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:18:12.033 03:18:46 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:18:12.033 03:18:46 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:12.033 nvmf_trace.0 00:18:12.033 03:18:46 -- common/autotest_common.sh@809 -- # return 0 00:18:12.033 03:18:46 -- target/tls.sh@16 -- # killprocess 1515914 00:18:12.033 03:18:46 -- common/autotest_common.sh@936 -- # '[' -z 1515914 ']' 00:18:12.033 03:18:46 -- common/autotest_common.sh@940 -- # kill -0 1515914 00:18:12.033 03:18:46 -- common/autotest_common.sh@941 -- # uname 00:18:12.033 03:18:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:12.033 03:18:46 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1515914 00:18:12.292 03:18:46 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:12.292 03:18:46 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:12.292 03:18:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1515914' 00:18:12.292 killing process with pid 1515914 00:18:12.292 03:18:46 -- common/autotest_common.sh@955 -- # kill 1515914 00:18:12.292 Received shutdown signal, test time was about 1.000000 seconds 00:18:12.292 00:18:12.292 Latency(us) 00:18:12.292 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:12.292 =================================================================================================================== 00:18:12.292 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:12.292 03:18:46 -- common/autotest_common.sh@960 -- # wait 1515914 00:18:12.550 03:18:46 -- target/tls.sh@17 -- # nvmftestfini 00:18:12.550 03:18:46 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:12.550 03:18:46 -- nvmf/common.sh@117 -- # sync 00:18:12.550 03:18:46 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:12.550 03:18:46 -- nvmf/common.sh@120 -- # set +e 00:18:12.550 03:18:46 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:12.550 03:18:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:12.550 rmmod nvme_tcp 00:18:12.550 rmmod nvme_fabrics 00:18:12.550 rmmod nvme_keyring 00:18:12.550 03:18:46 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:12.550 03:18:46 -- nvmf/common.sh@124 -- # set -e 00:18:12.550 03:18:46 -- nvmf/common.sh@125 -- # return 0 00:18:12.550 03:18:46 -- nvmf/common.sh@478 -- # '[' -n 1515766 ']' 00:18:12.550 03:18:46 -- nvmf/common.sh@479 -- # killprocess 1515766 00:18:12.550 03:18:46 -- common/autotest_common.sh@936 -- # '[' -z 1515766 ']' 00:18:12.550 03:18:46 -- common/autotest_common.sh@940 -- # kill -0 1515766 00:18:12.550 03:18:46 -- common/autotest_common.sh@941 -- # uname 
00:18:12.550 03:18:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:12.550 03:18:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1515766 00:18:12.550 03:18:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:12.550 03:18:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:12.550 03:18:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1515766' 00:18:12.550 killing process with pid 1515766 00:18:12.550 03:18:46 -- common/autotest_common.sh@955 -- # kill 1515766 00:18:12.550 03:18:46 -- common/autotest_common.sh@960 -- # wait 1515766 00:18:12.809 03:18:47 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:12.809 03:18:47 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:12.809 03:18:47 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:12.809 03:18:47 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:12.809 03:18:47 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:12.809 03:18:47 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:12.809 03:18:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:12.809 03:18:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:15.343 03:18:49 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:15.343 03:18:49 -- target/tls.sh@18 -- # rm -f /tmp/tmp.ZD9rHB5R25 /tmp/tmp.YkFNsSux2U /tmp/tmp.zgtd0avrUE 00:18:15.344 00:18:15.344 real 1m22.759s 00:18:15.344 user 2m12.085s 00:18:15.344 sys 0m27.766s 00:18:15.344 03:18:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:15.344 03:18:49 -- common/autotest_common.sh@10 -- # set +x 00:18:15.344 ************************************ 00:18:15.344 END TEST nvmf_tls 00:18:15.344 ************************************ 00:18:15.344 03:18:49 -- nvmf/nvmf.sh@61 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:15.344 03:18:49 -- 
common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:15.344 03:18:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:15.344 03:18:49 -- common/autotest_common.sh@10 -- # set +x 00:18:15.344 ************************************ 00:18:15.344 START TEST nvmf_fips 00:18:15.344 ************************************ 00:18:15.344 03:18:49 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:15.344 * Looking for test storage... 00:18:15.344 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:18:15.344 03:18:49 -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:15.344 03:18:49 -- nvmf/common.sh@7 -- # uname -s 00:18:15.344 03:18:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:15.344 03:18:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:15.344 03:18:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:15.344 03:18:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:15.344 03:18:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:15.344 03:18:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:15.344 03:18:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:15.344 03:18:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:15.344 03:18:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:15.344 03:18:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:15.344 03:18:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:15.344 03:18:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:15.344 03:18:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:15.344 03:18:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:15.344 03:18:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:15.344 
03:18:49 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:15.344 03:18:49 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:15.344 03:18:49 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:15.344 03:18:49 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:15.344 03:18:49 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:15.344 03:18:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.344 03:18:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.344 03:18:49 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.344 03:18:49 -- paths/export.sh@5 -- # export PATH 00:18:15.344 03:18:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.344 03:18:49 -- nvmf/common.sh@47 -- # : 0 00:18:15.344 03:18:49 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:15.344 03:18:49 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:15.344 03:18:49 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:15.344 03:18:49 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:15.344 03:18:49 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:15.344 03:18:49 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:15.344 03:18:49 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:15.344 03:18:49 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:15.344 03:18:49 -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:15.344 03:18:49 -- fips/fips.sh@89 -- # check_openssl_version 
00:18:15.344 03:18:49 -- fips/fips.sh@83 -- # local target=3.0.0 00:18:15.344 03:18:49 -- fips/fips.sh@85 -- # openssl version 00:18:15.344 03:18:49 -- fips/fips.sh@85 -- # awk '{print $2}' 00:18:15.344 03:18:49 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:18:15.344 03:18:49 -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:18:15.344 03:18:49 -- scripts/common.sh@330 -- # local ver1 ver1_l 00:18:15.344 03:18:49 -- scripts/common.sh@331 -- # local ver2 ver2_l 00:18:15.344 03:18:49 -- scripts/common.sh@333 -- # IFS=.-: 00:18:15.344 03:18:49 -- scripts/common.sh@333 -- # read -ra ver1 00:18:15.344 03:18:49 -- scripts/common.sh@334 -- # IFS=.-: 00:18:15.344 03:18:49 -- scripts/common.sh@334 -- # read -ra ver2 00:18:15.344 03:18:49 -- scripts/common.sh@335 -- # local 'op=>=' 00:18:15.344 03:18:49 -- scripts/common.sh@337 -- # ver1_l=3 00:18:15.344 03:18:49 -- scripts/common.sh@338 -- # ver2_l=3 00:18:15.344 03:18:49 -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:18:15.344 03:18:49 -- scripts/common.sh@341 -- # case "$op" in 00:18:15.344 03:18:49 -- scripts/common.sh@345 -- # : 1 00:18:15.344 03:18:49 -- scripts/common.sh@361 -- # (( v = 0 )) 00:18:15.344 03:18:49 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:15.344 03:18:49 -- scripts/common.sh@362 -- # decimal 3 00:18:15.344 03:18:49 -- scripts/common.sh@350 -- # local d=3 00:18:15.344 03:18:49 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:15.344 03:18:49 -- scripts/common.sh@352 -- # echo 3 00:18:15.344 03:18:49 -- scripts/common.sh@362 -- # ver1[v]=3 00:18:15.344 03:18:49 -- scripts/common.sh@363 -- # decimal 3 00:18:15.344 03:18:49 -- scripts/common.sh@350 -- # local d=3 00:18:15.344 03:18:49 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:15.344 03:18:49 -- scripts/common.sh@352 -- # echo 3 00:18:15.344 03:18:49 -- scripts/common.sh@363 -- # ver2[v]=3 00:18:15.344 03:18:49 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:15.344 03:18:49 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:18:15.344 03:18:49 -- scripts/common.sh@361 -- # (( v++ )) 00:18:15.344 03:18:49 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:15.344 03:18:49 -- scripts/common.sh@362 -- # decimal 0 00:18:15.344 03:18:49 -- scripts/common.sh@350 -- # local d=0 00:18:15.344 03:18:49 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:15.344 03:18:49 -- scripts/common.sh@352 -- # echo 0 00:18:15.344 03:18:49 -- scripts/common.sh@362 -- # ver1[v]=0 00:18:15.344 03:18:49 -- scripts/common.sh@363 -- # decimal 0 00:18:15.344 03:18:49 -- scripts/common.sh@350 -- # local d=0 00:18:15.344 03:18:49 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:15.344 03:18:49 -- scripts/common.sh@352 -- # echo 0 00:18:15.344 03:18:49 -- scripts/common.sh@363 -- # ver2[v]=0 00:18:15.344 03:18:49 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:15.344 03:18:49 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:18:15.344 03:18:49 -- scripts/common.sh@361 -- # (( v++ )) 00:18:15.344 03:18:49 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:15.344 03:18:49 -- scripts/common.sh@362 -- # decimal 9 00:18:15.344 03:18:49 -- scripts/common.sh@350 -- # local d=9 00:18:15.344 03:18:49 -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:18:15.344 03:18:49 -- scripts/common.sh@352 -- # echo 9 00:18:15.344 03:18:49 -- scripts/common.sh@362 -- # ver1[v]=9 00:18:15.344 03:18:49 -- scripts/common.sh@363 -- # decimal 0 00:18:15.344 03:18:49 -- scripts/common.sh@350 -- # local d=0 00:18:15.344 03:18:49 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:15.344 03:18:49 -- scripts/common.sh@352 -- # echo 0 00:18:15.344 03:18:49 -- scripts/common.sh@363 -- # ver2[v]=0 00:18:15.344 03:18:49 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:15.344 03:18:49 -- scripts/common.sh@364 -- # return 0 00:18:15.344 03:18:49 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:18:15.344 03:18:49 -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:18:15.344 03:18:49 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:18:15.344 03:18:49 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:18:15.344 03:18:49 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:18:15.344 03:18:49 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:18:15.344 03:18:49 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:18:15.344 03:18:49 -- fips/fips.sh@113 -- # build_openssl_config 00:18:15.344 03:18:49 -- fips/fips.sh@37 -- # cat 00:18:15.344 03:18:49 -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:18:15.344 03:18:49 -- fips/fips.sh@58 -- # cat - 00:18:15.344 03:18:49 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:18:15.344 03:18:49 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:18:15.344 03:18:49 -- fips/fips.sh@116 -- # mapfile -t providers 00:18:15.344 03:18:49 -- fips/fips.sh@116 -- # openssl list -providers 00:18:15.344 03:18:49 -- fips/fips.sh@116 -- # grep name 00:18:15.344 03:18:49 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:18:15.344 03:18:49 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:18:15.344 03:18:49 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:18:15.344 03:18:49 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:18:15.344 03:18:49 -- fips/fips.sh@127 -- # : 00:18:15.344 03:18:49 -- common/autotest_common.sh@638 -- # local es=0 00:18:15.345 03:18:49 -- common/autotest_common.sh@640 -- # valid_exec_arg openssl md5 /dev/fd/62 00:18:15.345 03:18:49 -- common/autotest_common.sh@626 -- # local arg=openssl 00:18:15.345 03:18:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:15.345 03:18:49 -- common/autotest_common.sh@630 -- # type -t openssl 00:18:15.345 03:18:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:15.345 03:18:49 -- common/autotest_common.sh@632 -- # type -P openssl 00:18:15.345 03:18:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:15.345 03:18:49 -- common/autotest_common.sh@632 -- # arg=/usr/bin/openssl 00:18:15.345 03:18:49 -- common/autotest_common.sh@632 -- # [[ -x /usr/bin/openssl ]] 00:18:15.345 03:18:49 -- common/autotest_common.sh@641 -- # openssl md5 /dev/fd/62 00:18:15.345 Error setting digest 00:18:15.345 00728A56727F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:18:15.345 
00728A56727F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:18:15.345 03:18:49 -- common/autotest_common.sh@641 -- # es=1 00:18:15.345 03:18:49 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:15.345 03:18:49 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:15.345 03:18:49 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:15.345 03:18:49 -- fips/fips.sh@130 -- # nvmftestinit 00:18:15.345 03:18:49 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:15.345 03:18:49 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:15.345 03:18:49 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:15.345 03:18:49 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:15.345 03:18:49 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:15.345 03:18:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:15.345 03:18:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:15.345 03:18:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:15.345 03:18:49 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:15.345 03:18:49 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:15.345 03:18:49 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:15.345 03:18:49 -- common/autotest_common.sh@10 -- # set +x 00:18:17.274 03:18:51 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:17.274 03:18:51 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:17.274 03:18:51 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:17.274 03:18:51 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:17.274 03:18:51 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:17.274 03:18:51 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:17.274 03:18:51 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:17.274 03:18:51 -- nvmf/common.sh@295 -- # net_devs=() 00:18:17.274 03:18:51 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:17.274 03:18:51 -- 
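The trace above (fips/fips.sh@116-120) captures `openssl list -providers` into an array, checks that exactly two providers are active (one `base`, one `fips`), and then proves enforcement by expecting `openssl md5` to fail. A minimal standalone sketch of that provider check as a bash function; the fixed base-then-fips ordering is an assumption carried over from this particular trace:

```shell
#!/usr/bin/env bash
# check_providers: validate "openssl list -providers | grep name" output,
# mirroring the fips.sh@116-120 checks in the trace above. Expects exactly
# two provider names: a base provider followed by a fips provider.
check_providers() {
    local -a names
    mapfile -t names <<< "$1"
    (( ${#names[@]} == 2 )) || return 1       # fips.sh@120: (( 2 != 2 ))
    [[ ${names[0]} == *base* ]] || return 1   # base provider present
    [[ ${names[1]} == *fips* ]] || return 1   # fips provider present
}
```

On a live system this would be driven by `check_providers "$(openssl list -providers | grep name)"`; the follow-up `openssl md5` failure seen in the trace is what actually demonstrates that the approved-algorithm restriction is enforced.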
nvmf/common.sh@296 -- # e810=() 00:18:17.274 03:18:51 -- nvmf/common.sh@296 -- # local -ga e810 00:18:17.274 03:18:51 -- nvmf/common.sh@297 -- # x722=() 00:18:17.274 03:18:51 -- nvmf/common.sh@297 -- # local -ga x722 00:18:17.274 03:18:51 -- nvmf/common.sh@298 -- # mlx=() 00:18:17.274 03:18:51 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:17.274 03:18:51 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:17.274 03:18:51 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:17.274 03:18:51 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:17.274 03:18:51 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:17.274 03:18:51 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:17.274 03:18:51 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:17.274 03:18:51 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:17.274 03:18:51 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:17.274 03:18:51 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:17.274 03:18:51 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:17.274 03:18:51 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:17.274 03:18:51 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:17.274 03:18:51 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:17.274 03:18:51 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:17.274 03:18:51 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:17.274 03:18:51 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:17.274 03:18:51 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:17.274 03:18:51 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:17.274 03:18:51 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:17.274 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:17.274 
03:18:51 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:17.274 03:18:51 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:17.274 03:18:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:17.274 03:18:51 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:17.274 03:18:51 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:17.274 03:18:51 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:17.274 03:18:51 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:17.274 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:17.274 03:18:51 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:17.274 03:18:51 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:17.274 03:18:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:17.274 03:18:51 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:17.274 03:18:51 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:17.274 03:18:51 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:17.274 03:18:51 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:17.274 03:18:51 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:17.274 03:18:51 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:17.274 03:18:51 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:17.274 03:18:51 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:17.274 03:18:51 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:17.274 03:18:51 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:17.274 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:17.274 03:18:51 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:17.274 03:18:51 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:17.274 03:18:51 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:17.274 03:18:51 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:17.274 03:18:51 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:17.274 03:18:51 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:17.274 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:17.274 03:18:51 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:17.274 03:18:51 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:17.274 03:18:51 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:17.274 03:18:51 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:17.274 03:18:51 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:18:17.274 03:18:51 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:18:17.274 03:18:51 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:17.274 03:18:51 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:17.274 03:18:51 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:17.274 03:18:51 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:17.274 03:18:51 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:17.274 03:18:51 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:17.274 03:18:51 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:17.274 03:18:51 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:17.274 03:18:51 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:17.274 03:18:51 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:17.274 03:18:51 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:17.274 03:18:51 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:17.274 03:18:51 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:17.274 03:18:51 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:17.274 03:18:51 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:17.274 03:18:51 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:17.274 03:18:51 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk 
ip link set cvl_0_0 up 00:18:17.274 03:18:51 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:17.274 03:18:51 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:17.274 03:18:51 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:17.274 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:17.274 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:18:17.274 00:18:17.274 --- 10.0.0.2 ping statistics --- 00:18:17.274 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:17.274 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:18:17.274 03:18:51 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:17.274 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:17.274 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:18:17.274 00:18:17.274 --- 10.0.0.1 ping statistics --- 00:18:17.274 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:17.274 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:18:17.274 03:18:51 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:17.274 03:18:51 -- nvmf/common.sh@411 -- # return 0 00:18:17.274 03:18:51 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:17.274 03:18:51 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:17.274 03:18:51 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:17.274 03:18:51 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:17.274 03:18:51 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:17.274 03:18:51 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:17.274 03:18:51 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:17.274 03:18:51 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:18:17.274 03:18:51 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:17.274 03:18:51 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:17.274 03:18:51 -- common/autotest_common.sh@10 -- # set +x 
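The nvmf_tcp_init sequence above isolates the target port in a network namespace so the initiator (10.0.0.1 on cvl_0_1) and target (10.0.0.2 on cvl_0_0 inside cvl_0_0_ns_spdk) talk over the physical link. A hedged sketch that emits the same commands, parameterized by interface names; it prints rather than executes so it runs without root (pipe the output to `sudo bash` to actually apply it):

```shell
#!/usr/bin/env bash
# emit_tcp_init: print the namespace setup that nvmf_tcp_init performs in
# the trace above (nvmf/common.sh@244-264), parameterized by interface
# names. Printing instead of executing keeps the sketch runnable without
# root privileges or real NICs.
emit_tcp_init() {
    local target_if=$1 initiator_if=$2 ns=${3:-${1}_ns_spdk}
    cat <<EOF
ip -4 addr flush $target_if
ip -4 addr flush $initiator_if
ip netns add $ns
ip link set $target_if netns $ns
ip addr add 10.0.0.1/24 dev $initiator_if
ip netns exec $ns ip addr add 10.0.0.2/24 dev $target_if
ip link set $initiator_if up
ip netns exec $ns ip link set $target_if up
ip netns exec $ns ip link set lo up
iptables -I INPUT 1 -i $initiator_if -p tcp --dport 4420 -j ACCEPT
EOF
}
```

`emit_tcp_init cvl_0_0 cvl_0_1` reproduces the command sequence visible in the trace; the two pings that follow in the log verify connectivity in both directions before the target application starts inside the namespace.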
00:18:17.274 03:18:51 -- nvmf/common.sh@470 -- # nvmfpid=1518285 00:18:17.274 03:18:51 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:17.274 03:18:51 -- nvmf/common.sh@471 -- # waitforlisten 1518285 00:18:17.274 03:18:51 -- common/autotest_common.sh@817 -- # '[' -z 1518285 ']' 00:18:17.274 03:18:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:17.274 03:18:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:17.275 03:18:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:17.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:17.275 03:18:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:17.275 03:18:51 -- common/autotest_common.sh@10 -- # set +x 00:18:17.275 [2024-04-25 03:18:51.632763] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:18:17.275 [2024-04-25 03:18:51.632852] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:17.275 EAL: No free 2048 kB hugepages reported on node 1 00:18:17.275 [2024-04-25 03:18:51.701464] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:17.532 [2024-04-25 03:18:51.815322] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:17.532 [2024-04-25 03:18:51.815391] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:17.532 [2024-04-25 03:18:51.815408] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:17.532 [2024-04-25 03:18:51.815422] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:17.532 [2024-04-25 03:18:51.815435] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:17.532 [2024-04-25 03:18:51.815469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:18.098 03:18:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:18.098 03:18:52 -- common/autotest_common.sh@850 -- # return 0 00:18:18.098 03:18:52 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:18.098 03:18:52 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:18.098 03:18:52 -- common/autotest_common.sh@10 -- # set +x 00:18:18.355 03:18:52 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:18.356 03:18:52 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:18:18.356 03:18:52 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:18.356 03:18:52 -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:18.356 03:18:52 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:18.356 03:18:52 -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:18.356 03:18:52 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:18.356 03:18:52 -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:18.356 03:18:52 -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:18.356 [2024-04-25 03:18:52.844250] tcp.c: 
669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:18.614 [2024-04-25 03:18:52.860223] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:18.614 [2024-04-25 03:18:52.860423] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:18.614 [2024-04-25 03:18:52.891608] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:18.614 malloc0 00:18:18.614 03:18:52 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:18.614 03:18:52 -- fips/fips.sh@147 -- # bdevperf_pid=1518441 00:18:18.614 03:18:52 -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:18.614 03:18:52 -- fips/fips.sh@148 -- # waitforlisten 1518441 /var/tmp/bdevperf.sock 00:18:18.614 03:18:52 -- common/autotest_common.sh@817 -- # '[' -z 1518441 ']' 00:18:18.614 03:18:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:18.614 03:18:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:18.614 03:18:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:18.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:18.614 03:18:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:18.614 03:18:52 -- common/autotest_common.sh@10 -- # set +x 00:18:18.614 [2024-04-25 03:18:52.978320] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 
00:18:18.614 [2024-04-25 03:18:52.978404] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1518441 ] 00:18:18.614 EAL: No free 2048 kB hugepages reported on node 1 00:18:18.614 [2024-04-25 03:18:53.035177] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.872 [2024-04-25 03:18:53.139695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:19.436 03:18:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:19.436 03:18:53 -- common/autotest_common.sh@850 -- # return 0 00:18:19.436 03:18:53 -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:19.694 [2024-04-25 03:18:54.096095] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:19.694 [2024-04-25 03:18:54.096212] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:19.694 TLSTESTn1 00:18:19.694 03:18:54 -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:19.951 Running I/O for 10 seconds... 
00:18:29.953 00:18:29.953 Latency(us) 00:18:29.954 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:29.954 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:29.954 Verification LBA range: start 0x0 length 0x2000 00:18:29.954 TLSTESTn1 : 10.07 1371.62 5.36 0.00 0.00 93040.55 11990.66 140586.86 00:18:29.954 =================================================================================================================== 00:18:29.954 Total : 1371.62 5.36 0.00 0.00 93040.55 11990.66 140586.86 00:18:29.954 0 00:18:29.954 03:19:04 -- fips/fips.sh@1 -- # cleanup 00:18:29.954 03:19:04 -- fips/fips.sh@15 -- # process_shm --id 0 00:18:29.954 03:19:04 -- common/autotest_common.sh@794 -- # type=--id 00:18:29.954 03:19:04 -- common/autotest_common.sh@795 -- # id=0 00:18:29.954 03:19:04 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:18:29.954 03:19:04 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:29.954 03:19:04 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:18:29.954 03:19:04 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:18:29.954 03:19:04 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:18:29.954 03:19:04 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:29.954 nvmf_trace.0 00:18:30.212 03:19:04 -- common/autotest_common.sh@809 -- # return 0 00:18:30.212 03:19:04 -- fips/fips.sh@16 -- # killprocess 1518441 00:18:30.212 03:19:04 -- common/autotest_common.sh@936 -- # '[' -z 1518441 ']' 00:18:30.212 03:19:04 -- common/autotest_common.sh@940 -- # kill -0 1518441 00:18:30.212 03:19:04 -- common/autotest_common.sh@941 -- # uname 00:18:30.212 03:19:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:30.212 03:19:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1518441 00:18:30.212 
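The run above depends on the TLS pre-shared key provisioned earlier at fips.sh@136-141: the key string is written with no trailing newline and locked down to mode 0600 before being handed to both the target and bdevperf via `--psk`. A small sketch of that provisioning step (the helper name is illustrative, not an SPDK function):

```shell
#!/usr/bin/env bash
# write_psk: store an NVMe/TCP TLS pre-shared key the way fips.sh@138-139
# does in the trace above: written with no trailing newline, then chmod
# 0600 so only the test user can read it. Helper name is mine.
write_psk() {
    local key=$1 path=$2
    printf '%s' "$key" > "$path"   # echo -n equivalent; no newline appended
    chmod 0600 "$path"
}
```

`write_psk 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' key.txt` matches the key literal in the trace; the same path is then passed as `--psk` to `bdev_nvme_attach_controller` at fips.sh@150.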
03:19:04 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:30.212 03:19:04 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:30.212 03:19:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1518441' 00:18:30.212 killing process with pid 1518441 00:18:30.212 03:19:04 -- common/autotest_common.sh@955 -- # kill 1518441 00:18:30.212 Received shutdown signal, test time was about 10.000000 seconds 00:18:30.212 00:18:30.212 Latency(us) 00:18:30.212 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:30.212 =================================================================================================================== 00:18:30.212 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:30.212 [2024-04-25 03:19:04.490946] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:30.212 03:19:04 -- common/autotest_common.sh@960 -- # wait 1518441 00:18:30.470 03:19:04 -- fips/fips.sh@17 -- # nvmftestfini 00:18:30.470 03:19:04 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:30.470 03:19:04 -- nvmf/common.sh@117 -- # sync 00:18:30.470 03:19:04 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:30.470 03:19:04 -- nvmf/common.sh@120 -- # set +e 00:18:30.470 03:19:04 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:30.470 03:19:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:30.470 rmmod nvme_tcp 00:18:30.470 rmmod nvme_fabrics 00:18:30.470 rmmod nvme_keyring 00:18:30.470 03:19:04 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:30.470 03:19:04 -- nvmf/common.sh@124 -- # set -e 00:18:30.470 03:19:04 -- nvmf/common.sh@125 -- # return 0 00:18:30.470 03:19:04 -- nvmf/common.sh@478 -- # '[' -n 1518285 ']' 00:18:30.470 03:19:04 -- nvmf/common.sh@479 -- # killprocess 1518285 00:18:30.470 03:19:04 -- common/autotest_common.sh@936 -- # '[' -z 1518285 ']' 00:18:30.470 03:19:04 -- 
common/autotest_common.sh@940 -- # kill -0 1518285 00:18:30.470 03:19:04 -- common/autotest_common.sh@941 -- # uname 00:18:30.470 03:19:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:30.470 03:19:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1518285 00:18:30.470 03:19:04 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:30.470 03:19:04 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:30.470 03:19:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1518285' 00:18:30.470 killing process with pid 1518285 00:18:30.470 03:19:04 -- common/autotest_common.sh@955 -- # kill 1518285 00:18:30.470 [2024-04-25 03:19:04.841984] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:30.470 03:19:04 -- common/autotest_common.sh@960 -- # wait 1518285 00:18:30.728 03:19:05 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:30.728 03:19:05 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:30.728 03:19:05 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:30.728 03:19:05 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:30.728 03:19:05 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:30.728 03:19:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:30.728 03:19:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:30.728 03:19:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:33.262 03:19:07 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:33.262 03:19:07 -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:33.262 00:18:33.262 real 0m17.820s 00:18:33.262 user 0m22.541s 00:18:33.262 sys 0m6.700s 00:18:33.262 03:19:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:33.262 03:19:07 -- common/autotest_common.sh@10 -- # set +x 00:18:33.262 
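The epilogue above tears the fixture down in reverse: unload the NVMe/TCP kernel modules, remove the target namespace (via `_remove_spdk_ns`, whose output the trace redirects away), flush the initiator address, and delete the key file. A print-only sketch of that teardown; namespace deletion is an assumption about what the hidden `_remove_spdk_ns` helper does, while the other commands appear verbatim in the log:

```shell
#!/usr/bin/env bash
# emit_tcp_fini: print the teardown mirrored from nvmftestfini in the
# trace above. Printed rather than executed so it runs without root;
# the "ip netns delete" line is an assumed expansion of _remove_spdk_ns.
emit_tcp_fini() {
    local ns=$1 initiator_if=$2 key_file=$3
    cat <<EOF
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
ip netns delete $ns
ip -4 addr flush $initiator_if
rm -f $key_file
EOF
}
```

`emit_tcp_fini cvl_0_0_ns_spdk cvl_0_1 key.txt` corresponds to the sequence this fips test finishes with before the fuzz test below re-runs nvmftestinit from scratch.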
************************************ 00:18:33.262 END TEST nvmf_fips 00:18:33.262 ************************************ 00:18:33.262 03:19:07 -- nvmf/nvmf.sh@64 -- # '[' 1 -eq 1 ']' 00:18:33.262 03:19:07 -- nvmf/nvmf.sh@65 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:18:33.262 03:19:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:33.262 03:19:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:33.262 03:19:07 -- common/autotest_common.sh@10 -- # set +x 00:18:33.262 ************************************ 00:18:33.262 START TEST nvmf_fuzz 00:18:33.262 ************************************ 00:18:33.262 03:19:07 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:18:33.262 * Looking for test storage... 00:18:33.262 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:33.262 03:19:07 -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:33.262 03:19:07 -- nvmf/common.sh@7 -- # uname -s 00:18:33.262 03:19:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:33.262 03:19:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:33.262 03:19:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:33.262 03:19:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:33.262 03:19:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:33.262 03:19:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:33.262 03:19:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:33.262 03:19:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:33.262 03:19:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:33.262 03:19:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:33.262 03:19:07 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:33.262 03:19:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:33.262 03:19:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:33.262 03:19:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:33.262 03:19:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:33.262 03:19:07 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:33.262 03:19:07 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:33.262 03:19:07 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:33.262 03:19:07 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:33.262 03:19:07 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:33.262 03:19:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.262 03:19:07 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.263 03:19:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.263 03:19:07 -- paths/export.sh@5 -- # export PATH 00:18:33.263 03:19:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.263 03:19:07 -- nvmf/common.sh@47 -- # : 0 00:18:33.263 03:19:07 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:33.263 03:19:07 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:33.263 03:19:07 -- nvmf/common.sh@25 -- # 
'[' 0 -eq 1 ']' 00:18:33.263 03:19:07 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:33.263 03:19:07 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:33.263 03:19:07 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:33.263 03:19:07 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:33.263 03:19:07 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:33.263 03:19:07 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:18:33.263 03:19:07 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:33.263 03:19:07 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:33.263 03:19:07 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:33.263 03:19:07 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:33.263 03:19:07 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:33.263 03:19:07 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:33.263 03:19:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:33.263 03:19:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:33.263 03:19:07 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:33.263 03:19:07 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:33.263 03:19:07 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:33.263 03:19:07 -- common/autotest_common.sh@10 -- # set +x 00:18:35.165 03:19:09 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:35.165 03:19:09 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:35.165 03:19:09 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:35.165 03:19:09 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:35.166 03:19:09 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:35.166 03:19:09 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:35.166 03:19:09 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:35.166 03:19:09 -- nvmf/common.sh@295 -- # net_devs=() 00:18:35.166 03:19:09 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:35.166 03:19:09 -- nvmf/common.sh@296 -- # e810=() 
00:18:35.166 03:19:09 -- nvmf/common.sh@296 -- # local -ga e810 00:18:35.166 03:19:09 -- nvmf/common.sh@297 -- # x722=() 00:18:35.166 03:19:09 -- nvmf/common.sh@297 -- # local -ga x722 00:18:35.166 03:19:09 -- nvmf/common.sh@298 -- # mlx=() 00:18:35.166 03:19:09 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:35.166 03:19:09 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:35.166 03:19:09 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:35.166 03:19:09 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:35.166 03:19:09 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:35.166 03:19:09 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:35.166 03:19:09 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:35.166 03:19:09 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:35.166 03:19:09 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:35.166 03:19:09 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:35.166 03:19:09 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:35.166 03:19:09 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:35.166 03:19:09 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:35.166 03:19:09 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:35.166 03:19:09 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:35.166 03:19:09 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:35.166 03:19:09 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:35.166 03:19:09 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:35.166 03:19:09 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:35.166 03:19:09 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:35.166 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:35.166 03:19:09 -- nvmf/common.sh@342 -- 
# [[ ice == unknown ]] 00:18:35.166 03:19:09 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:35.166 03:19:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:35.166 03:19:09 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:35.166 03:19:09 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:35.166 03:19:09 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:35.166 03:19:09 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:35.166 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:35.166 03:19:09 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:35.166 03:19:09 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:35.166 03:19:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:35.166 03:19:09 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:35.166 03:19:09 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:35.166 03:19:09 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:35.166 03:19:09 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:35.166 03:19:09 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:35.166 03:19:09 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:35.166 03:19:09 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:35.166 03:19:09 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:35.166 03:19:09 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:35.166 03:19:09 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:35.166 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:35.166 03:19:09 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:35.166 03:19:09 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:35.166 03:19:09 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:35.166 03:19:09 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:35.166 03:19:09 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:18:35.166 03:19:09 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:35.166 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:35.166 03:19:09 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:35.166 03:19:09 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:35.166 03:19:09 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:35.166 03:19:09 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:35.166 03:19:09 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:18:35.166 03:19:09 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:18:35.166 03:19:09 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:35.166 03:19:09 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:35.166 03:19:09 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:35.166 03:19:09 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:35.166 03:19:09 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:35.166 03:19:09 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:35.166 03:19:09 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:35.166 03:19:09 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:35.166 03:19:09 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:35.166 03:19:09 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:35.166 03:19:09 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:35.166 03:19:09 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:35.166 03:19:09 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:35.166 03:19:09 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:35.166 03:19:09 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:35.166 03:19:09 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:35.166 03:19:09 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:35.166 
03:19:09 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:35.166 03:19:09 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:35.166 03:19:09 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:35.166 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:35.166 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:18:35.166 00:18:35.166 --- 10.0.0.2 ping statistics --- 00:18:35.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:35.166 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:18:35.166 03:19:09 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:35.166 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:35.166 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:18:35.166 00:18:35.166 --- 10.0.0.1 ping statistics --- 00:18:35.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:35.166 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:18:35.166 03:19:09 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:35.166 03:19:09 -- nvmf/common.sh@411 -- # return 0 00:18:35.166 03:19:09 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:35.166 03:19:09 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:35.166 03:19:09 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:35.166 03:19:09 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:35.166 03:19:09 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:35.166 03:19:09 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:35.166 03:19:09 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:35.166 03:19:09 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=1521819 00:18:35.166 03:19:09 -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:35.166 03:19:09 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id 
$NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:35.166 03:19:09 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 1521819 00:18:35.166 03:19:09 -- common/autotest_common.sh@817 -- # '[' -z 1521819 ']' 00:18:35.166 03:19:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:35.166 03:19:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:35.166 03:19:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:35.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:35.166 03:19:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:35.166 03:19:09 -- common/autotest_common.sh@10 -- # set +x 00:18:36.102 03:19:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:36.102 03:19:10 -- common/autotest_common.sh@850 -- # return 0 00:18:36.102 03:19:10 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:36.102 03:19:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:36.102 03:19:10 -- common/autotest_common.sh@10 -- # set +x 00:18:36.102 03:19:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:36.102 03:19:10 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:18:36.102 03:19:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:36.102 03:19:10 -- common/autotest_common.sh@10 -- # set +x 00:18:36.102 Malloc0 00:18:36.102 03:19:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:36.102 03:19:10 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:36.102 03:19:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:36.102 03:19:10 -- common/autotest_common.sh@10 -- # set +x 00:18:36.102 03:19:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:36.102 03:19:10 -- 
target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:36.102 03:19:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:36.102 03:19:10 -- common/autotest_common.sh@10 -- # set +x 00:18:36.102 03:19:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:36.102 03:19:10 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:36.102 03:19:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:36.102 03:19:10 -- common/autotest_common.sh@10 -- # set +x 00:18:36.102 03:19:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:36.102 03:19:10 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:18:36.102 03:19:10 -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:19:08.267 Fuzzing completed. Shutting down the fuzz application 00:19:08.267 00:19:08.267 Dumping successful admin opcodes: 00:19:08.267 8, 9, 10, 24, 00:19:08.267 Dumping successful io opcodes: 00:19:08.267 0, 9, 00:19:08.267 NS: 0x200003aeff00 I/O qp, Total commands completed: 439739, total successful commands: 2564, random_seed: 2418471680 00:19:08.267 NS: 0x200003aeff00 admin qp, Total commands completed: 54560, total successful commands: 439, random_seed: 4155592960 00:19:08.267 03:19:41 -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:19:08.267 Fuzzing completed. 
Shutting down the fuzz application 00:19:08.267 00:19:08.267 Dumping successful admin opcodes: 00:19:08.267 24, 00:19:08.268 Dumping successful io opcodes: 00:19:08.268 00:19:08.268 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 1643657282 00:19:08.268 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 1643808122 00:19:08.268 03:19:42 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:08.268 03:19:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:08.268 03:19:42 -- common/autotest_common.sh@10 -- # set +x 00:19:08.268 03:19:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:08.268 03:19:42 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:19:08.268 03:19:42 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:19:08.268 03:19:42 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:08.268 03:19:42 -- nvmf/common.sh@117 -- # sync 00:19:08.268 03:19:42 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:08.268 03:19:42 -- nvmf/common.sh@120 -- # set +e 00:19:08.268 03:19:42 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:08.268 03:19:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:08.268 rmmod nvme_tcp 00:19:08.268 rmmod nvme_fabrics 00:19:08.268 rmmod nvme_keyring 00:19:08.268 03:19:42 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:08.268 03:19:42 -- nvmf/common.sh@124 -- # set -e 00:19:08.268 03:19:42 -- nvmf/common.sh@125 -- # return 0 00:19:08.268 03:19:42 -- nvmf/common.sh@478 -- # '[' -n 1521819 ']' 00:19:08.268 03:19:42 -- nvmf/common.sh@479 -- # killprocess 1521819 00:19:08.268 03:19:42 -- common/autotest_common.sh@936 -- # '[' -z 1521819 ']' 00:19:08.268 03:19:42 -- common/autotest_common.sh@940 -- # kill -0 1521819 00:19:08.268 03:19:42 -- common/autotest_common.sh@941 -- # uname 00:19:08.268 03:19:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 
00:19:08.268 03:19:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1521819 00:19:08.268 03:19:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:08.268 03:19:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:08.268 03:19:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1521819' 00:19:08.268 killing process with pid 1521819 00:19:08.268 03:19:42 -- common/autotest_common.sh@955 -- # kill 1521819 00:19:08.268 03:19:42 -- common/autotest_common.sh@960 -- # wait 1521819 00:19:08.528 03:19:42 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:08.528 03:19:42 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:08.528 03:19:42 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:08.528 03:19:42 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:08.528 03:19:42 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:08.528 03:19:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:08.528 03:19:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:08.528 03:19:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:10.425 03:19:44 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:10.425 03:19:44 -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:19:10.425 00:19:10.425 real 0m37.572s 00:19:10.425 user 0m51.568s 00:19:10.425 sys 0m15.314s 00:19:10.425 03:19:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:10.425 03:19:44 -- common/autotest_common.sh@10 -- # set +x 00:19:10.425 ************************************ 00:19:10.425 END TEST nvmf_fuzz 00:19:10.425 ************************************ 00:19:10.425 03:19:44 -- nvmf/nvmf.sh@66 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh 
--transport=tcp 00:19:10.425 03:19:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:10.425 03:19:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:10.425 03:19:44 -- common/autotest_common.sh@10 -- # set +x 00:19:10.683 ************************************ 00:19:10.683 START TEST nvmf_multiconnection 00:19:10.683 ************************************ 00:19:10.683 03:19:44 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:19:10.683 * Looking for test storage... 00:19:10.683 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:10.683 03:19:45 -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:10.683 03:19:45 -- nvmf/common.sh@7 -- # uname -s 00:19:10.683 03:19:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:10.683 03:19:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:10.683 03:19:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:10.683 03:19:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:10.683 03:19:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:10.683 03:19:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:10.683 03:19:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:10.683 03:19:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:10.683 03:19:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:10.683 03:19:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:10.683 03:19:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:10.683 03:19:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:10.683 03:19:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:10.683 03:19:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme 
connect' 00:19:10.683 03:19:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:10.683 03:19:45 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:10.683 03:19:45 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:10.683 03:19:45 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:10.683 03:19:45 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:10.683 03:19:45 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:10.683 03:19:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.683 03:19:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.683 03:19:45 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.683 03:19:45 -- paths/export.sh@5 -- # export PATH 00:19:10.683 03:19:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.683 03:19:45 -- nvmf/common.sh@47 -- # : 0 00:19:10.683 03:19:45 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:10.683 03:19:45 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:10.683 03:19:45 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:10.683 03:19:45 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:10.683 03:19:45 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:10.683 03:19:45 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:10.683 03:19:45 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:10.683 03:19:45 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:10.683 03:19:45 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:10.683 03:19:45 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:10.683 03:19:45 -- 
target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:19:10.683 03:19:45 -- target/multiconnection.sh@16 -- # nvmftestinit 00:19:10.683 03:19:45 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:10.683 03:19:45 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:10.683 03:19:45 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:10.683 03:19:45 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:10.683 03:19:45 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:10.683 03:19:45 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:10.683 03:19:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:10.683 03:19:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:10.683 03:19:45 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:10.683 03:19:45 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:10.683 03:19:45 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:10.683 03:19:45 -- common/autotest_common.sh@10 -- # set +x 00:19:12.585 03:19:46 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:12.585 03:19:46 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:12.585 03:19:46 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:12.585 03:19:46 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:12.585 03:19:46 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:12.585 03:19:46 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:12.585 03:19:46 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:12.585 03:19:46 -- nvmf/common.sh@295 -- # net_devs=() 00:19:12.585 03:19:46 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:12.585 03:19:46 -- nvmf/common.sh@296 -- # e810=() 00:19:12.585 03:19:46 -- nvmf/common.sh@296 -- # local -ga e810 00:19:12.585 03:19:46 -- nvmf/common.sh@297 -- # x722=() 00:19:12.585 03:19:46 -- nvmf/common.sh@297 -- # local -ga x722 00:19:12.585 03:19:46 -- nvmf/common.sh@298 -- # mlx=() 00:19:12.585 03:19:46 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:12.585 
03:19:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:12.585 03:19:46 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:12.585 03:19:46 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:12.585 03:19:46 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:12.585 03:19:46 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:12.585 03:19:46 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:12.585 03:19:46 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:12.585 03:19:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:12.585 03:19:46 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:12.585 03:19:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:12.585 03:19:46 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:12.585 03:19:46 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:12.585 03:19:46 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:12.585 03:19:46 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:12.585 03:19:46 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:12.585 03:19:46 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:12.585 03:19:46 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:12.585 03:19:46 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:12.585 03:19:46 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:12.585 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:12.585 03:19:46 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:12.585 03:19:46 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:12.585 03:19:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:12.585 03:19:46 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:12.585 03:19:46 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
00:19:12.585 03:19:46 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:12.585 03:19:46 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:12.585 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:12.585 03:19:46 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:12.585 03:19:46 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:12.585 03:19:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:12.585 03:19:46 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:12.585 03:19:46 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:12.585 03:19:46 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:12.585 03:19:46 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:12.585 03:19:46 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:12.585 03:19:46 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:12.585 03:19:46 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:12.585 03:19:46 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:12.585 03:19:46 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:12.585 03:19:46 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:12.585 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:12.585 03:19:46 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:12.585 03:19:46 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:12.585 03:19:46 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:12.585 03:19:46 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:12.585 03:19:46 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:12.585 03:19:46 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:12.585 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:12.585 03:19:46 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:12.585 03:19:46 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:12.585 
03:19:46 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:12.585 03:19:46 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:12.585 03:19:46 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:19:12.585 03:19:46 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:19:12.585 03:19:46 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:12.585 03:19:46 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:12.585 03:19:46 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:12.585 03:19:46 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:12.585 03:19:46 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:12.585 03:19:46 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:12.585 03:19:46 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:12.585 03:19:46 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:12.585 03:19:46 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:12.585 03:19:46 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:12.585 03:19:46 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:12.585 03:19:46 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:12.585 03:19:46 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:12.585 03:19:46 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:12.585 03:19:46 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:12.585 03:19:46 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:12.585 03:19:46 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:12.585 03:19:46 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:12.585 03:19:46 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:12.585 03:19:47 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:12.585 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:12.585 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:19:12.585 00:19:12.585 --- 10.0.0.2 ping statistics --- 00:19:12.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:12.585 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:19:12.585 03:19:47 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:12.585 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:12.585 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:19:12.585 00:19:12.585 --- 10.0.0.1 ping statistics --- 00:19:12.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:12.585 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:19:12.585 03:19:47 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:12.585 03:19:47 -- nvmf/common.sh@411 -- # return 0 00:19:12.585 03:19:47 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:12.585 03:19:47 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:12.585 03:19:47 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:12.585 03:19:47 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:12.585 03:19:47 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:12.585 03:19:47 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:12.585 03:19:47 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:12.585 03:19:47 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:19:12.585 03:19:47 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:12.585 03:19:47 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:12.585 03:19:47 -- common/autotest_common.sh@10 -- # set +x 00:19:12.585 03:19:47 -- nvmf/common.sh@470 -- # nvmfpid=1527575 00:19:12.585 03:19:47 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:12.585 03:19:47 -- nvmf/common.sh@471 -- # waitforlisten 1527575 00:19:12.585 03:19:47 -- 
common/autotest_common.sh@817 -- # '[' -z 1527575 ']' 00:19:12.585 03:19:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:12.585 03:19:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:12.585 03:19:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:12.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:12.585 03:19:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:12.585 03:19:47 -- common/autotest_common.sh@10 -- # set +x 00:19:12.844 [2024-04-25 03:19:47.088349] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:19:12.844 [2024-04-25 03:19:47.088438] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:12.844 EAL: No free 2048 kB hugepages reported on node 1 00:19:12.844 [2024-04-25 03:19:47.151698] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:12.844 [2024-04-25 03:19:47.257583] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:12.844 [2024-04-25 03:19:47.257657] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:12.844 [2024-04-25 03:19:47.257682] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:12.844 [2024-04-25 03:19:47.257693] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:12.844 [2024-04-25 03:19:47.257703] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:12.844 [2024-04-25 03:19:47.257767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:12.844 [2024-04-25 03:19:47.257827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:12.844 [2024-04-25 03:19:47.257894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:12.844 [2024-04-25 03:19:47.257897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:13.102 03:19:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:13.102 03:19:47 -- common/autotest_common.sh@850 -- # return 0 00:19:13.102 03:19:47 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:13.102 03:19:47 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:13.102 03:19:47 -- common/autotest_common.sh@10 -- # set +x 00:19:13.102 03:19:47 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:13.102 03:19:47 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:13.102 03:19:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.102 03:19:47 -- common/autotest_common.sh@10 -- # set +x 00:19:13.102 [2024-04-25 03:19:47.408444] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:13.102 03:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.102 03:19:47 -- target/multiconnection.sh@21 -- # seq 1 11 00:19:13.102 03:19:47 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:13.102 03:19:47 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:13.102 03:19:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.102 03:19:47 -- common/autotest_common.sh@10 -- # set +x 00:19:13.102 Malloc1 00:19:13.102 03:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.102 03:19:47 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:19:13.102 03:19:47 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.103 03:19:47 -- common/autotest_common.sh@10 -- # set +x 00:19:13.103 03:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.103 03:19:47 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:13.103 03:19:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.103 03:19:47 -- common/autotest_common.sh@10 -- # set +x 00:19:13.103 03:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.103 03:19:47 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:13.103 03:19:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.103 03:19:47 -- common/autotest_common.sh@10 -- # set +x 00:19:13.103 [2024-04-25 03:19:47.464167] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:13.103 03:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.103 03:19:47 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:13.103 03:19:47 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:19:13.103 03:19:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.103 03:19:47 -- common/autotest_common.sh@10 -- # set +x 00:19:13.103 Malloc2 00:19:13.103 03:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.103 03:19:47 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:19:13.103 03:19:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.103 03:19:47 -- common/autotest_common.sh@10 -- # set +x 00:19:13.103 03:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.103 03:19:47 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:19:13.103 03:19:47 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:19:13.103 03:19:47 -- common/autotest_common.sh@10 -- # set +x 00:19:13.103 03:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.103 03:19:47 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:19:13.103 03:19:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.103 03:19:47 -- common/autotest_common.sh@10 -- # set +x 00:19:13.103 03:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.103 03:19:47 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:13.103 03:19:47 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:19:13.103 03:19:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.103 03:19:47 -- common/autotest_common.sh@10 -- # set +x 00:19:13.103 Malloc3 00:19:13.103 03:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.103 03:19:47 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:19:13.103 03:19:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.103 03:19:47 -- common/autotest_common.sh@10 -- # set +x 00:19:13.103 03:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.103 03:19:47 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:19:13.103 03:19:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.103 03:19:47 -- common/autotest_common.sh@10 -- # set +x 00:19:13.103 03:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.103 03:19:47 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:19:13.103 03:19:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.103 03:19:47 -- common/autotest_common.sh@10 -- # set +x 00:19:13.103 03:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.103 03:19:47 -- 
target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:13.103 03:19:47 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:19:13.103 03:19:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.103 03:19:47 -- common/autotest_common.sh@10 -- # set +x 00:19:13.103 Malloc4 00:19:13.103 03:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.103 03:19:47 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:19:13.103 03:19:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.103 03:19:47 -- common/autotest_common.sh@10 -- # set +x 00:19:13.103 03:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.103 03:19:47 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:19:13.103 03:19:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.103 03:19:47 -- common/autotest_common.sh@10 -- # set +x 00:19:13.361 03:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.361 03:19:47 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:19:13.361 03:19:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.361 03:19:47 -- common/autotest_common.sh@10 -- # set +x 00:19:13.361 03:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.361 03:19:47 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:13.361 03:19:47 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:19:13.361 03:19:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.361 03:19:47 -- common/autotest_common.sh@10 -- # set +x 00:19:13.361 Malloc5 00:19:13.361 03:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.361 03:19:47 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s 
SPDK5 00:19:13.361 03:19:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.361 03:19:47 -- common/autotest_common.sh@10 -- # set +x 00:19:13.361 03:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.361 03:19:47 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:19:13.361 03:19:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.361 03:19:47 -- common/autotest_common.sh@10 -- # set +x 00:19:13.361 03:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.361 03:19:47 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:19:13.361 03:19:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.361 03:19:47 -- common/autotest_common.sh@10 -- # set +x 00:19:13.361 03:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.361 03:19:47 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:13.361 03:19:47 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:19:13.361 03:19:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.361 03:19:47 -- common/autotest_common.sh@10 -- # set +x 00:19:13.361 Malloc6 00:19:13.361 03:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.361 03:19:47 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:19:13.361 03:19:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.361 03:19:47 -- common/autotest_common.sh@10 -- # set +x 00:19:13.361 03:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.361 03:19:47 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:19:13.361 03:19:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.361 03:19:47 -- common/autotest_common.sh@10 -- # set +x 00:19:13.361 03:19:47 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.361 03:19:47 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:19:13.361 03:19:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.361 03:19:47 -- common/autotest_common.sh@10 -- # set +x 00:19:13.361 03:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.361 03:19:47 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:13.361 03:19:47 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:19:13.361 03:19:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.361 03:19:47 -- common/autotest_common.sh@10 -- # set +x 00:19:13.361 Malloc7 00:19:13.361 03:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.361 03:19:47 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:19:13.361 03:19:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.361 03:19:47 -- common/autotest_common.sh@10 -- # set +x 00:19:13.361 03:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.361 03:19:47 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:19:13.361 03:19:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.361 03:19:47 -- common/autotest_common.sh@10 -- # set +x 00:19:13.361 03:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.362 03:19:47 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:19:13.362 03:19:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.362 03:19:47 -- common/autotest_common.sh@10 -- # set +x 00:19:13.362 03:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.362 03:19:47 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:13.362 03:19:47 -- 
target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:19:13.362 03:19:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.362 03:19:47 -- common/autotest_common.sh@10 -- # set +x 00:19:13.362 Malloc8 00:19:13.362 03:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.362 03:19:47 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:19:13.362 03:19:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.362 03:19:47 -- common/autotest_common.sh@10 -- # set +x 00:19:13.362 03:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.362 03:19:47 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:19:13.362 03:19:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.362 03:19:47 -- common/autotest_common.sh@10 -- # set +x 00:19:13.362 03:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.362 03:19:47 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:19:13.362 03:19:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.362 03:19:47 -- common/autotest_common.sh@10 -- # set +x 00:19:13.362 03:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.362 03:19:47 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:13.362 03:19:47 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:19:13.362 03:19:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.362 03:19:47 -- common/autotest_common.sh@10 -- # set +x 00:19:13.362 Malloc9 00:19:13.362 03:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.362 03:19:47 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:19:13.362 03:19:47 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:19:13.362 03:19:47 -- common/autotest_common.sh@10 -- # set +x 00:19:13.362 03:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.362 03:19:47 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:19:13.362 03:19:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.362 03:19:47 -- common/autotest_common.sh@10 -- # set +x 00:19:13.620 03:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.620 03:19:47 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:19:13.620 03:19:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.620 03:19:47 -- common/autotest_common.sh@10 -- # set +x 00:19:13.620 03:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.620 03:19:47 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:13.620 03:19:47 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:19:13.620 03:19:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.620 03:19:47 -- common/autotest_common.sh@10 -- # set +x 00:19:13.620 Malloc10 00:19:13.620 03:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.620 03:19:47 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:19:13.620 03:19:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.620 03:19:47 -- common/autotest_common.sh@10 -- # set +x 00:19:13.620 03:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.620 03:19:47 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:19:13.620 03:19:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.620 03:19:47 -- common/autotest_common.sh@10 -- # set +x 00:19:13.620 03:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.620 03:19:47 -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:19:13.620 03:19:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.620 03:19:47 -- common/autotest_common.sh@10 -- # set +x 00:19:13.620 03:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.620 03:19:47 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:13.620 03:19:47 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:19:13.620 03:19:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.620 03:19:47 -- common/autotest_common.sh@10 -- # set +x 00:19:13.620 Malloc11 00:19:13.620 03:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.620 03:19:47 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:19:13.620 03:19:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.620 03:19:47 -- common/autotest_common.sh@10 -- # set +x 00:19:13.620 03:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.620 03:19:47 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:19:13.620 03:19:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.620 03:19:47 -- common/autotest_common.sh@10 -- # set +x 00:19:13.620 03:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.620 03:19:47 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:19:13.620 03:19:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.620 03:19:47 -- common/autotest_common.sh@10 -- # set +x 00:19:13.620 03:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.620 03:19:47 -- target/multiconnection.sh@28 -- # seq 1 11 00:19:13.620 03:19:47 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:13.620 03:19:47 
-- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:14.185 03:19:48 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:19:14.185 03:19:48 -- common/autotest_common.sh@1184 -- # local i=0 00:19:14.185 03:19:48 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:19:14.185 03:19:48 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:19:14.185 03:19:48 -- common/autotest_common.sh@1191 -- # sleep 2 00:19:16.086 03:19:50 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:19:16.086 03:19:50 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:19:16.086 03:19:50 -- common/autotest_common.sh@1193 -- # grep -c SPDK1 00:19:16.343 03:19:50 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:19:16.343 03:19:50 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:19:16.343 03:19:50 -- common/autotest_common.sh@1194 -- # return 0 00:19:16.343 03:19:50 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:16.343 03:19:50 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:19:16.907 03:19:51 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:19:16.907 03:19:51 -- common/autotest_common.sh@1184 -- # local i=0 00:19:16.907 03:19:51 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:19:16.907 03:19:51 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:19:16.907 03:19:51 -- common/autotest_common.sh@1191 -- # sleep 2 00:19:18.802 03:19:53 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:19:18.802 03:19:53 -- common/autotest_common.sh@1193 -- # lsblk 
-l -o NAME,SERIAL 00:19:18.802 03:19:53 -- common/autotest_common.sh@1193 -- # grep -c SPDK2 00:19:18.802 03:19:53 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:19:18.802 03:19:53 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:19:18.802 03:19:53 -- common/autotest_common.sh@1194 -- # return 0 00:19:18.802 03:19:53 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:18.802 03:19:53 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:19:19.734 03:19:53 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:19:19.734 03:19:53 -- common/autotest_common.sh@1184 -- # local i=0 00:19:19.734 03:19:53 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:19:19.734 03:19:53 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:19:19.734 03:19:53 -- common/autotest_common.sh@1191 -- # sleep 2 00:19:21.630 03:19:55 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:19:21.630 03:19:55 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:19:21.630 03:19:55 -- common/autotest_common.sh@1193 -- # grep -c SPDK3 00:19:21.630 03:19:56 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:19:21.630 03:19:56 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:19:21.630 03:19:56 -- common/autotest_common.sh@1194 -- # return 0 00:19:21.630 03:19:56 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:21.630 03:19:56 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:19:22.250 03:19:56 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 
00:19:22.250 03:19:56 -- common/autotest_common.sh@1184 -- # local i=0 00:19:22.250 03:19:56 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:19:22.250 03:19:56 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:19:22.250 03:19:56 -- common/autotest_common.sh@1191 -- # sleep 2 00:19:24.151 03:19:58 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:19:24.151 03:19:58 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:19:24.408 03:19:58 -- common/autotest_common.sh@1193 -- # grep -c SPDK4 00:19:24.408 03:19:58 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:19:24.408 03:19:58 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:19:24.408 03:19:58 -- common/autotest_common.sh@1194 -- # return 0 00:19:24.408 03:19:58 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:24.408 03:19:58 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:19:24.973 03:19:59 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:19:24.973 03:19:59 -- common/autotest_common.sh@1184 -- # local i=0 00:19:24.973 03:19:59 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:19:24.973 03:19:59 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:19:24.973 03:19:59 -- common/autotest_common.sh@1191 -- # sleep 2 00:19:27.500 03:20:01 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:19:27.500 03:20:01 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:19:27.500 03:20:01 -- common/autotest_common.sh@1193 -- # grep -c SPDK5 00:19:27.500 03:20:01 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:19:27.500 03:20:01 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:19:27.500 03:20:01 -- 
common/autotest_common.sh@1194 -- # return 0 00:19:27.500 03:20:01 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:27.500 03:20:01 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:19:28.066 03:20:02 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:19:28.066 03:20:02 -- common/autotest_common.sh@1184 -- # local i=0 00:19:28.066 03:20:02 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:19:28.066 03:20:02 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:19:28.066 03:20:02 -- common/autotest_common.sh@1191 -- # sleep 2 00:19:29.966 03:20:04 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:19:29.966 03:20:04 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:19:29.966 03:20:04 -- common/autotest_common.sh@1193 -- # grep -c SPDK6 00:19:29.966 03:20:04 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:19:29.966 03:20:04 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:19:29.966 03:20:04 -- common/autotest_common.sh@1194 -- # return 0 00:19:29.966 03:20:04 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:29.966 03:20:04 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:19:30.532 03:20:05 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:19:30.532 03:20:05 -- common/autotest_common.sh@1184 -- # local i=0 00:19:30.532 03:20:05 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:19:30.532 03:20:05 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:19:30.532 03:20:05 -- common/autotest_common.sh@1191 
-- # sleep 2 00:19:33.060 03:20:07 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:19:33.060 03:20:07 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:19:33.060 03:20:07 -- common/autotest_common.sh@1193 -- # grep -c SPDK7 00:19:33.060 03:20:07 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:19:33.060 03:20:07 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:19:33.060 03:20:07 -- common/autotest_common.sh@1194 -- # return 0 00:19:33.060 03:20:07 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:33.060 03:20:07 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:19:33.318 03:20:07 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:19:33.318 03:20:07 -- common/autotest_common.sh@1184 -- # local i=0 00:19:33.318 03:20:07 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:19:33.318 03:20:07 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:19:33.318 03:20:07 -- common/autotest_common.sh@1191 -- # sleep 2 00:19:35.846 03:20:09 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:19:35.846 03:20:09 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:19:35.846 03:20:09 -- common/autotest_common.sh@1193 -- # grep -c SPDK8 00:19:35.846 03:20:09 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:19:35.846 03:20:09 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:19:35.846 03:20:09 -- common/autotest_common.sh@1194 -- # return 0 00:19:35.846 03:20:09 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:35.846 03:20:09 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:19:36.103 03:20:10 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:19:36.103 03:20:10 -- common/autotest_common.sh@1184 -- # local i=0 00:19:36.103 03:20:10 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:19:36.103 03:20:10 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:19:36.103 03:20:10 -- common/autotest_common.sh@1191 -- # sleep 2 00:19:38.630 03:20:12 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:19:38.630 03:20:12 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:19:38.630 03:20:12 -- common/autotest_common.sh@1193 -- # grep -c SPDK9 00:19:38.630 03:20:12 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:19:38.630 03:20:12 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:19:38.630 03:20:12 -- common/autotest_common.sh@1194 -- # return 0 00:19:38.630 03:20:12 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:38.630 03:20:12 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:19:38.888 03:20:13 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:19:38.888 03:20:13 -- common/autotest_common.sh@1184 -- # local i=0 00:19:38.888 03:20:13 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:19:38.888 03:20:13 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:19:38.888 03:20:13 -- common/autotest_common.sh@1191 -- # sleep 2 00:19:41.471 03:20:15 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:19:41.471 03:20:15 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:19:41.471 03:20:15 -- common/autotest_common.sh@1193 -- # grep -c SPDK10 00:19:41.471 03:20:15 -- 
common/autotest_common.sh@1193 -- # nvme_devices=1 00:19:41.471 03:20:15 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:19:41.471 03:20:15 -- common/autotest_common.sh@1194 -- # return 0 00:19:41.471 03:20:15 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:41.471 03:20:15 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:19:42.038 03:20:16 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:19:42.038 03:20:16 -- common/autotest_common.sh@1184 -- # local i=0 00:19:42.038 03:20:16 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:19:42.038 03:20:16 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:19:42.038 03:20:16 -- common/autotest_common.sh@1191 -- # sleep 2 00:19:43.936 03:20:18 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:19:43.936 03:20:18 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:19:43.936 03:20:18 -- common/autotest_common.sh@1193 -- # grep -c SPDK11 00:19:43.936 03:20:18 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:19:43.936 03:20:18 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:19:43.936 03:20:18 -- common/autotest_common.sh@1194 -- # return 0 00:19:43.936 03:20:18 -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:19:43.936 [global] 00:19:43.936 thread=1 00:19:43.936 invalidate=1 00:19:43.936 rw=read 00:19:43.936 time_based=1 00:19:43.936 runtime=10 00:19:43.936 ioengine=libaio 00:19:43.936 direct=1 00:19:43.936 bs=262144 00:19:43.936 iodepth=64 00:19:43.936 norandommap=1 00:19:43.936 numjobs=1 00:19:43.936 00:19:43.936 [job0] 00:19:43.936 filename=/dev/nvme0n1 00:19:43.936 [job1] 
00:19:43.936 filename=/dev/nvme10n1 00:19:43.936 [job2] 00:19:43.936 filename=/dev/nvme1n1 00:19:43.936 [job3] 00:19:43.936 filename=/dev/nvme2n1 00:19:43.936 [job4] 00:19:43.936 filename=/dev/nvme3n1 00:19:43.936 [job5] 00:19:43.936 filename=/dev/nvme4n1 00:19:43.936 [job6] 00:19:43.936 filename=/dev/nvme5n1 00:19:43.936 [job7] 00:19:43.936 filename=/dev/nvme6n1 00:19:43.936 [job8] 00:19:43.936 filename=/dev/nvme7n1 00:19:43.936 [job9] 00:19:43.936 filename=/dev/nvme8n1 00:19:43.936 [job10] 00:19:43.936 filename=/dev/nvme9n1 00:19:43.936 Could not set queue depth (nvme0n1) 00:19:43.936 Could not set queue depth (nvme10n1) 00:19:43.936 Could not set queue depth (nvme1n1) 00:19:43.936 Could not set queue depth (nvme2n1) 00:19:43.936 Could not set queue depth (nvme3n1) 00:19:43.936 Could not set queue depth (nvme4n1) 00:19:43.936 Could not set queue depth (nvme5n1) 00:19:43.936 Could not set queue depth (nvme6n1) 00:19:43.936 Could not set queue depth (nvme7n1) 00:19:43.936 Could not set queue depth (nvme8n1) 00:19:43.936 Could not set queue depth (nvme9n1) 00:19:44.194 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:44.194 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:44.194 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:44.194 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:44.194 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:44.194 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:44.194 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:44.194 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 
256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:44.194 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:44.194 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:44.194 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:44.194 fio-3.35 00:19:44.194 Starting 11 threads 00:19:56.405 00:19:56.405 job0: (groupid=0, jobs=1): err= 0: pid=1531838: Thu Apr 25 03:20:29 2024 00:19:56.405 read: IOPS=895, BW=224MiB/s (235MB/s)(2242MiB/10021msec) 00:19:56.405 slat (usec): min=14, max=196203, avg=1022.79, stdev=3995.75 00:19:56.405 clat (msec): min=17, max=336, avg=70.44, stdev=44.33 00:19:56.405 lat (msec): min=22, max=336, avg=71.47, stdev=44.84 00:19:56.406 clat percentiles (msec): 00:19:56.406 | 1.00th=[ 35], 5.00th=[ 39], 10.00th=[ 40], 20.00th=[ 42], 00:19:56.406 | 30.00th=[ 44], 40.00th=[ 48], 50.00th=[ 54], 60.00th=[ 60], 00:19:56.406 | 70.00th=[ 68], 80.00th=[ 93], 90.00th=[ 132], 95.00th=[ 157], 00:19:56.406 | 99.00th=[ 257], 99.50th=[ 264], 99.90th=[ 292], 99.95th=[ 296], 00:19:56.406 | 99.99th=[ 338] 00:19:56.406 bw ( KiB/s): min=58880, max=397312, per=13.74%, avg=227963.60, stdev=109870.45, samples=20 00:19:56.406 iops : min= 230, max= 1552, avg=890.35, stdev=429.29, samples=20 00:19:56.406 lat (msec) : 20=0.01%, 50=43.98%, 100=37.72%, 250=17.01%, 500=1.27% 00:19:56.406 cpu : usr=0.56%, sys=2.97%, ctx=2021, majf=0, minf=4097 00:19:56.406 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:19:56.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:56.406 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:56.406 issued rwts: total=8969,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:56.406 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:56.406 job1: (groupid=0, 
jobs=1): err= 0: pid=1531839: Thu Apr 25 03:20:29 2024 00:19:56.406 read: IOPS=579, BW=145MiB/s (152MB/s)(1457MiB/10065msec) 00:19:56.406 slat (usec): min=9, max=738701, avg=1318.66, stdev=11671.63 00:19:56.406 clat (msec): min=2, max=1224, avg=109.10, stdev=126.69 00:19:56.406 lat (msec): min=2, max=1367, avg=110.42, stdev=128.04 00:19:56.406 clat percentiles (msec): 00:19:56.406 | 1.00th=[ 17], 5.00th=[ 42], 10.00th=[ 46], 20.00th=[ 55], 00:19:56.406 | 30.00th=[ 62], 40.00th=[ 73], 50.00th=[ 88], 60.00th=[ 100], 00:19:56.406 | 70.00th=[ 112], 80.00th=[ 125], 90.00th=[ 150], 95.00th=[ 234], 00:19:56.406 | 99.00th=[ 1099], 99.50th=[ 1167], 99.90th=[ 1200], 99.95th=[ 1217], 00:19:56.406 | 99.99th=[ 1217] 00:19:56.406 bw ( KiB/s): min=48640, max=353280, per=9.36%, avg=155329.68, stdev=74303.75, samples=19 00:19:56.406 iops : min= 190, max= 1380, avg=606.68, stdev=290.26, samples=19 00:19:56.406 lat (msec) : 4=0.24%, 10=0.34%, 20=0.79%, 50=15.37%, 100=43.66% 00:19:56.406 lat (msec) : 250=35.07%, 500=2.78%, 750=0.67%, 1000=0.07%, 2000=1.01% 00:19:56.406 cpu : usr=0.32%, sys=1.92%, ctx=1502, majf=0, minf=4097 00:19:56.406 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:19:56.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:56.406 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:56.406 issued rwts: total=5829,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:56.406 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:56.406 job2: (groupid=0, jobs=1): err= 0: pid=1531840: Thu Apr 25 03:20:29 2024 00:19:56.406 read: IOPS=542, BW=136MiB/s (142MB/s)(1377MiB/10141msec) 00:19:56.406 slat (usec): min=9, max=682046, avg=1466.21, stdev=12993.81 00:19:56.406 clat (msec): min=2, max=1064, avg=116.31, stdev=111.30 00:19:56.406 lat (msec): min=4, max=1064, avg=117.77, stdev=112.71 00:19:56.406 clat percentiles (msec): 00:19:56.406 | 1.00th=[ 13], 5.00th=[ 32], 10.00th=[ 50], 20.00th=[ 63], 
00:19:56.406 | 30.00th=[ 72], 40.00th=[ 82], 50.00th=[ 96], 60.00th=[ 107], 00:19:56.406 | 70.00th=[ 116], 80.00th=[ 129], 90.00th=[ 176], 95.00th=[ 239], 00:19:56.406 | 99.00th=[ 751], 99.50th=[ 785], 99.90th=[ 810], 99.95th=[ 1062], 00:19:56.406 | 99.99th=[ 1062] 00:19:56.406 bw ( KiB/s): min=31232, max=282624, per=8.84%, avg=146606.74, stdev=64524.79, samples=19 00:19:56.406 iops : min= 122, max= 1104, avg=572.58, stdev=252.02, samples=19 00:19:56.406 lat (msec) : 4=0.02%, 10=0.38%, 20=2.89%, 50=7.56%, 100=42.83% 00:19:56.406 lat (msec) : 250=41.68%, 500=2.22%, 750=1.25%, 1000=1.13%, 2000=0.05% 00:19:56.406 cpu : usr=0.31%, sys=1.77%, ctx=1465, majf=0, minf=4097 00:19:56.406 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:19:56.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:56.406 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:56.406 issued rwts: total=5506,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:56.406 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:56.406 job3: (groupid=0, jobs=1): err= 0: pid=1531841: Thu Apr 25 03:20:29 2024 00:19:56.406 read: IOPS=464, BW=116MiB/s (122MB/s)(1178MiB/10136msec) 00:19:56.406 slat (usec): min=10, max=228130, avg=1878.82, stdev=9242.69 00:19:56.406 clat (usec): min=1149, max=805936, avg=135695.40, stdev=145388.86 00:19:56.406 lat (usec): min=1172, max=884410, avg=137574.22, stdev=147551.93 00:19:56.406 clat percentiles (msec): 00:19:56.406 | 1.00th=[ 10], 5.00th=[ 23], 10.00th=[ 44], 20.00th=[ 62], 00:19:56.406 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 81], 60.00th=[ 97], 00:19:56.406 | 70.00th=[ 129], 80.00th=[ 157], 90.00th=[ 292], 95.00th=[ 550], 00:19:56.406 | 99.00th=[ 684], 99.50th=[ 743], 99.90th=[ 776], 99.95th=[ 793], 00:19:56.406 | 99.99th=[ 810] 00:19:56.406 bw ( KiB/s): min= 9216, max=256000, per=7.17%, avg=118956.00, stdev=82379.13, samples=20 00:19:56.406 iops : min= 36, max= 1000, avg=464.60, 
stdev=321.80, samples=20 00:19:56.406 lat (msec) : 2=0.11%, 4=0.32%, 10=0.74%, 20=2.99%, 50=9.78% 00:19:56.406 lat (msec) : 100=46.75%, 250=27.59%, 500=5.58%, 750=5.88%, 1000=0.25% 00:19:56.406 cpu : usr=0.34%, sys=1.52%, ctx=1189, majf=0, minf=4097 00:19:56.406 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:19:56.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:56.406 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:56.406 issued rwts: total=4712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:56.406 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:56.406 job4: (groupid=0, jobs=1): err= 0: pid=1531842: Thu Apr 25 03:20:29 2024 00:19:56.406 read: IOPS=891, BW=223MiB/s (234MB/s)(2245MiB/10071msec) 00:19:56.406 slat (usec): min=10, max=87714, avg=967.92, stdev=2984.08 00:19:56.406 clat (msec): min=4, max=233, avg=70.75, stdev=34.21 00:19:56.406 lat (msec): min=4, max=233, avg=71.71, stdev=34.61 00:19:56.406 clat percentiles (msec): 00:19:56.406 | 1.00th=[ 17], 5.00th=[ 33], 10.00th=[ 36], 20.00th=[ 43], 00:19:56.406 | 30.00th=[ 48], 40.00th=[ 56], 50.00th=[ 62], 60.00th=[ 68], 00:19:56.406 | 70.00th=[ 81], 80.00th=[ 105], 90.00th=[ 122], 95.00th=[ 134], 00:19:56.406 | 99.00th=[ 167], 99.50th=[ 180], 99.90th=[ 228], 99.95th=[ 232], 00:19:56.406 | 99.99th=[ 234] 00:19:56.406 bw ( KiB/s): min=109568, max=423936, per=13.76%, avg=228220.25, stdev=88426.80, samples=20 00:19:56.406 iops : min= 428, max= 1656, avg=891.40, stdev=345.50, samples=20 00:19:56.406 lat (msec) : 10=0.21%, 20=1.11%, 50=31.08%, 100=45.38%, 250=22.22% 00:19:56.406 cpu : usr=0.53%, sys=2.89%, ctx=2102, majf=0, minf=4097 00:19:56.406 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:19:56.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:56.406 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:56.406 issued rwts: 
total=8980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:56.406 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:56.406 job5: (groupid=0, jobs=1): err= 0: pid=1531843: Thu Apr 25 03:20:29 2024 00:19:56.406 read: IOPS=367, BW=91.8MiB/s (96.3MB/s)(932MiB/10142msec) 00:19:56.406 slat (usec): min=9, max=754115, avg=2018.81, stdev=18810.05 00:19:56.406 clat (msec): min=3, max=1027, avg=172.05, stdev=170.68 00:19:56.406 lat (msec): min=3, max=1027, avg=174.07, stdev=172.51 00:19:56.406 clat percentiles (msec): 00:19:56.406 | 1.00th=[ 9], 5.00th=[ 30], 10.00th=[ 59], 20.00th=[ 85], 00:19:56.406 | 30.00th=[ 101], 40.00th=[ 110], 50.00th=[ 120], 60.00th=[ 132], 00:19:56.406 | 70.00th=[ 153], 80.00th=[ 199], 90.00th=[ 288], 95.00th=[ 634], 00:19:56.406 | 99.00th=[ 827], 99.50th=[ 911], 99.90th=[ 936], 99.95th=[ 1028], 00:19:56.406 | 99.99th=[ 1028] 00:19:56.406 bw ( KiB/s): min= 6144, max=247808, per=5.65%, avg=93715.85, stdev=64405.68, samples=20 00:19:56.406 iops : min= 24, max= 968, avg=365.90, stdev=251.45, samples=20 00:19:56.406 lat (msec) : 4=0.05%, 10=1.32%, 20=2.42%, 50=4.43%, 100=21.15% 00:19:56.406 lat (msec) : 250=57.65%, 500=4.91%, 750=5.34%, 1000=2.68%, 2000=0.05% 00:19:56.406 cpu : usr=0.16%, sys=0.99%, ctx=1036, majf=0, minf=4097 00:19:56.406 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:19:56.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:56.406 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:56.406 issued rwts: total=3726,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:56.406 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:56.406 job6: (groupid=0, jobs=1): err= 0: pid=1531844: Thu Apr 25 03:20:29 2024 00:19:56.406 read: IOPS=595, BW=149MiB/s (156MB/s)(1511MiB/10151msec) 00:19:56.406 slat (usec): min=9, max=456685, avg=1080.48, stdev=6967.11 00:19:56.406 clat (usec): min=1534, max=750120, avg=106305.46, stdev=77439.25 00:19:56.406 lat 
(usec): min=1561, max=750137, avg=107385.94, stdev=78098.75 00:19:56.406 clat percentiles (msec): 00:19:56.406 | 1.00th=[ 12], 5.00th=[ 31], 10.00th=[ 55], 20.00th=[ 67], 00:19:56.406 | 30.00th=[ 74], 40.00th=[ 83], 50.00th=[ 93], 60.00th=[ 105], 00:19:56.406 | 70.00th=[ 116], 80.00th=[ 127], 90.00th=[ 155], 95.00th=[ 203], 00:19:56.406 | 99.00th=[ 542], 99.50th=[ 676], 99.90th=[ 743], 99.95th=[ 743], 00:19:56.406 | 99.99th=[ 751] 00:19:56.406 bw ( KiB/s): min=16384, max=213504, per=9.23%, avg=153061.00, stdev=50502.80, samples=20 00:19:56.406 iops : min= 64, max= 834, avg=597.80, stdev=197.24, samples=20 00:19:56.406 lat (msec) : 2=0.02%, 4=0.03%, 10=0.76%, 20=1.34%, 50=6.12% 00:19:56.406 lat (msec) : 100=48.37%, 250=40.36%, 500=1.82%, 750=1.16%, 1000=0.02% 00:19:56.406 cpu : usr=0.28%, sys=1.88%, ctx=1798, majf=0, minf=4097 00:19:56.406 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:19:56.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:56.406 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:56.406 issued rwts: total=6045,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:56.406 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:56.406 job7: (groupid=0, jobs=1): err= 0: pid=1531845: Thu Apr 25 03:20:29 2024 00:19:56.406 read: IOPS=492, BW=123MiB/s (129MB/s)(1256MiB/10192msec) 00:19:56.406 slat (usec): min=9, max=843855, avg=1380.99, stdev=16877.42 00:19:56.406 clat (usec): min=1621, max=1096.0k, avg=128392.53, stdev=159063.26 00:19:56.406 lat (usec): min=1652, max=1309.2k, avg=129773.52, stdev=160720.58 00:19:56.406 clat percentiles (msec): 00:19:56.406 | 1.00th=[ 6], 5.00th=[ 18], 10.00th=[ 31], 20.00th=[ 46], 00:19:56.406 | 30.00th=[ 58], 40.00th=[ 68], 50.00th=[ 86], 60.00th=[ 100], 00:19:56.406 | 70.00th=[ 118], 80.00th=[ 144], 90.00th=[ 245], 95.00th=[ 426], 00:19:56.406 | 99.00th=[ 1003], 99.50th=[ 1070], 99.90th=[ 1099], 99.95th=[ 1099], 00:19:56.406 | 
99.99th=[ 1099] 00:19:56.407 bw ( KiB/s): min= 4608, max=286208, per=7.65%, avg=126901.40, stdev=75893.53, samples=20 00:19:56.407 iops : min= 18, max= 1118, avg=495.65, stdev=296.43, samples=20 00:19:56.407 lat (msec) : 2=0.14%, 4=0.50%, 10=1.33%, 20=3.82%, 50=15.95% 00:19:56.407 lat (msec) : 100=38.84%, 250=29.84%, 500=5.65%, 750=2.63%, 1000=0.26% 00:19:56.407 lat (msec) : 2000=1.04% 00:19:56.407 cpu : usr=0.26%, sys=1.58%, ctx=1547, majf=0, minf=4097 00:19:56.407 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:19:56.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:56.407 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:56.407 issued rwts: total=5023,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:56.407 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:56.407 job8: (groupid=0, jobs=1): err= 0: pid=1531850: Thu Apr 25 03:20:29 2024 00:19:56.407 read: IOPS=426, BW=107MiB/s (112MB/s)(1078MiB/10120msec) 00:19:56.407 slat (usec): min=10, max=687018, avg=1613.96, stdev=17287.10 00:19:56.407 clat (msec): min=2, max=1407, avg=148.44, stdev=216.20 00:19:56.407 lat (msec): min=2, max=1407, avg=150.06, stdev=218.56 00:19:56.407 clat percentiles (msec): 00:19:56.407 | 1.00th=[ 7], 5.00th=[ 16], 10.00th=[ 35], 20.00th=[ 45], 00:19:56.407 | 30.00th=[ 56], 40.00th=[ 63], 50.00th=[ 71], 60.00th=[ 84], 00:19:56.407 | 70.00th=[ 113], 80.00th=[ 161], 90.00th=[ 338], 95.00th=[ 617], 00:19:56.407 | 99.00th=[ 1334], 99.50th=[ 1401], 99.90th=[ 1401], 99.95th=[ 1401], 00:19:56.407 | 99.99th=[ 1401] 00:19:56.407 bw ( KiB/s): min= 512, max=312832, per=6.56%, avg=108759.30, stdev=93286.56, samples=20 00:19:56.407 iops : min= 2, max= 1222, avg=424.75, stdev=364.37, samples=20 00:19:56.407 lat (msec) : 4=0.21%, 10=2.36%, 20=3.50%, 50=18.97%, 100=40.97% 00:19:56.407 lat (msec) : 250=18.85%, 500=7.60%, 750=4.43%, 1000=1.65%, 2000=1.46% 00:19:56.407 cpu : usr=0.24%, sys=1.19%, ctx=1165, majf=0, 
minf=3721 00:19:56.407 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:19:56.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:56.407 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:56.407 issued rwts: total=4313,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:56.407 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:56.407 job9: (groupid=0, jobs=1): err= 0: pid=1531851: Thu Apr 25 03:20:29 2024 00:19:56.407 read: IOPS=775, BW=194MiB/s (203MB/s)(1943MiB/10027msec) 00:19:56.407 slat (usec): min=14, max=64489, avg=1272.88, stdev=3519.32 00:19:56.407 clat (msec): min=26, max=251, avg=81.25, stdev=32.22 00:19:56.407 lat (msec): min=31, max=267, avg=82.53, stdev=32.71 00:19:56.407 clat percentiles (msec): 00:19:56.407 | 1.00th=[ 44], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 51], 00:19:56.407 | 30.00th=[ 58], 40.00th=[ 68], 50.00th=[ 77], 60.00th=[ 86], 00:19:56.407 | 70.00th=[ 95], 80.00th=[ 106], 90.00th=[ 121], 95.00th=[ 131], 00:19:56.407 | 99.00th=[ 203], 99.50th=[ 207], 99.90th=[ 224], 99.95th=[ 226], 00:19:56.407 | 99.99th=[ 253] 00:19:56.407 bw ( KiB/s): min=84992, max=330240, per=11.89%, avg=197289.40, stdev=67615.97, samples=20 00:19:56.407 iops : min= 332, max= 1290, avg=770.55, stdev=264.20, samples=20 00:19:56.407 lat (msec) : 50=19.37%, 100=55.75%, 250=24.87%, 500=0.01% 00:19:56.407 cpu : usr=0.46%, sys=2.61%, ctx=1588, majf=0, minf=4097 00:19:56.407 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:56.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:56.407 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:56.407 issued rwts: total=7771,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:56.407 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:56.407 job10: (groupid=0, jobs=1): err= 0: pid=1531852: Thu Apr 25 03:20:29 2024 00:19:56.407 read: IOPS=515, BW=129MiB/s 
(135MB/s)(1295MiB/10058msec) 00:19:56.407 slat (usec): min=9, max=294444, avg=1512.98, stdev=6957.85 00:19:56.407 clat (msec): min=2, max=873, avg=122.66, stdev=112.59 00:19:56.407 lat (msec): min=2, max=873, avg=124.17, stdev=113.80 00:19:56.407 clat percentiles (msec): 00:19:56.407 | 1.00th=[ 10], 5.00th=[ 17], 10.00th=[ 26], 20.00th=[ 71], 00:19:56.407 | 30.00th=[ 81], 40.00th=[ 91], 50.00th=[ 101], 60.00th=[ 113], 00:19:56.407 | 70.00th=[ 124], 80.00th=[ 142], 90.00th=[ 201], 95.00th=[ 296], 00:19:56.407 | 99.00th=[ 684], 99.50th=[ 735], 99.90th=[ 760], 99.95th=[ 768], 00:19:56.407 | 99.99th=[ 877] 00:19:56.407 bw ( KiB/s): min=25088, max=214016, per=7.90%, avg=130993.60, stdev=54946.68, samples=20 00:19:56.407 iops : min= 98, max= 836, avg=511.55, stdev=214.64, samples=20 00:19:56.407 lat (msec) : 4=0.15%, 10=1.37%, 20=5.89%, 50=7.51%, 100=34.90% 00:19:56.407 lat (msec) : 250=43.83%, 500=3.07%, 750=3.03%, 1000=0.25% 00:19:56.407 cpu : usr=0.38%, sys=1.62%, ctx=1458, majf=0, minf=4097 00:19:56.407 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:19:56.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:56.407 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:56.407 issued rwts: total=5181,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:56.407 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:56.407 00:19:56.407 Run status group 0 (all jobs): 00:19:56.407 READ: bw=1620MiB/s (1699MB/s), 91.8MiB/s-224MiB/s (96.3MB/s-235MB/s), io=16.1GiB (17.3GB), run=10021-10192msec 00:19:56.407 00:19:56.407 Disk stats (read/write): 00:19:56.407 nvme0n1: ios=17678/0, merge=0/0, ticks=1232802/0, in_queue=1232802, util=97.18% 00:19:56.407 nvme10n1: ios=11440/0, merge=0/0, ticks=1236586/0, in_queue=1236586, util=97.39% 00:19:56.407 nvme1n1: ios=10889/0, merge=0/0, ticks=1213557/0, in_queue=1213557, util=97.65% 00:19:56.407 nvme2n1: ios=9300/0, merge=0/0, ticks=1193430/0, in_queue=1193430, 
util=97.81% 00:19:56.407 nvme3n1: ios=17711/0, merge=0/0, ticks=1233029/0, in_queue=1233029, util=97.86% 00:19:56.407 nvme4n1: ios=7325/0, merge=0/0, ticks=1221226/0, in_queue=1221226, util=98.19% 00:19:56.407 nvme5n1: ios=11915/0, merge=0/0, ticks=1188946/0, in_queue=1188946, util=98.35% 00:19:56.407 nvme6n1: ios=10045/0, merge=0/0, ticks=1270429/0, in_queue=1270429, util=98.51% 00:19:56.407 nvme7n1: ios=8406/0, merge=0/0, ticks=1207504/0, in_queue=1207504, util=98.90% 00:19:56.407 nvme8n1: ios=15251/0, merge=0/0, ticks=1228735/0, in_queue=1228735, util=99.07% 00:19:56.407 nvme9n1: ios=10122/0, merge=0/0, ticks=1201414/0, in_queue=1201414, util=99.21% 00:19:56.407 03:20:29 -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:19:56.407 [global] 00:19:56.407 thread=1 00:19:56.407 invalidate=1 00:19:56.407 rw=randwrite 00:19:56.407 time_based=1 00:19:56.407 runtime=10 00:19:56.407 ioengine=libaio 00:19:56.407 direct=1 00:19:56.407 bs=262144 00:19:56.407 iodepth=64 00:19:56.407 norandommap=1 00:19:56.407 numjobs=1 00:19:56.407 00:19:56.407 [job0] 00:19:56.407 filename=/dev/nvme0n1 00:19:56.407 [job1] 00:19:56.407 filename=/dev/nvme10n1 00:19:56.407 [job2] 00:19:56.407 filename=/dev/nvme1n1 00:19:56.407 [job3] 00:19:56.407 filename=/dev/nvme2n1 00:19:56.407 [job4] 00:19:56.407 filename=/dev/nvme3n1 00:19:56.407 [job5] 00:19:56.407 filename=/dev/nvme4n1 00:19:56.407 [job6] 00:19:56.407 filename=/dev/nvme5n1 00:19:56.407 [job7] 00:19:56.407 filename=/dev/nvme6n1 00:19:56.407 [job8] 00:19:56.407 filename=/dev/nvme7n1 00:19:56.407 [job9] 00:19:56.407 filename=/dev/nvme8n1 00:19:56.407 [job10] 00:19:56.407 filename=/dev/nvme9n1 00:19:56.407 Could not set queue depth (nvme0n1) 00:19:56.407 Could not set queue depth (nvme10n1) 00:19:56.407 Could not set queue depth (nvme1n1) 00:19:56.407 Could not set queue depth (nvme2n1) 00:19:56.407 Could not set queue depth (nvme3n1) 
00:19:56.407 Could not set queue depth (nvme4n1) 00:19:56.407 Could not set queue depth (nvme5n1) 00:19:56.407 Could not set queue depth (nvme6n1) 00:19:56.407 Could not set queue depth (nvme7n1) 00:19:56.407 Could not set queue depth (nvme8n1) 00:19:56.407 Could not set queue depth (nvme9n1) 00:19:56.407 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:56.407 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:56.407 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:56.407 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:56.407 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:56.407 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:56.407 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:56.407 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:56.407 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:56.407 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:56.407 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:56.407 fio-3.35 00:19:56.407 Starting 11 threads 00:20:06.382 00:20:06.382 job0: (groupid=0, jobs=1): err= 0: pid=1532825: Thu Apr 25 03:20:40 2024 00:20:06.382 write: IOPS=370, BW=92.7MiB/s (97.2MB/s)(945MiB/10193msec); 0 zone resets 00:20:06.382 slat (usec): min=18, max=492963, 
avg=1843.25, stdev=9275.29 00:20:06.382 clat (msec): min=3, max=752, avg=170.60, stdev=106.84 00:20:06.382 lat (msec): min=3, max=752, avg=172.45, stdev=107.86 00:20:06.382 clat percentiles (msec): 00:20:06.382 | 1.00th=[ 22], 5.00th=[ 50], 10.00th=[ 72], 20.00th=[ 106], 00:20:06.382 | 30.00th=[ 126], 40.00th=[ 148], 50.00th=[ 165], 60.00th=[ 180], 00:20:06.382 | 70.00th=[ 190], 80.00th=[ 199], 90.00th=[ 232], 95.00th=[ 284], 00:20:06.382 | 99.00th=[ 735], 99.50th=[ 735], 99.90th=[ 751], 99.95th=[ 751], 00:20:06.382 | 99.99th=[ 751] 00:20:06.382 bw ( KiB/s): min=10240, max=164864, per=8.50%, avg=95171.60, stdev=33625.45, samples=20 00:20:06.382 iops : min= 40, max= 644, avg=371.75, stdev=131.35, samples=20 00:20:06.382 lat (msec) : 4=0.03%, 10=0.26%, 20=0.56%, 50=4.28%, 100=12.03% 00:20:06.382 lat (msec) : 250=75.67%, 500=4.36%, 750=2.72%, 1000=0.08% 00:20:06.382 cpu : usr=1.08%, sys=1.09%, ctx=2129, majf=0, minf=1 00:20:06.382 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.3% 00:20:06.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:06.382 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:20:06.382 issued rwts: total=0,3781,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:06.382 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:06.382 job1: (groupid=0, jobs=1): err= 0: pid=1532865: Thu Apr 25 03:20:40 2024 00:20:06.382 write: IOPS=392, BW=98.2MiB/s (103MB/s)(995MiB/10128msec); 0 zone resets 00:20:06.382 slat (usec): min=16, max=190190, avg=1610.86, stdev=6227.01 00:20:06.382 clat (msec): min=2, max=812, avg=161.18, stdev=115.12 00:20:06.382 lat (msec): min=2, max=812, avg=162.79, stdev=115.91 00:20:06.382 clat percentiles (msec): 00:20:06.382 | 1.00th=[ 15], 5.00th=[ 34], 10.00th=[ 49], 20.00th=[ 75], 00:20:06.382 | 30.00th=[ 92], 40.00th=[ 120], 50.00th=[ 148], 60.00th=[ 169], 00:20:06.382 | 70.00th=[ 188], 80.00th=[ 220], 90.00th=[ 288], 95.00th=[ 355], 00:20:06.382 | 
99.00th=[ 701], 99.50th=[ 793], 99.90th=[ 810], 99.95th=[ 810], 00:20:06.382 | 99.99th=[ 810] 00:20:06.382 bw ( KiB/s): min=43520, max=221696, per=8.96%, avg=100249.60, stdev=44378.32, samples=20 00:20:06.382 iops : min= 170, max= 866, avg=391.60, stdev=173.35, samples=20 00:20:06.382 lat (msec) : 4=0.05%, 10=0.43%, 20=1.61%, 50=8.29%, 100=23.39% 00:20:06.382 lat (msec) : 250=50.95%, 500=13.59%, 750=0.95%, 1000=0.73% 00:20:06.382 cpu : usr=1.18%, sys=1.19%, ctx=2296, majf=0, minf=1 00:20:06.382 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:20:06.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:06.382 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:20:06.382 issued rwts: total=0,3980,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:06.382 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:06.382 job2: (groupid=0, jobs=1): err= 0: pid=1532888: Thu Apr 25 03:20:40 2024 00:20:06.382 write: IOPS=341, BW=85.5MiB/s (89.6MB/s)(875MiB/10242msec); 0 zone resets 00:20:06.382 slat (usec): min=23, max=554126, avg=2345.65, stdev=13523.51 00:20:06.382 clat (msec): min=4, max=1578, avg=184.74, stdev=179.49 00:20:06.382 lat (msec): min=6, max=1578, avg=187.08, stdev=181.61 00:20:06.382 clat percentiles (msec): 00:20:06.382 | 1.00th=[ 22], 5.00th=[ 47], 10.00th=[ 69], 20.00th=[ 91], 00:20:06.382 | 30.00th=[ 120], 40.00th=[ 144], 50.00th=[ 159], 60.00th=[ 174], 00:20:06.382 | 70.00th=[ 190], 80.00th=[ 215], 90.00th=[ 253], 95.00th=[ 401], 00:20:06.382 | 99.00th=[ 1116], 99.50th=[ 1536], 99.90th=[ 1552], 99.95th=[ 1569], 00:20:06.382 | 99.99th=[ 1586] 00:20:06.382 bw ( KiB/s): min= 4598, max=176128, per=7.86%, avg=88012.30, stdev=42432.83, samples=20 00:20:06.382 iops : min= 17, max= 688, avg=343.75, stdev=165.85, samples=20 00:20:06.382 lat (msec) : 10=0.17%, 20=0.71%, 50=4.94%, 100=17.08%, 250=66.21% 00:20:06.382 lat (msec) : 500=7.57%, 750=1.26%, 1000=0.51%, 2000=1.54% 00:20:06.382 cpu 
: usr=0.94%, sys=1.04%, ctx=1700, majf=0, minf=1 00:20:06.382 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:20:06.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:06.382 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:20:06.382 issued rwts: total=0,3501,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:06.382 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:06.382 job3: (groupid=0, jobs=1): err= 0: pid=1532889: Thu Apr 25 03:20:40 2024 00:20:06.382 write: IOPS=433, BW=108MiB/s (114MB/s)(1103MiB/10181msec); 0 zone resets 00:20:06.382 slat (usec): min=22, max=493171, avg=1559.01, stdev=8365.34 00:20:06.382 clat (msec): min=4, max=777, avg=146.12, stdev=92.62 00:20:06.382 lat (msec): min=4, max=778, avg=147.67, stdev=93.70 00:20:06.382 clat percentiles (msec): 00:20:06.382 | 1.00th=[ 32], 5.00th=[ 58], 10.00th=[ 72], 20.00th=[ 92], 00:20:06.382 | 30.00th=[ 104], 40.00th=[ 122], 50.00th=[ 130], 60.00th=[ 144], 00:20:06.382 | 70.00th=[ 163], 80.00th=[ 182], 90.00th=[ 230], 95.00th=[ 264], 00:20:06.382 | 99.00th=[ 743], 99.50th=[ 760], 99.90th=[ 760], 99.95th=[ 776], 00:20:06.382 | 99.99th=[ 776] 00:20:06.382 bw ( KiB/s): min=10240, max=179712, per=9.94%, avg=111282.40, stdev=39341.68, samples=20 00:20:06.382 iops : min= 40, max= 702, avg=434.65, stdev=153.68, samples=20 00:20:06.382 lat (msec) : 10=0.02%, 20=0.36%, 50=2.54%, 100=25.78%, 250=63.88% 00:20:06.382 lat (msec) : 500=5.99%, 750=0.68%, 1000=0.75% 00:20:06.382 cpu : usr=1.19%, sys=1.59%, ctx=2405, majf=0, minf=1 00:20:06.382 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:20:06.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:06.382 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:20:06.382 issued rwts: total=0,4410,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:06.382 latency : target=0, window=0, percentile=100.00%, depth=64 
00:20:06.382 job4: (groupid=0, jobs=1): err= 0: pid=1532890: Thu Apr 25 03:20:40 2024 00:20:06.382 write: IOPS=335, BW=83.8MiB/s (87.9MB/s)(849MiB/10126msec); 0 zone resets 00:20:06.382 slat (usec): min=18, max=493258, avg=1996.96, stdev=10458.11 00:20:06.382 clat (msec): min=3, max=1043, avg=188.69, stdev=161.91 00:20:06.382 lat (msec): min=4, max=1043, avg=190.68, stdev=163.13 00:20:06.382 clat percentiles (msec): 00:20:06.382 | 1.00th=[ 12], 5.00th=[ 25], 10.00th=[ 34], 20.00th=[ 61], 00:20:06.382 | 30.00th=[ 112], 40.00th=[ 153], 50.00th=[ 171], 60.00th=[ 192], 00:20:06.382 | 70.00th=[ 213], 80.00th=[ 239], 90.00th=[ 317], 95.00th=[ 435], 00:20:06.382 | 99.00th=[ 995], 99.50th=[ 1028], 99.90th=[ 1036], 99.95th=[ 1036], 00:20:06.382 | 99.99th=[ 1045] 00:20:06.382 bw ( KiB/s): min=13312, max=140288, per=7.62%, avg=85323.10, stdev=28905.81, samples=20 00:20:06.382 iops : min= 52, max= 548, avg=333.25, stdev=112.93, samples=20 00:20:06.382 lat (msec) : 4=0.03%, 10=0.62%, 20=2.47%, 50=14.72%, 100=8.19% 00:20:06.382 lat (msec) : 250=56.07%, 500=13.87%, 750=2.53%, 1000=0.80%, 2000=0.71% 00:20:06.382 cpu : usr=0.95%, sys=1.14%, ctx=1934, majf=0, minf=1 00:20:06.382 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.1% 00:20:06.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:06.382 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:20:06.382 issued rwts: total=0,3396,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:06.382 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:06.382 job5: (groupid=0, jobs=1): err= 0: pid=1532895: Thu Apr 25 03:20:40 2024 00:20:06.382 write: IOPS=438, BW=110MiB/s (115MB/s)(1120MiB/10207msec); 0 zone resets 00:20:06.382 slat (usec): min=18, max=492781, avg=1472.13, stdev=8362.41 00:20:06.382 clat (usec): min=1849, max=770641, avg=144318.44, stdev=103692.84 00:20:06.382 lat (usec): min=1889, max=774523, avg=145790.56, stdev=104573.24 00:20:06.382 clat 
percentiles (msec): 00:20:06.382 | 1.00th=[ 12], 5.00th=[ 28], 10.00th=[ 47], 20.00th=[ 69], 00:20:06.382 | 30.00th=[ 105], 40.00th=[ 123], 50.00th=[ 138], 60.00th=[ 150], 00:20:06.382 | 70.00th=[ 163], 80.00th=[ 184], 90.00th=[ 220], 95.00th=[ 275], 00:20:06.382 | 99.00th=[ 735], 99.50th=[ 751], 99.90th=[ 768], 99.95th=[ 768], 00:20:06.383 | 99.99th=[ 768] 00:20:06.383 bw ( KiB/s): min=17408, max=220160, per=10.10%, avg=113024.00, stdev=48085.52, samples=20 00:20:06.383 iops : min= 68, max= 860, avg=441.50, stdev=187.83, samples=20 00:20:06.383 lat (msec) : 2=0.02%, 4=0.04%, 10=0.58%, 20=1.85%, 50=8.20% 00:20:06.383 lat (msec) : 100=17.95%, 250=63.64%, 500=5.83%, 750=1.38%, 1000=0.49% 00:20:06.383 cpu : usr=1.20%, sys=1.16%, ctx=2546, majf=0, minf=1 00:20:06.383 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:20:06.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:06.383 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:20:06.383 issued rwts: total=0,4478,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:06.383 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:06.383 job6: (groupid=0, jobs=1): err= 0: pid=1532896: Thu Apr 25 03:20:40 2024 00:20:06.383 write: IOPS=314, BW=78.6MiB/s (82.4MB/s)(801MiB/10193msec); 0 zone resets 00:20:06.383 slat (usec): min=25, max=493204, avg=2815.16, stdev=10959.51 00:20:06.383 clat (msec): min=3, max=755, avg=200.08, stdev=110.48 00:20:06.383 lat (msec): min=3, max=756, avg=202.89, stdev=111.82 00:20:06.383 clat percentiles (msec): 00:20:06.383 | 1.00th=[ 23], 5.00th=[ 79], 10.00th=[ 99], 20.00th=[ 138], 00:20:06.383 | 30.00th=[ 161], 40.00th=[ 174], 50.00th=[ 188], 60.00th=[ 197], 00:20:06.383 | 70.00th=[ 209], 80.00th=[ 226], 90.00th=[ 292], 95.00th=[ 405], 00:20:06.383 | 99.00th=[ 743], 99.50th=[ 751], 99.90th=[ 751], 99.95th=[ 760], 00:20:06.383 | 99.99th=[ 760] 00:20:06.383 bw ( KiB/s): min=10752, max=134656, per=7.19%, avg=80435.20, 
stdev=30854.85, samples=20 00:20:06.383 iops : min= 42, max= 526, avg=314.20, stdev=120.53, samples=20 00:20:06.383 lat (msec) : 4=0.06%, 10=0.37%, 20=0.50%, 50=2.37%, 100=6.96% 00:20:06.383 lat (msec) : 250=75.32%, 500=11.61%, 750=2.31%, 1000=0.50% 00:20:06.383 cpu : usr=1.07%, sys=1.01%, ctx=1256, majf=0, minf=1 00:20:06.383 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.0% 00:20:06.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:06.383 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:20:06.383 issued rwts: total=0,3205,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:06.383 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:06.383 job7: (groupid=0, jobs=1): err= 0: pid=1532897: Thu Apr 25 03:20:40 2024 00:20:06.383 write: IOPS=513, BW=128MiB/s (135MB/s)(1322MiB/10292msec); 0 zone resets 00:20:06.383 slat (usec): min=16, max=150158, avg=1580.54, stdev=4570.31 00:20:06.383 clat (msec): min=6, max=550, avg=122.87, stdev=75.05 00:20:06.383 lat (msec): min=6, max=550, avg=124.46, stdev=75.59 00:20:06.383 clat percentiles (msec): 00:20:06.383 | 1.00th=[ 19], 5.00th=[ 55], 10.00th=[ 66], 20.00th=[ 77], 00:20:06.383 | 30.00th=[ 87], 40.00th=[ 94], 50.00th=[ 106], 60.00th=[ 114], 00:20:06.383 | 70.00th=[ 126], 80.00th=[ 142], 90.00th=[ 213], 95.00th=[ 296], 00:20:06.383 | 99.00th=[ 443], 99.50th=[ 506], 99.90th=[ 527], 99.95th=[ 535], 00:20:06.383 | 99.99th=[ 550] 00:20:06.383 bw ( KiB/s): min=51200, max=206848, per=11.95%, avg=133779.50, stdev=40791.11, samples=20 00:20:06.383 iops : min= 200, max= 808, avg=522.50, stdev=159.39, samples=20 00:20:06.383 lat (msec) : 10=0.13%, 20=0.98%, 50=2.93%, 100=42.14%, 250=47.06% 00:20:06.383 lat (msec) : 500=6.18%, 750=0.57% 00:20:06.383 cpu : usr=1.51%, sys=1.61%, ctx=1953, majf=0, minf=1 00:20:06.383 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:20:06.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:20:06.383 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:20:06.383 issued rwts: total=0,5289,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:06.383 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:06.383 job8: (groupid=0, jobs=1): err= 0: pid=1532898: Thu Apr 25 03:20:40 2024 00:20:06.383 write: IOPS=505, BW=126MiB/s (133MB/s)(1279MiB/10119msec); 0 zone resets 00:20:06.383 slat (usec): min=17, max=91116, avg=1360.03, stdev=3705.05 00:20:06.383 clat (msec): min=2, max=635, avg=125.15, stdev=67.52 00:20:06.383 lat (msec): min=2, max=635, avg=126.51, stdev=67.91 00:20:06.383 clat percentiles (msec): 00:20:06.383 | 1.00th=[ 15], 5.00th=[ 45], 10.00th=[ 67], 20.00th=[ 84], 00:20:06.383 | 30.00th=[ 96], 40.00th=[ 108], 50.00th=[ 116], 60.00th=[ 126], 00:20:06.383 | 70.00th=[ 140], 80.00th=[ 163], 90.00th=[ 188], 95.00th=[ 211], 00:20:06.383 | 99.00th=[ 518], 99.50th=[ 558], 99.90th=[ 625], 99.95th=[ 634], 00:20:06.383 | 99.99th=[ 634] 00:20:06.383 bw ( KiB/s): min=34816, max=208384, per=11.56%, avg=129369.75, stdev=37848.66, samples=20 00:20:06.383 iops : min= 136, max= 814, avg=505.35, stdev=147.85, samples=20 00:20:06.383 lat (msec) : 4=0.18%, 10=0.23%, 20=1.88%, 50=3.77%, 100=27.38% 00:20:06.383 lat (msec) : 250=64.74%, 500=0.66%, 750=1.15% 00:20:06.383 cpu : usr=1.41%, sys=1.43%, ctx=2586, majf=0, minf=1 00:20:06.383 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:20:06.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:06.383 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:20:06.383 issued rwts: total=0,5117,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:06.383 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:06.383 job9: (groupid=0, jobs=1): err= 0: pid=1532899: Thu Apr 25 03:20:40 2024 00:20:06.383 write: IOPS=404, BW=101MiB/s (106MB/s)(1022MiB/10096msec); 0 zone resets 00:20:06.383 slat (usec): min=22, 
max=569426, avg=1351.21, stdev=13121.28 00:20:06.383 clat (msec): min=2, max=1473, avg=156.63, stdev=176.62 00:20:06.383 lat (msec): min=2, max=1473, avg=157.98, stdev=177.37 00:20:06.383 clat percentiles (msec): 00:20:06.383 | 1.00th=[ 11], 5.00th=[ 27], 10.00th=[ 41], 20.00th=[ 65], 00:20:06.383 | 30.00th=[ 82], 40.00th=[ 96], 50.00th=[ 111], 60.00th=[ 131], 00:20:06.383 | 70.00th=[ 174], 80.00th=[ 205], 90.00th=[ 259], 95.00th=[ 384], 00:20:06.383 | 99.00th=[ 877], 99.50th=[ 1435], 99.90th=[ 1469], 99.95th=[ 1469], 00:20:06.383 | 99.99th=[ 1469] 00:20:06.383 bw ( KiB/s): min= 2554, max=217600, per=9.69%, avg=108445.53, stdev=51449.23, samples=19 00:20:06.383 iops : min= 9, max= 850, avg=423.53, stdev=201.10, samples=19 00:20:06.383 lat (msec) : 4=0.27%, 10=0.73%, 20=2.10%, 50=11.06%, 100=29.29% 00:20:06.383 lat (msec) : 250=45.31%, 500=7.27%, 750=2.01%, 1000=1.03%, 2000=0.93% 00:20:06.383 cpu : usr=1.08%, sys=1.43%, ctx=2699, majf=0, minf=1 00:20:06.383 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:20:06.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:06.383 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:20:06.383 issued rwts: total=0,4087,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:06.383 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:06.383 job10: (groupid=0, jobs=1): err= 0: pid=1532900: Thu Apr 25 03:20:40 2024 00:20:06.383 write: IOPS=369, BW=92.5MiB/s (96.9MB/s)(937MiB/10135msec); 0 zone resets 00:20:06.383 slat (usec): min=25, max=64701, avg=2577.87, stdev=5434.77 00:20:06.383 clat (msec): min=21, max=324, avg=170.40, stdev=50.84 00:20:06.383 lat (msec): min=21, max=325, avg=172.98, stdev=51.29 00:20:06.383 clat percentiles (msec): 00:20:06.383 | 1.00th=[ 79], 5.00th=[ 91], 10.00th=[ 101], 20.00th=[ 118], 00:20:06.383 | 30.00th=[ 146], 40.00th=[ 163], 50.00th=[ 176], 60.00th=[ 186], 00:20:06.383 | 70.00th=[ 199], 80.00th=[ 213], 90.00th=[ 234], 
95.00th=[ 253], 00:20:06.383 | 99.00th=[ 309], 99.50th=[ 313], 99.90th=[ 321], 99.95th=[ 326], 00:20:06.383 | 99.99th=[ 326] 00:20:06.383 bw ( KiB/s): min=61440, max=150016, per=8.43%, avg=94310.40, stdev=25364.80, samples=20 00:20:06.383 iops : min= 240, max= 586, avg=368.40, stdev=99.08, samples=20 00:20:06.383 lat (msec) : 50=0.32%, 100=9.79%, 250=84.28%, 500=5.60% 00:20:06.383 cpu : usr=1.13%, sys=1.09%, ctx=1124, majf=0, minf=1 00:20:06.383 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:20:06.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:06.383 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:20:06.383 issued rwts: total=0,3748,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:06.383 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:06.383 00:20:06.383 Run status group 0 (all jobs): 00:20:06.383 WRITE: bw=1093MiB/s (1146MB/s), 78.6MiB/s-128MiB/s (82.4MB/s-135MB/s), io=11.0GiB (11.8GB), run=10096-10292msec 00:20:06.383 00:20:06.383 Disk stats (read/write): 00:20:06.383 nvme0n1: ios=49/7515, merge=0/0, ticks=43/1241394, in_queue=1241437, util=96.99% 00:20:06.383 nvme10n1: ios=46/7727, merge=0/0, ticks=1551/1215335, in_queue=1216886, util=100.00% 00:20:06.383 nvme1n1: ios=45/6924, merge=0/0, ticks=4481/1222083, in_queue=1226564, util=100.00% 00:20:06.383 nvme2n1: ios=27/8788, merge=0/0, ticks=36/1242322, in_queue=1242358, util=97.51% 00:20:06.383 nvme3n1: ios=46/6556, merge=0/0, ticks=2086/1210001, in_queue=1212087, util=100.00% 00:20:06.383 nvme4n1: ios=0/8894, merge=0/0, ticks=0/1240133, in_queue=1240133, util=97.81% 00:20:06.383 nvme5n1: ios=45/6357, merge=0/0, ticks=1998/1226596, in_queue=1228594, util=100.00% 00:20:06.383 nvme6n1: ios=21/10467, merge=0/0, ticks=160/1216344, in_queue=1216504, util=98.79% 00:20:06.383 nvme7n1: ios=0/9965, merge=0/0, ticks=0/1216671, in_queue=1216671, util=98.77% 00:20:06.383 nvme8n1: ios=41/7902, merge=0/0, ticks=1496/1228065, 
in_queue=1229561, util=100.00% 00:20:06.383 nvme9n1: ios=45/7488, merge=0/0, ticks=1268/1231527, in_queue=1232795, util=100.00% 00:20:06.383 03:20:40 -- target/multiconnection.sh@36 -- # sync 00:20:06.383 03:20:40 -- target/multiconnection.sh@37 -- # seq 1 11 00:20:06.383 03:20:40 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:06.383 03:20:40 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:06.383 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:06.383 03:20:40 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:20:06.383 03:20:40 -- common/autotest_common.sh@1205 -- # local i=0 00:20:06.383 03:20:40 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:20:06.383 03:20:40 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK1 00:20:06.383 03:20:40 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:20:06.383 03:20:40 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK1 00:20:06.383 03:20:40 -- common/autotest_common.sh@1217 -- # return 0 00:20:06.383 03:20:40 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:06.383 03:20:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:06.383 03:20:40 -- common/autotest_common.sh@10 -- # set +x 00:20:06.383 03:20:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:06.384 03:20:40 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:06.384 03:20:40 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:20:06.384 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:20:06.384 03:20:40 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:20:06.384 03:20:40 -- common/autotest_common.sh@1205 -- # local i=0 00:20:06.384 03:20:40 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:20:06.384 03:20:40 -- common/autotest_common.sh@1206 -- # grep -q -w 
SPDK2 00:20:06.384 03:20:40 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:20:06.384 03:20:40 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK2 00:20:06.384 03:20:40 -- common/autotest_common.sh@1217 -- # return 0 00:20:06.384 03:20:40 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:06.384 03:20:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:06.384 03:20:40 -- common/autotest_common.sh@10 -- # set +x 00:20:06.384 03:20:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:06.384 03:20:40 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:06.384 03:20:40 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:20:06.644 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:20:06.644 03:20:41 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:20:06.644 03:20:41 -- common/autotest_common.sh@1205 -- # local i=0 00:20:06.644 03:20:41 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:20:06.644 03:20:41 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK3 00:20:06.644 03:20:41 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:20:06.644 03:20:41 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK3 00:20:06.644 03:20:41 -- common/autotest_common.sh@1217 -- # return 0 00:20:06.644 03:20:41 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:20:06.644 03:20:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:06.644 03:20:41 -- common/autotest_common.sh@10 -- # set +x 00:20:06.644 03:20:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:06.644 03:20:41 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:06.644 03:20:41 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:20:06.904 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 
00:20:06.904 03:20:41 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:20:06.904 03:20:41 -- common/autotest_common.sh@1205 -- # local i=0 00:20:06.904 03:20:41 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:20:06.904 03:20:41 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK4 00:20:06.904 03:20:41 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:20:06.904 03:20:41 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK4 00:20:06.904 03:20:41 -- common/autotest_common.sh@1217 -- # return 0 00:20:06.904 03:20:41 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:20:06.904 03:20:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:06.904 03:20:41 -- common/autotest_common.sh@10 -- # set +x 00:20:06.904 03:20:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:06.904 03:20:41 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:06.904 03:20:41 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:20:07.164 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:20:07.164 03:20:41 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:20:07.164 03:20:41 -- common/autotest_common.sh@1205 -- # local i=0 00:20:07.164 03:20:41 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:20:07.164 03:20:41 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK5 00:20:07.164 03:20:41 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:20:07.164 03:20:41 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK5 00:20:07.164 03:20:41 -- common/autotest_common.sh@1217 -- # return 0 00:20:07.164 03:20:41 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:20:07.164 03:20:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:07.164 03:20:41 -- common/autotest_common.sh@10 -- # set +x 00:20:07.164 03:20:41 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:07.164 03:20:41 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:07.164 03:20:41 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:20:07.425 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:20:07.425 03:20:41 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:20:07.425 03:20:41 -- common/autotest_common.sh@1205 -- # local i=0 00:20:07.425 03:20:41 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:20:07.425 03:20:41 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK6 00:20:07.425 03:20:41 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:20:07.425 03:20:41 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK6 00:20:07.425 03:20:41 -- common/autotest_common.sh@1217 -- # return 0 00:20:07.425 03:20:41 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:20:07.425 03:20:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:07.425 03:20:41 -- common/autotest_common.sh@10 -- # set +x 00:20:07.425 03:20:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:07.425 03:20:41 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:07.425 03:20:41 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:20:07.683 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:20:07.683 03:20:41 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:20:07.683 03:20:41 -- common/autotest_common.sh@1205 -- # local i=0 00:20:07.683 03:20:41 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:20:07.683 03:20:41 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK7 00:20:07.683 03:20:41 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:20:07.683 03:20:41 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK7 00:20:07.683 03:20:41 -- 
common/autotest_common.sh@1217 -- # return 0 00:20:07.683 03:20:41 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:20:07.683 03:20:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:07.683 03:20:41 -- common/autotest_common.sh@10 -- # set +x 00:20:07.683 03:20:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:07.683 03:20:41 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:07.683 03:20:41 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:20:07.683 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:20:07.683 03:20:42 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:20:07.683 03:20:42 -- common/autotest_common.sh@1205 -- # local i=0 00:20:07.683 03:20:42 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:20:07.683 03:20:42 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK8 00:20:07.683 03:20:42 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:20:07.683 03:20:42 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK8 00:20:07.683 03:20:42 -- common/autotest_common.sh@1217 -- # return 0 00:20:07.683 03:20:42 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:20:07.683 03:20:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:07.683 03:20:42 -- common/autotest_common.sh@10 -- # set +x 00:20:07.683 03:20:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:07.683 03:20:42 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:07.683 03:20:42 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:20:07.942 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:20:07.942 03:20:42 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:20:07.942 03:20:42 -- common/autotest_common.sh@1205 -- # local i=0 00:20:07.942 03:20:42 -- 
common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:20:07.942 03:20:42 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK9 00:20:07.942 03:20:42 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:20:07.942 03:20:42 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK9 00:20:07.942 03:20:42 -- common/autotest_common.sh@1217 -- # return 0 00:20:07.942 03:20:42 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:20:07.942 03:20:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:07.942 03:20:42 -- common/autotest_common.sh@10 -- # set +x 00:20:07.942 03:20:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:07.942 03:20:42 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:07.942 03:20:42 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:20:07.942 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:20:07.942 03:20:42 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:20:07.942 03:20:42 -- common/autotest_common.sh@1205 -- # local i=0 00:20:07.942 03:20:42 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:20:07.942 03:20:42 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK10 00:20:07.942 03:20:42 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:20:07.942 03:20:42 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK10 00:20:07.942 03:20:42 -- common/autotest_common.sh@1217 -- # return 0 00:20:07.942 03:20:42 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:20:07.942 03:20:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:07.942 03:20:42 -- common/autotest_common.sh@10 -- # set +x 00:20:07.942 03:20:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:07.942 03:20:42 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:07.942 03:20:42 -- target/multiconnection.sh@38 
-- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:20:08.202 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:20:08.202 03:20:42 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:20:08.202 03:20:42 -- common/autotest_common.sh@1205 -- # local i=0 00:20:08.202 03:20:42 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:20:08.202 03:20:42 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK11 00:20:08.202 03:20:42 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:20:08.202 03:20:42 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK11 00:20:08.202 03:20:42 -- common/autotest_common.sh@1217 -- # return 0 00:20:08.202 03:20:42 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:20:08.202 03:20:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:08.202 03:20:42 -- common/autotest_common.sh@10 -- # set +x 00:20:08.202 03:20:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:08.202 03:20:42 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:20:08.202 03:20:42 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:20:08.203 03:20:42 -- target/multiconnection.sh@47 -- # nvmftestfini 00:20:08.203 03:20:42 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:08.203 03:20:42 -- nvmf/common.sh@117 -- # sync 00:20:08.203 03:20:42 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:08.203 03:20:42 -- nvmf/common.sh@120 -- # set +e 00:20:08.203 03:20:42 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:08.203 03:20:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:08.203 rmmod nvme_tcp 00:20:08.203 rmmod nvme_fabrics 00:20:08.203 rmmod nvme_keyring 00:20:08.203 03:20:42 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:08.203 03:20:42 -- nvmf/common.sh@124 -- # set -e 00:20:08.203 03:20:42 -- nvmf/common.sh@125 -- # return 0 00:20:08.203 03:20:42 -- nvmf/common.sh@478 -- # '[' -n 1527575 ']' 
00:20:08.203 03:20:42 -- nvmf/common.sh@479 -- # killprocess 1527575 00:20:08.203 03:20:42 -- common/autotest_common.sh@936 -- # '[' -z 1527575 ']' 00:20:08.203 03:20:42 -- common/autotest_common.sh@940 -- # kill -0 1527575 00:20:08.203 03:20:42 -- common/autotest_common.sh@941 -- # uname 00:20:08.203 03:20:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:08.203 03:20:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1527575 00:20:08.203 03:20:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:08.203 03:20:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:08.203 03:20:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1527575' 00:20:08.203 killing process with pid 1527575 00:20:08.203 03:20:42 -- common/autotest_common.sh@955 -- # kill 1527575 00:20:08.203 03:20:42 -- common/autotest_common.sh@960 -- # wait 1527575 00:20:08.771 03:20:43 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:08.771 03:20:43 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:08.771 03:20:43 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:08.771 03:20:43 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:08.771 03:20:43 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:08.771 03:20:43 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:08.771 03:20:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:08.771 03:20:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:11.355 03:20:45 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:11.355 00:20:11.355 real 1m0.243s 00:20:11.355 user 3m21.337s 00:20:11.355 sys 0m22.225s 00:20:11.355 03:20:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:11.355 03:20:45 -- common/autotest_common.sh@10 -- # set +x 00:20:11.355 ************************************ 00:20:11.355 END TEST nvmf_multiconnection 00:20:11.355 ************************************ 
00:20:11.355 03:20:45 -- nvmf/nvmf.sh@67 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:20:11.355 03:20:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:11.355 03:20:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:11.355 03:20:45 -- common/autotest_common.sh@10 -- # set +x 00:20:11.355 ************************************ 00:20:11.355 START TEST nvmf_initiator_timeout 00:20:11.355 ************************************ 00:20:11.355 03:20:45 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:20:11.355 * Looking for test storage... 00:20:11.355 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:11.356 03:20:45 -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:11.356 03:20:45 -- nvmf/common.sh@7 -- # uname -s 00:20:11.356 03:20:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:11.356 03:20:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:11.356 03:20:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:11.356 03:20:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:11.356 03:20:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:11.356 03:20:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:11.356 03:20:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:11.356 03:20:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:11.356 03:20:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:11.356 03:20:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:11.356 03:20:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:11.356 03:20:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 
00:20:11.356 03:20:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:11.356 03:20:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:11.356 03:20:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:11.356 03:20:45 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:11.356 03:20:45 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:11.356 03:20:45 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:11.356 03:20:45 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:11.356 03:20:45 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:11.356 03:20:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:11.356 03:20:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:11.356 03:20:45 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:11.356 03:20:45 -- paths/export.sh@5 -- # export PATH 00:20:11.356 03:20:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:11.356 03:20:45 -- nvmf/common.sh@47 -- # : 0 00:20:11.356 03:20:45 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:11.356 03:20:45 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:11.356 03:20:45 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:11.356 03:20:45 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:11.356 03:20:45 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:11.356 03:20:45 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:11.356 03:20:45 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:11.356 03:20:45 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:11.356 03:20:45 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:11.356 03:20:45 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:11.356 03:20:45 -- 
target/initiator_timeout.sh@14 -- # nvmftestinit 00:20:11.356 03:20:45 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:11.356 03:20:45 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:11.356 03:20:45 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:11.356 03:20:45 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:11.356 03:20:45 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:11.356 03:20:45 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:11.356 03:20:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:11.356 03:20:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:11.356 03:20:45 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:11.356 03:20:45 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:11.356 03:20:45 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:11.356 03:20:45 -- common/autotest_common.sh@10 -- # set +x 00:20:13.262 03:20:47 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:13.262 03:20:47 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:13.262 03:20:47 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:13.262 03:20:47 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:13.262 03:20:47 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:13.262 03:20:47 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:13.262 03:20:47 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:13.262 03:20:47 -- nvmf/common.sh@295 -- # net_devs=() 00:20:13.262 03:20:47 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:13.262 03:20:47 -- nvmf/common.sh@296 -- # e810=() 00:20:13.262 03:20:47 -- nvmf/common.sh@296 -- # local -ga e810 00:20:13.262 03:20:47 -- nvmf/common.sh@297 -- # x722=() 00:20:13.262 03:20:47 -- nvmf/common.sh@297 -- # local -ga x722 00:20:13.262 03:20:47 -- nvmf/common.sh@298 -- # mlx=() 00:20:13.262 03:20:47 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:13.262 03:20:47 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:20:13.262 03:20:47 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:13.262 03:20:47 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:13.262 03:20:47 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:13.262 03:20:47 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:13.262 03:20:47 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:13.262 03:20:47 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:13.262 03:20:47 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:13.262 03:20:47 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:13.262 03:20:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:13.262 03:20:47 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:13.262 03:20:47 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:13.262 03:20:47 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:13.263 03:20:47 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:13.263 03:20:47 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:13.263 03:20:47 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:13.263 03:20:47 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:13.263 03:20:47 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:13.263 03:20:47 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:13.263 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:13.263 03:20:47 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:13.263 03:20:47 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:13.263 03:20:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:13.263 03:20:47 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:13.263 03:20:47 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:13.263 03:20:47 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 
00:20:13.263 03:20:47 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:13.263 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:13.263 03:20:47 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:13.263 03:20:47 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:13.263 03:20:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:13.263 03:20:47 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:13.263 03:20:47 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:13.263 03:20:47 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:13.263 03:20:47 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:13.263 03:20:47 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:13.263 03:20:47 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:13.263 03:20:47 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:13.263 03:20:47 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:13.263 03:20:47 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:13.263 03:20:47 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:13.263 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:13.263 03:20:47 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:13.263 03:20:47 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:13.263 03:20:47 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:13.263 03:20:47 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:13.263 03:20:47 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:13.263 03:20:47 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:13.263 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:13.263 03:20:47 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:13.263 03:20:47 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:13.263 03:20:47 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:13.263 03:20:47 -- 
nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:13.263 03:20:47 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:20:13.263 03:20:47 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:20:13.263 03:20:47 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:13.263 03:20:47 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:13.263 03:20:47 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:13.263 03:20:47 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:13.263 03:20:47 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:13.263 03:20:47 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:13.263 03:20:47 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:13.263 03:20:47 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:13.263 03:20:47 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:13.263 03:20:47 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:13.263 03:20:47 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:13.263 03:20:47 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:13.263 03:20:47 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:13.263 03:20:47 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:13.263 03:20:47 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:13.263 03:20:47 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:13.263 03:20:47 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:13.263 03:20:47 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:13.263 03:20:47 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:13.263 03:20:47 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:13.263 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:13.263 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.228 ms 00:20:13.263 00:20:13.263 --- 10.0.0.2 ping statistics --- 00:20:13.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:13.263 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:20:13.263 03:20:47 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:13.263 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:13.263 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:20:13.263 00:20:13.263 --- 10.0.0.1 ping statistics --- 00:20:13.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:13.263 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:20:13.263 03:20:47 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:13.263 03:20:47 -- nvmf/common.sh@411 -- # return 0 00:20:13.263 03:20:47 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:13.263 03:20:47 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:13.263 03:20:47 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:13.263 03:20:47 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:13.263 03:20:47 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:13.263 03:20:47 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:13.263 03:20:47 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:13.263 03:20:47 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:20:13.263 03:20:47 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:13.263 03:20:47 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:13.263 03:20:47 -- common/autotest_common.sh@10 -- # set +x 00:20:13.263 03:20:47 -- nvmf/common.sh@470 -- # nvmfpid=1536094 00:20:13.263 03:20:47 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:13.263 03:20:47 -- nvmf/common.sh@471 -- # waitforlisten 1536094 00:20:13.263 03:20:47 -- 
common/autotest_common.sh@817 -- # '[' -z 1536094 ']' 00:20:13.263 03:20:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:13.263 03:20:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:13.263 03:20:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:13.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:13.263 03:20:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:13.263 03:20:47 -- common/autotest_common.sh@10 -- # set +x 00:20:13.263 [2024-04-25 03:20:47.571871] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:20:13.263 [2024-04-25 03:20:47.571960] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:13.263 EAL: No free 2048 kB hugepages reported on node 1 00:20:13.263 [2024-04-25 03:20:47.642321] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:13.523 [2024-04-25 03:20:47.769302] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:13.524 [2024-04-25 03:20:47.769363] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:13.524 [2024-04-25 03:20:47.769387] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:13.524 [2024-04-25 03:20:47.769400] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:13.524 [2024-04-25 03:20:47.769427] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:13.524 [2024-04-25 03:20:47.769495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:13.524 [2024-04-25 03:20:47.769572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:13.524 [2024-04-25 03:20:47.769689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:13.524 [2024-04-25 03:20:47.769694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:14.092 03:20:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:14.092 03:20:48 -- common/autotest_common.sh@850 -- # return 0 00:20:14.092 03:20:48 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:14.092 03:20:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:14.092 03:20:48 -- common/autotest_common.sh@10 -- # set +x 00:20:14.351 03:20:48 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:14.351 03:20:48 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:20:14.351 03:20:48 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:14.351 03:20:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:14.351 03:20:48 -- common/autotest_common.sh@10 -- # set +x 00:20:14.351 Malloc0 00:20:14.351 03:20:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:14.351 03:20:48 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:20:14.351 03:20:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:14.351 03:20:48 -- common/autotest_common.sh@10 -- # set +x 00:20:14.351 Delay0 00:20:14.351 03:20:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:14.351 03:20:48 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:14.351 03:20:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:14.351 03:20:48 -- 
common/autotest_common.sh@10 -- # set +x 00:20:14.351 [2024-04-25 03:20:48.638809] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:14.351 03:20:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:14.351 03:20:48 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:20:14.351 03:20:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:14.351 03:20:48 -- common/autotest_common.sh@10 -- # set +x 00:20:14.351 03:20:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:14.351 03:20:48 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:14.351 03:20:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:14.351 03:20:48 -- common/autotest_common.sh@10 -- # set +x 00:20:14.351 03:20:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:14.351 03:20:48 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:14.351 03:20:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:14.351 03:20:48 -- common/autotest_common.sh@10 -- # set +x 00:20:14.351 [2024-04-25 03:20:48.667103] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:14.351 03:20:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:14.351 03:20:48 -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:20:14.916 03:20:49 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:20:14.916 03:20:49 -- common/autotest_common.sh@1184 -- # local i=0 00:20:14.916 03:20:49 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:20:14.916 03:20:49 -- 
common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:20:14.916 03:20:49 -- common/autotest_common.sh@1191 -- # sleep 2 00:20:16.821 03:20:51 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:20:16.821 03:20:51 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:20:16.821 03:20:51 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:20:16.821 03:20:51 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:20:16.821 03:20:51 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:20:16.821 03:20:51 -- common/autotest_common.sh@1194 -- # return 0 00:20:16.821 03:20:51 -- target/initiator_timeout.sh@35 -- # fio_pid=1536645 00:20:16.821 03:20:51 -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:20:16.821 03:20:51 -- target/initiator_timeout.sh@37 -- # sleep 3 00:20:16.821 [global] 00:20:16.821 thread=1 00:20:16.821 invalidate=1 00:20:16.821 rw=write 00:20:16.821 time_based=1 00:20:16.821 runtime=60 00:20:16.821 ioengine=libaio 00:20:16.821 direct=1 00:20:16.821 bs=4096 00:20:16.821 iodepth=1 00:20:16.821 norandommap=0 00:20:16.821 numjobs=1 00:20:16.821 00:20:16.821 verify_dump=1 00:20:16.821 verify_backlog=512 00:20:16.821 verify_state_save=0 00:20:16.821 do_verify=1 00:20:16.821 verify=crc32c-intel 00:20:16.821 [job0] 00:20:16.821 filename=/dev/nvme0n1 00:20:16.821 Could not set queue depth (nvme0n1) 00:20:17.078 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:17.078 fio-3.35 00:20:17.078 Starting 1 thread 00:20:20.356 03:20:54 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:20:20.356 03:20:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.356 03:20:54 -- common/autotest_common.sh@10 -- # set +x 00:20:20.356 true 00:20:20.356 03:20:54 -- common/autotest_common.sh@577 -- # [[ 
0 == 0 ]] 00:20:20.356 03:20:54 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:20:20.356 03:20:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.356 03:20:54 -- common/autotest_common.sh@10 -- # set +x 00:20:20.356 true 00:20:20.356 03:20:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.356 03:20:54 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:20:20.356 03:20:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.356 03:20:54 -- common/autotest_common.sh@10 -- # set +x 00:20:20.356 true 00:20:20.356 03:20:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.356 03:20:54 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:20:20.356 03:20:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.356 03:20:54 -- common/autotest_common.sh@10 -- # set +x 00:20:20.356 true 00:20:20.356 03:20:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.356 03:20:54 -- target/initiator_timeout.sh@45 -- # sleep 3 00:20:22.883 03:20:57 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:20:22.883 03:20:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:22.883 03:20:57 -- common/autotest_common.sh@10 -- # set +x 00:20:22.883 true 00:20:22.883 03:20:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:22.883 03:20:57 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:20:22.883 03:20:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:22.883 03:20:57 -- common/autotest_common.sh@10 -- # set +x 00:20:22.883 true 00:20:22.883 03:20:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:22.883 03:20:57 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:20:22.883 03:20:57 -- common/autotest_common.sh@549 
-- # xtrace_disable 00:20:22.883 03:20:57 -- common/autotest_common.sh@10 -- # set +x 00:20:22.883 true 00:20:22.883 03:20:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:22.883 03:20:57 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:20:22.883 03:20:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:22.883 03:20:57 -- common/autotest_common.sh@10 -- # set +x 00:20:22.883 true 00:20:22.883 03:20:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:22.883 03:20:57 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:20:22.883 03:20:57 -- target/initiator_timeout.sh@54 -- # wait 1536645 00:21:19.129 00:21:19.129 job0: (groupid=0, jobs=1): err= 0: pid=1536715: Thu Apr 25 03:21:51 2024 00:21:19.129 read: IOPS=157, BW=630KiB/s (645kB/s)(36.9MiB/60037msec) 00:21:19.129 slat (usec): min=5, max=36732, avg=23.72, stdev=382.89 00:21:19.129 clat (usec): min=387, max=41096k, avg=5924.13, stdev=422624.01 00:21:19.129 lat (usec): min=393, max=41096k, avg=5947.85, stdev=422624.09 00:21:19.129 clat percentiles (usec): 00:21:19.129 | 1.00th=[ 416], 5.00th=[ 478], 10.00th=[ 494], 00:21:19.129 | 20.00th=[ 506], 30.00th=[ 519], 40.00th=[ 537], 00:21:19.129 | 50.00th=[ 553], 60.00th=[ 611], 70.00th=[ 660], 00:21:19.129 | 80.00th=[ 693], 90.00th=[ 734], 95.00th=[ 775], 00:21:19.129 | 99.00th=[ 41157], 99.50th=[ 41157], 99.90th=[ 42206], 00:21:19.129 | 99.95th=[ 42206], 99.99th=[17112761] 00:21:19.129 write: IOPS=162, BW=648KiB/s (664kB/s)(38.0MiB/60037msec); 0 zone resets 00:21:19.129 slat (nsec): min=6907, max=79662, avg=21205.80, stdev=11736.15 00:21:19.129 clat (usec): min=247, max=2836, avg=357.25, stdev=72.79 00:21:19.129 lat (usec): min=255, max=2858, avg=378.46, stdev=79.02 00:21:19.129 clat percentiles (usec): 00:21:19.129 | 1.00th=[ 258], 5.00th=[ 265], 10.00th=[ 273], 20.00th=[ 289], 00:21:19.129 | 30.00th=[ 310], 40.00th=[ 326], 50.00th=[ 355], 60.00th=[ 375], 00:21:19.129 | 70.00th=[ 392], 
80.00th=[ 416], 90.00th=[ 461], 95.00th=[ 478], 00:21:19.129 | 99.00th=[ 498], 99.50th=[ 510], 99.90th=[ 529], 99.95th=[ 553], 00:21:19.129 | 99.99th=[ 2835] 00:21:19.129 bw ( KiB/s): min= 1392, max= 5800, per=100.00%, avg=4096.00, stdev=933.01, samples=19 00:21:19.129 iops : min= 348, max= 1450, avg=1024.00, stdev=233.25, samples=19 00:21:19.129 lat (usec) : 250=0.01%, 500=57.60%, 750=38.78%, 1000=2.37% 00:21:19.129 lat (msec) : 2=0.02%, 4=0.01%, 10=0.01%, 50=1.20%, >=2000=0.01% 00:21:19.129 cpu : usr=0.49%, sys=0.82%, ctx=19187, majf=0, minf=2 00:21:19.129 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:19.129 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:19.129 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:19.129 issued rwts: total=9457,9728,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:19.129 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:19.129 00:21:19.129 Run status group 0 (all jobs): 00:21:19.129 READ: bw=630KiB/s (645kB/s), 630KiB/s-630KiB/s (645kB/s-645kB/s), io=36.9MiB (38.7MB), run=60037-60037msec 00:21:19.129 WRITE: bw=648KiB/s (664kB/s), 648KiB/s-648KiB/s (664kB/s-664kB/s), io=38.0MiB (39.8MB), run=60037-60037msec 00:21:19.129 00:21:19.129 Disk stats (read/write): 00:21:19.129 nvme0n1: ios=9552/9728, merge=0/0, ticks=15930/3231, in_queue=19161, util=99.68% 00:21:19.129 03:21:51 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:19.129 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:19.129 03:21:51 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:21:19.129 03:21:51 -- common/autotest_common.sh@1205 -- # local i=0 00:21:19.129 03:21:51 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:21:19.129 03:21:51 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:19.129 03:21:51 -- common/autotest_common.sh@1213 -- # 
lsblk -l -o NAME,SERIAL 00:21:19.129 03:21:51 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:19.129 03:21:51 -- common/autotest_common.sh@1217 -- # return 0 00:21:19.129 03:21:51 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:21:19.129 03:21:51 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:21:19.129 nvmf hotplug test: fio successful as expected 00:21:19.129 03:21:51 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:19.129 03:21:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:19.129 03:21:51 -- common/autotest_common.sh@10 -- # set +x 00:21:19.129 03:21:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:19.129 03:21:51 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:21:19.129 03:21:51 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:21:19.129 03:21:51 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:21:19.129 03:21:51 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:19.129 03:21:51 -- nvmf/common.sh@117 -- # sync 00:21:19.129 03:21:51 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:19.129 03:21:51 -- nvmf/common.sh@120 -- # set +e 00:21:19.129 03:21:51 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:19.129 03:21:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:19.129 rmmod nvme_tcp 00:21:19.129 rmmod nvme_fabrics 00:21:19.129 rmmod nvme_keyring 00:21:19.129 03:21:51 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:19.129 03:21:51 -- nvmf/common.sh@124 -- # set -e 00:21:19.129 03:21:51 -- nvmf/common.sh@125 -- # return 0 00:21:19.129 03:21:51 -- nvmf/common.sh@478 -- # '[' -n 1536094 ']' 00:21:19.129 03:21:51 -- nvmf/common.sh@479 -- # killprocess 1536094 00:21:19.129 03:21:51 -- common/autotest_common.sh@936 -- # '[' -z 1536094 ']' 00:21:19.129 03:21:51 -- common/autotest_common.sh@940 -- # kill -0 1536094 00:21:19.129 
03:21:51 -- common/autotest_common.sh@941 -- # uname 00:21:19.129 03:21:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:19.129 03:21:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1536094 00:21:19.129 03:21:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:19.129 03:21:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:19.129 03:21:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1536094' 00:21:19.129 killing process with pid 1536094 00:21:19.129 03:21:51 -- common/autotest_common.sh@955 -- # kill 1536094 00:21:19.129 03:21:51 -- common/autotest_common.sh@960 -- # wait 1536094 00:21:19.129 03:21:52 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:19.129 03:21:52 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:19.129 03:21:52 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:19.129 03:21:52 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:19.129 03:21:52 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:19.129 03:21:52 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:19.129 03:21:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:19.129 03:21:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:20.064 03:21:54 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:20.064 00:21:20.064 real 1m8.847s 00:21:20.064 user 4m13.100s 00:21:20.064 sys 0m7.526s 00:21:20.064 03:21:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:20.064 03:21:54 -- common/autotest_common.sh@10 -- # set +x 00:21:20.064 ************************************ 00:21:20.064 END TEST nvmf_initiator_timeout 00:21:20.064 ************************************ 00:21:20.064 03:21:54 -- nvmf/nvmf.sh@70 -- # [[ phy == phy ]] 00:21:20.064 03:21:54 -- nvmf/nvmf.sh@71 -- # '[' tcp = tcp ']' 00:21:20.064 03:21:54 -- nvmf/nvmf.sh@72 -- # gather_supported_nvmf_pci_devs 00:21:20.064 03:21:54 -- 
nvmf/common.sh@285 -- # xtrace_disable 00:21:20.064 03:21:54 -- common/autotest_common.sh@10 -- # set +x 00:21:21.965 03:21:56 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:21.965 03:21:56 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:21.965 03:21:56 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:21.965 03:21:56 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:21.965 03:21:56 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:21.965 03:21:56 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:21.965 03:21:56 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:21.965 03:21:56 -- nvmf/common.sh@295 -- # net_devs=() 00:21:21.965 03:21:56 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:21.965 03:21:56 -- nvmf/common.sh@296 -- # e810=() 00:21:21.965 03:21:56 -- nvmf/common.sh@296 -- # local -ga e810 00:21:21.965 03:21:56 -- nvmf/common.sh@297 -- # x722=() 00:21:21.965 03:21:56 -- nvmf/common.sh@297 -- # local -ga x722 00:21:21.965 03:21:56 -- nvmf/common.sh@298 -- # mlx=() 00:21:21.965 03:21:56 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:21.965 03:21:56 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:21.965 03:21:56 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:21.965 03:21:56 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:21.965 03:21:56 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:21.965 03:21:56 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:21.965 03:21:56 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:21.965 03:21:56 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:21.965 03:21:56 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:21.965 03:21:56 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:21.965 03:21:56 -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:21.965 03:21:56 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:21.966 03:21:56 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:21.966 03:21:56 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:21.966 03:21:56 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:21.966 03:21:56 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:21.966 03:21:56 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:21.966 03:21:56 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:21.966 03:21:56 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:21.966 03:21:56 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:21.966 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:21.966 03:21:56 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:21.966 03:21:56 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:21.966 03:21:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:21.966 03:21:56 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:21.966 03:21:56 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:21.966 03:21:56 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:21.966 03:21:56 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:21.966 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:21.966 03:21:56 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:21.966 03:21:56 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:21.966 03:21:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:21.966 03:21:56 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:21.966 03:21:56 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:21.966 03:21:56 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:21.966 03:21:56 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:21.966 03:21:56 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:21.966 03:21:56 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:21:21.966 03:21:56 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:21.966 03:21:56 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:21.966 03:21:56 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:21.966 03:21:56 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:21.966 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:21.966 03:21:56 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:21.966 03:21:56 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:21.966 03:21:56 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:21.966 03:21:56 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:21.966 03:21:56 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:21.966 03:21:56 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:21.966 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:21.966 03:21:56 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:21.966 03:21:56 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:21.966 03:21:56 -- nvmf/nvmf.sh@73 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:21.966 03:21:56 -- nvmf/nvmf.sh@74 -- # (( 2 > 0 )) 00:21:21.966 03:21:56 -- nvmf/nvmf.sh@75 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:21.966 03:21:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:21.966 03:21:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:21.966 03:21:56 -- common/autotest_common.sh@10 -- # set +x 00:21:21.966 ************************************ 00:21:21.966 START TEST nvmf_perf_adq 00:21:21.966 ************************************ 00:21:21.966 03:21:56 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:21.966 * Looking for test storage... 
00:21:21.966 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:21.966 03:21:56 -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:21.966 03:21:56 -- nvmf/common.sh@7 -- # uname -s 00:21:21.966 03:21:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:21.966 03:21:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:21.966 03:21:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:21.966 03:21:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:21.966 03:21:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:21.966 03:21:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:21.966 03:21:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:21.966 03:21:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:21.966 03:21:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:21.966 03:21:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:21.966 03:21:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:21.966 03:21:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:21.966 03:21:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:21.966 03:21:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:21.966 03:21:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:21.966 03:21:56 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:21.966 03:21:56 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:21.966 03:21:56 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:21.966 03:21:56 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:21.966 03:21:56 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:21.966 03:21:56 -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.966 03:21:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.966 03:21:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.966 03:21:56 -- paths/export.sh@5 -- # export PATH 00:21:21.966 03:21:56 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.966 03:21:56 -- nvmf/common.sh@47 -- # : 0 00:21:21.966 03:21:56 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:21.966 03:21:56 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:21.966 03:21:56 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:21.966 03:21:56 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:21.966 03:21:56 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:21.966 03:21:56 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:21.966 03:21:56 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:21.966 03:21:56 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:21.966 03:21:56 -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:21:21.966 03:21:56 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:21.966 03:21:56 -- common/autotest_common.sh@10 -- # set +x 00:21:23.869 03:21:58 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:23.869 03:21:58 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:23.869 03:21:58 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:23.869 03:21:58 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:23.869 03:21:58 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:23.869 03:21:58 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:23.869 03:21:58 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:23.869 03:21:58 -- nvmf/common.sh@295 -- # net_devs=() 00:21:23.869 03:21:58 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:23.869 03:21:58 
-- nvmf/common.sh@296 -- # e810=() 00:21:23.869 03:21:58 -- nvmf/common.sh@296 -- # local -ga e810 00:21:23.869 03:21:58 -- nvmf/common.sh@297 -- # x722=() 00:21:23.869 03:21:58 -- nvmf/common.sh@297 -- # local -ga x722 00:21:23.869 03:21:58 -- nvmf/common.sh@298 -- # mlx=() 00:21:23.869 03:21:58 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:23.869 03:21:58 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:23.869 03:21:58 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:23.869 03:21:58 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:23.869 03:21:58 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:23.869 03:21:58 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:23.869 03:21:58 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:23.869 03:21:58 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:23.869 03:21:58 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:23.869 03:21:58 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:23.869 03:21:58 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:23.869 03:21:58 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:23.869 03:21:58 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:23.869 03:21:58 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:23.869 03:21:58 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:23.869 03:21:58 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:23.869 03:21:58 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:23.869 03:21:58 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:23.869 03:21:58 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:23.869 03:21:58 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:23.869 Found 0000:0a:00.0 (0x8086 - 0x159b) 
00:21:23.869 03:21:58 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:23.869 03:21:58 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:23.869 03:21:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:23.869 03:21:58 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:23.869 03:21:58 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:23.869 03:21:58 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:23.869 03:21:58 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:23.869 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:23.869 03:21:58 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:23.869 03:21:58 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:23.869 03:21:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:23.869 03:21:58 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:23.869 03:21:58 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:23.869 03:21:58 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:23.869 03:21:58 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:23.869 03:21:58 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:23.869 03:21:58 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:23.869 03:21:58 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:23.870 03:21:58 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:23.870 03:21:58 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:23.870 03:21:58 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:23.870 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:23.870 03:21:58 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:23.870 03:21:58 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:23.870 03:21:58 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:23.870 03:21:58 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:23.870 03:21:58 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:23.870 03:21:58 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:23.870 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:23.870 03:21:58 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:23.870 03:21:58 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:23.870 03:21:58 -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:23.870 03:21:58 -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:21:23.870 03:21:58 -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:23.870 03:21:58 -- target/perf_adq.sh@59 -- # adq_reload_driver 00:21:23.870 03:21:58 -- target/perf_adq.sh@52 -- # rmmod ice 00:21:24.437 03:21:58 -- target/perf_adq.sh@53 -- # modprobe ice 00:21:26.337 03:22:00 -- target/perf_adq.sh@54 -- # sleep 5 00:21:31.610 03:22:05 -- target/perf_adq.sh@67 -- # nvmftestinit 00:21:31.610 03:22:05 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:31.610 03:22:05 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:31.610 03:22:05 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:31.610 03:22:05 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:31.610 03:22:05 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:31.610 03:22:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:31.610 03:22:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:31.610 03:22:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:31.610 03:22:05 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:31.610 03:22:05 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:31.610 03:22:05 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:31.610 03:22:05 -- common/autotest_common.sh@10 -- # set +x 00:21:31.610 03:22:05 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:31.610 03:22:05 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:31.610 
03:22:05 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:31.610 03:22:05 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:31.610 03:22:05 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:31.610 03:22:05 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:31.610 03:22:05 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:31.610 03:22:05 -- nvmf/common.sh@295 -- # net_devs=() 00:21:31.610 03:22:05 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:31.610 03:22:05 -- nvmf/common.sh@296 -- # e810=() 00:21:31.610 03:22:05 -- nvmf/common.sh@296 -- # local -ga e810 00:21:31.610 03:22:05 -- nvmf/common.sh@297 -- # x722=() 00:21:31.610 03:22:05 -- nvmf/common.sh@297 -- # local -ga x722 00:21:31.610 03:22:05 -- nvmf/common.sh@298 -- # mlx=() 00:21:31.610 03:22:05 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:31.610 03:22:05 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:31.610 03:22:05 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:31.610 03:22:05 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:31.610 03:22:05 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:31.610 03:22:05 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:31.610 03:22:05 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:31.610 03:22:05 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:31.610 03:22:05 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:31.610 03:22:05 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:31.610 03:22:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:31.610 03:22:05 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:31.610 03:22:05 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:31.610 03:22:05 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:31.610 03:22:05 -- 
nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:31.610 03:22:05 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:31.610 03:22:05 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:31.610 03:22:05 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:31.610 03:22:05 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:31.610 03:22:05 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:31.610 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:31.610 03:22:05 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:31.610 03:22:05 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:31.611 03:22:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:31.611 03:22:05 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:31.611 03:22:05 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:31.611 03:22:05 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:31.611 03:22:05 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:31.611 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:31.611 03:22:05 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:31.611 03:22:05 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:31.611 03:22:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:31.611 03:22:05 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:31.611 03:22:05 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:31.611 03:22:05 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:31.611 03:22:05 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:31.611 03:22:05 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:31.611 03:22:05 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:31.611 03:22:05 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:31.611 03:22:05 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:31.611 03:22:05 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:31.611 03:22:05 -- nvmf/common.sh@389 -- # echo 'Found net 
devices under 0000:0a:00.0: cvl_0_0' 00:21:31.611 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:31.611 03:22:05 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:31.611 03:22:05 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:31.611 03:22:05 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:31.611 03:22:05 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:31.611 03:22:05 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:31.611 03:22:05 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:31.611 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:31.611 03:22:05 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:31.611 03:22:05 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:31.611 03:22:05 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:31.611 03:22:05 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:31.611 03:22:05 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:31.611 03:22:05 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:31.611 03:22:05 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:31.611 03:22:05 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:31.611 03:22:05 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:31.611 03:22:05 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:31.611 03:22:05 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:31.611 03:22:05 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:31.611 03:22:05 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:31.611 03:22:05 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:31.611 03:22:05 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:31.611 03:22:05 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:31.611 03:22:05 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:31.611 03:22:05 -- nvmf/common.sh@248 -- # ip 
netns add cvl_0_0_ns_spdk 00:21:31.611 03:22:05 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:31.611 03:22:05 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:31.611 03:22:05 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:31.611 03:22:05 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:31.611 03:22:05 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:31.611 03:22:05 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:31.611 03:22:05 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:31.611 03:22:05 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:31.611 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:31.611 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:21:31.611 00:21:31.611 --- 10.0.0.2 ping statistics --- 00:21:31.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:31.611 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:21:31.611 03:22:05 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:31.611 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:31.611 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:21:31.611 00:21:31.611 --- 10.0.0.1 ping statistics --- 00:21:31.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:31.611 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:21:31.611 03:22:05 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:31.611 03:22:05 -- nvmf/common.sh@411 -- # return 0 00:21:31.611 03:22:05 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:31.611 03:22:05 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:31.611 03:22:05 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:31.611 03:22:05 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:31.611 03:22:05 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:31.611 03:22:05 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:31.611 03:22:05 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:31.611 03:22:05 -- target/perf_adq.sh@68 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:31.611 03:22:05 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:31.611 03:22:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:31.611 03:22:05 -- common/autotest_common.sh@10 -- # set +x 00:21:31.611 03:22:05 -- nvmf/common.sh@470 -- # nvmfpid=1548869 00:21:31.611 03:22:05 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:31.611 03:22:05 -- nvmf/common.sh@471 -- # waitforlisten 1548869 00:21:31.611 03:22:05 -- common/autotest_common.sh@817 -- # '[' -z 1548869 ']' 00:21:31.611 03:22:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:31.611 03:22:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:31.611 03:22:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:21:31.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:31.611 03:22:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:31.611 03:22:05 -- common/autotest_common.sh@10 -- # set +x 00:21:31.611 [2024-04-25 03:22:06.023387] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:21:31.611 [2024-04-25 03:22:06.023459] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:31.611 EAL: No free 2048 kB hugepages reported on node 1 00:21:31.611 [2024-04-25 03:22:06.093223] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:31.871 [2024-04-25 03:22:06.215051] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:31.871 [2024-04-25 03:22:06.215115] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:31.871 [2024-04-25 03:22:06.215129] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:31.871 [2024-04-25 03:22:06.215140] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:31.871 [2024-04-25 03:22:06.215164] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:31.871 [2024-04-25 03:22:06.218656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:31.871 [2024-04-25 03:22:06.218709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:31.871 [2024-04-25 03:22:06.218804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:31.871 [2024-04-25 03:22:06.218808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:31.871 03:22:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:31.871 03:22:06 -- common/autotest_common.sh@850 -- # return 0 00:21:31.871 03:22:06 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:31.871 03:22:06 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:31.871 03:22:06 -- common/autotest_common.sh@10 -- # set +x 00:21:31.871 03:22:06 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:31.871 03:22:06 -- target/perf_adq.sh@69 -- # adq_configure_nvmf_target 0 00:21:31.871 03:22:06 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:21:31.871 03:22:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:31.871 03:22:06 -- common/autotest_common.sh@10 -- # set +x 00:21:31.871 03:22:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:31.871 03:22:06 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:21:31.871 03:22:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:31.871 03:22:06 -- common/autotest_common.sh@10 -- # set +x 00:21:32.129 03:22:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:32.129 03:22:06 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:21:32.129 03:22:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:32.129 03:22:06 -- common/autotest_common.sh@10 -- # set +x 00:21:32.129 [2024-04-25 03:22:06.418569] tcp.c: 669:nvmf_tcp_create: *NOTICE*: 
*** TCP Transport Init *** 00:21:32.129 03:22:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:32.129 03:22:06 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:32.129 03:22:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:32.129 03:22:06 -- common/autotest_common.sh@10 -- # set +x 00:21:32.129 Malloc1 00:21:32.129 03:22:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:32.129 03:22:06 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:32.129 03:22:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:32.129 03:22:06 -- common/autotest_common.sh@10 -- # set +x 00:21:32.129 03:22:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:32.129 03:22:06 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:32.129 03:22:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:32.129 03:22:06 -- common/autotest_common.sh@10 -- # set +x 00:21:32.129 03:22:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:32.129 03:22:06 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:32.129 03:22:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:32.129 03:22:06 -- common/autotest_common.sh@10 -- # set +x 00:21:32.129 [2024-04-25 03:22:06.471722] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:32.129 03:22:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:32.129 03:22:06 -- target/perf_adq.sh@73 -- # perfpid=1548902 00:21:32.129 03:22:06 -- target/perf_adq.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:32.129 03:22:06 -- target/perf_adq.sh@74 -- # sleep 2 
00:21:32.129 EAL: No free 2048 kB hugepages reported on node 1 00:21:34.030 03:22:08 -- target/perf_adq.sh@76 -- # rpc_cmd nvmf_get_stats 00:21:34.030 03:22:08 -- target/perf_adq.sh@76 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:21:34.030 03:22:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:34.030 03:22:08 -- target/perf_adq.sh@76 -- # wc -l 00:21:34.030 03:22:08 -- common/autotest_common.sh@10 -- # set +x 00:21:34.030 03:22:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:34.030 03:22:08 -- target/perf_adq.sh@76 -- # count=4 00:21:34.030 03:22:08 -- target/perf_adq.sh@77 -- # [[ 4 -ne 4 ]] 00:21:34.030 03:22:08 -- target/perf_adq.sh@81 -- # wait 1548902 00:21:42.170 Initializing NVMe Controllers 00:21:42.170 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:42.170 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:42.170 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:42.170 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:42.170 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:42.170 Initialization complete. Launching workers. 
00:21:42.170 ======================================================== 00:21:42.170 Latency(us) 00:21:42.170 Device Information : IOPS MiB/s Average min max 00:21:42.170 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10509.38 41.05 6090.36 2064.22 9835.87 00:21:42.170 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10417.48 40.69 6144.27 1099.86 9642.34 00:21:42.170 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10471.08 40.90 6111.65 2150.71 9832.80 00:21:42.170 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7332.55 28.64 8729.77 2079.20 14155.91 00:21:42.170 ======================================================== 00:21:42.170 Total : 38730.49 151.29 6610.31 1099.86 14155.91 00:21:42.170 00:21:42.170 03:22:16 -- target/perf_adq.sh@82 -- # nvmftestfini 00:21:42.170 03:22:16 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:42.170 03:22:16 -- nvmf/common.sh@117 -- # sync 00:21:42.170 03:22:16 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:42.170 03:22:16 -- nvmf/common.sh@120 -- # set +e 00:21:42.170 03:22:16 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:42.170 03:22:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:42.170 rmmod nvme_tcp 00:21:42.170 rmmod nvme_fabrics 00:21:42.170 rmmod nvme_keyring 00:21:42.170 03:22:16 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:42.170 03:22:16 -- nvmf/common.sh@124 -- # set -e 00:21:42.170 03:22:16 -- nvmf/common.sh@125 -- # return 0 00:21:42.428 03:22:16 -- nvmf/common.sh@478 -- # '[' -n 1548869 ']' 00:21:42.428 03:22:16 -- nvmf/common.sh@479 -- # killprocess 1548869 00:21:42.428 03:22:16 -- common/autotest_common.sh@936 -- # '[' -z 1548869 ']' 00:21:42.428 03:22:16 -- common/autotest_common.sh@940 -- # kill -0 1548869 00:21:42.428 03:22:16 -- common/autotest_common.sh@941 -- # uname 00:21:42.428 03:22:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:42.428 03:22:16 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1548869 00:21:42.428 03:22:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:42.428 03:22:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:42.428 03:22:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1548869' 00:21:42.428 killing process with pid 1548869 00:21:42.428 03:22:16 -- common/autotest_common.sh@955 -- # kill 1548869 00:21:42.428 03:22:16 -- common/autotest_common.sh@960 -- # wait 1548869 00:21:42.687 03:22:16 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:42.687 03:22:16 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:42.687 03:22:16 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:42.687 03:22:16 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:42.687 03:22:16 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:42.687 03:22:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:42.687 03:22:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:42.687 03:22:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:44.589 03:22:19 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:44.589 03:22:19 -- target/perf_adq.sh@84 -- # adq_reload_driver 00:21:44.589 03:22:19 -- target/perf_adq.sh@52 -- # rmmod ice 00:21:45.523 03:22:19 -- target/perf_adq.sh@53 -- # modprobe ice 00:21:47.427 03:22:21 -- target/perf_adq.sh@54 -- # sleep 5 00:21:52.696 03:22:26 -- target/perf_adq.sh@87 -- # nvmftestinit 00:21:52.696 03:22:26 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:52.696 03:22:26 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:52.696 03:22:26 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:52.696 03:22:26 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:52.697 03:22:26 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:52.697 03:22:26 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:52.697 
03:22:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:52.697 03:22:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:52.697 03:22:26 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:52.697 03:22:26 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:52.697 03:22:26 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:52.697 03:22:26 -- common/autotest_common.sh@10 -- # set +x 00:21:52.697 03:22:26 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:52.697 03:22:26 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:52.697 03:22:26 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:52.697 03:22:26 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:52.697 03:22:26 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:52.697 03:22:26 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:52.697 03:22:26 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:52.697 03:22:26 -- nvmf/common.sh@295 -- # net_devs=() 00:21:52.697 03:22:26 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:52.697 03:22:26 -- nvmf/common.sh@296 -- # e810=() 00:21:52.697 03:22:26 -- nvmf/common.sh@296 -- # local -ga e810 00:21:52.697 03:22:26 -- nvmf/common.sh@297 -- # x722=() 00:21:52.697 03:22:26 -- nvmf/common.sh@297 -- # local -ga x722 00:21:52.697 03:22:26 -- nvmf/common.sh@298 -- # mlx=() 00:21:52.697 03:22:26 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:52.697 03:22:26 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:52.697 03:22:26 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:52.697 03:22:26 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:52.697 03:22:26 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:52.697 03:22:26 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:52.697 03:22:26 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:52.697 03:22:26 -- 
nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:52.697 03:22:26 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:52.697 03:22:26 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:52.697 03:22:26 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:52.697 03:22:26 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:52.697 03:22:26 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:52.697 03:22:26 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:52.697 03:22:26 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:52.697 03:22:26 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:52.697 03:22:26 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:52.697 03:22:26 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:52.697 03:22:26 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:52.697 03:22:26 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:52.697 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:52.697 03:22:26 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:52.697 03:22:26 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:52.697 03:22:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:52.697 03:22:26 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:52.697 03:22:26 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:52.697 03:22:26 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:52.697 03:22:26 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:52.697 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:52.697 03:22:26 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:52.697 03:22:26 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:52.697 03:22:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:52.697 03:22:26 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:52.697 03:22:26 -- nvmf/common.sh@352 -- # 
[[ tcp == rdma ]] 00:21:52.697 03:22:26 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:52.697 03:22:26 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:52.697 03:22:26 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:52.697 03:22:26 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:52.697 03:22:26 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:52.697 03:22:26 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:52.697 03:22:26 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:52.697 03:22:26 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:52.697 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:52.697 03:22:26 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:52.697 03:22:26 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:52.697 03:22:26 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:52.697 03:22:26 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:52.697 03:22:26 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:52.697 03:22:26 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:52.697 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:52.697 03:22:26 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:52.697 03:22:26 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:52.697 03:22:26 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:52.697 03:22:26 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:52.697 03:22:26 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:52.697 03:22:26 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:52.697 03:22:26 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:52.697 03:22:26 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:52.697 03:22:26 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:52.697 03:22:26 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:52.697 03:22:26 -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:52.697 03:22:26 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:52.697 03:22:26 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:52.697 03:22:26 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:52.697 03:22:26 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:52.697 03:22:26 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:52.697 03:22:26 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:52.697 03:22:26 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:52.697 03:22:26 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:52.697 03:22:26 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:52.697 03:22:26 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:52.697 03:22:26 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:52.697 03:22:26 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:52.697 03:22:26 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:52.697 03:22:26 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:52.697 03:22:26 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:52.697 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:52.697 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:21:52.697 00:21:52.697 --- 10.0.0.2 ping statistics --- 00:21:52.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:52.697 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:21:52.697 03:22:26 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:52.697 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:52.697 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:21:52.697 00:21:52.697 --- 10.0.0.1 ping statistics --- 00:21:52.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:52.697 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:21:52.697 03:22:26 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:52.697 03:22:26 -- nvmf/common.sh@411 -- # return 0 00:21:52.697 03:22:26 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:52.697 03:22:26 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:52.697 03:22:26 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:52.697 03:22:26 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:52.697 03:22:26 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:52.697 03:22:26 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:52.697 03:22:26 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:52.697 03:22:26 -- target/perf_adq.sh@88 -- # adq_configure_driver 00:21:52.697 03:22:26 -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:21:52.697 03:22:26 -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:21:52.697 03:22:26 -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:21:52.697 net.core.busy_poll = 1 00:21:52.697 03:22:26 -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:21:52.697 net.core.busy_read = 1 00:21:52.697 03:22:26 -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:21:52.697 03:22:26 -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:21:52.697 03:22:26 -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:21:52.697 03:22:26 -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev 
cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:21:52.697 03:22:26 -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:21:52.697 03:22:26 -- target/perf_adq.sh@89 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:52.697 03:22:26 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:52.697 03:22:26 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:52.697 03:22:26 -- common/autotest_common.sh@10 -- # set +x 00:21:52.697 03:22:26 -- nvmf/common.sh@470 -- # nvmfpid=1551522 00:21:52.697 03:22:26 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:52.697 03:22:26 -- nvmf/common.sh@471 -- # waitforlisten 1551522 00:21:52.697 03:22:26 -- common/autotest_common.sh@817 -- # '[' -z 1551522 ']' 00:21:52.697 03:22:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:52.697 03:22:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:52.697 03:22:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:52.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:52.697 03:22:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:52.697 03:22:26 -- common/autotest_common.sh@10 -- # set +x 00:21:52.697 [2024-04-25 03:22:27.026975] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 
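The adq_configure_driver steps above create two traffic classes with `tc qdisc add ... mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel`, then steer NVMe/TCP traffic (dst_port 4420) into TC1 with a hardware flower filter. Each `COUNT@OFFSET` token assigns COUNT consecutive queues starting at OFFSET to one traffic class; a small decoder makes the mapping explicit:

```shell
# Decode the mqprio "count@offset" queue spec from the tc command in the
# log. TC1 here is the class the flower filter (dst_port 4420, hw_tc 1)
# pins NVMe/TCP connections to.
decode_mqprio() {
    local tc=0 spec
    for spec in "$@"; do
        local count=${spec%@*} offset=${spec#*@}
        echo "TC$tc -> queues $offset..$((offset + count - 1))"
        tc=$((tc + 1))
    done
}
decode_mqprio 2@0 2@2
```

So queues 0-1 carry default traffic and queues 2-3 carry the ADQ-steered NVMe/TCP traffic, which `net.core.busy_poll`/`busy_read` then service with busy polling.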
00:21:52.697 [2024-04-25 03:22:27.027054] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:52.697 EAL: No free 2048 kB hugepages reported on node 1 00:21:52.698 [2024-04-25 03:22:27.091751] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:52.957 [2024-04-25 03:22:27.202122] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:52.957 [2024-04-25 03:22:27.202175] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:52.957 [2024-04-25 03:22:27.202189] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:52.957 [2024-04-25 03:22:27.202200] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:52.957 [2024-04-25 03:22:27.202211] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:52.957 [2024-04-25 03:22:27.202266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:52.957 [2024-04-25 03:22:27.202327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:52.957 [2024-04-25 03:22:27.202372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:52.957 [2024-04-25 03:22:27.202375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:52.957 03:22:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:52.957 03:22:27 -- common/autotest_common.sh@850 -- # return 0 00:21:52.957 03:22:27 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:52.957 03:22:27 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:52.957 03:22:27 -- common/autotest_common.sh@10 -- # set +x 00:21:52.957 03:22:27 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:52.957 03:22:27 -- target/perf_adq.sh@90 -- # adq_configure_nvmf_target 1 00:21:52.957 03:22:27 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:21:52.957 03:22:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:52.957 03:22:27 -- common/autotest_common.sh@10 -- # set +x 00:21:52.957 03:22:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:52.957 03:22:27 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:21:52.957 03:22:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:52.957 03:22:27 -- common/autotest_common.sh@10 -- # set +x 00:21:52.957 03:22:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:52.957 03:22:27 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:21:52.957 03:22:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:52.957 03:22:27 -- common/autotest_common.sh@10 -- # set +x 00:21:52.957 [2024-04-25 03:22:27.381543] tcp.c: 669:nvmf_tcp_create: *NOTICE*: 
*** TCP Transport Init *** 00:21:52.957 03:22:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:52.957 03:22:27 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:52.957 03:22:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:52.957 03:22:27 -- common/autotest_common.sh@10 -- # set +x 00:21:52.957 Malloc1 00:21:52.957 03:22:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:52.957 03:22:27 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:52.957 03:22:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:52.957 03:22:27 -- common/autotest_common.sh@10 -- # set +x 00:21:52.957 03:22:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:52.957 03:22:27 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:52.957 03:22:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:52.957 03:22:27 -- common/autotest_common.sh@10 -- # set +x 00:21:52.957 03:22:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:52.957 03:22:27 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:52.957 03:22:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:52.957 03:22:27 -- common/autotest_common.sh@10 -- # set +x 00:21:52.957 [2024-04-25 03:22:27.434697] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:52.957 03:22:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:52.957 03:22:27 -- target/perf_adq.sh@94 -- # perfpid=1551659 00:21:52.957 03:22:27 -- target/perf_adq.sh@95 -- # sleep 2 00:21:52.957 03:22:27 -- target/perf_adq.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 
00:21:53.216 EAL: No free 2048 kB hugepages reported on node 1 00:21:55.119 03:22:29 -- target/perf_adq.sh@97 -- # rpc_cmd nvmf_get_stats 00:21:55.119 03:22:29 -- target/perf_adq.sh@97 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:21:55.119 03:22:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:55.119 03:22:29 -- target/perf_adq.sh@97 -- # wc -l 00:21:55.119 03:22:29 -- common/autotest_common.sh@10 -- # set +x 00:21:55.119 03:22:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:55.119 03:22:29 -- target/perf_adq.sh@97 -- # count=2 00:21:55.119 03:22:29 -- target/perf_adq.sh@98 -- # [[ 2 -lt 2 ]] 00:21:55.119 03:22:29 -- target/perf_adq.sh@103 -- # wait 1551659 00:22:03.228 Initializing NVMe Controllers 00:22:03.228 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:03.228 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:03.228 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:03.228 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:03.228 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:03.228 Initialization complete. Launching workers. 
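The perf_adq.sh@97-98 check above pipes `rpc_cmd nvmf_get_stats` through jq to count poll groups with `current_io_qpairs == 0`, sleeping until at least 2 groups are idle, i.e. the filter has confined connections to the ADQ queues. A root-free sketch of just the counting logic, using a hypothetical list of per-poll-group qpair counts in place of the real RPC output:

```shell
# Sketch of the ADQ readiness check: count poll groups with zero active
# io_qpairs. The input counts here are made-up stand-ins for the
# .poll_groups[].current_io_qpairs values jq extracts in the log.
count_idle_groups() {
    local count=0 qpairs
    for qpairs in "$@"; do
        if [ "$qpairs" -eq 0 ]; then
            count=$((count + 1))
        fi
    done
    echo "$count"
}
count=$(count_idle_groups 0 0 3 5)   # 4 poll groups, 2 idle
if [ "$count" -lt 2 ]; then echo "not ready"; else echo "ready"; fi
```

The `[[ 2 -lt 2 ]]` test in the log failing is the success path: two idle poll groups means the workload landed where the filter sent it.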
00:22:03.228 ======================================================== 00:22:03.228 Latency(us) 00:22:03.228 Device Information : IOPS MiB/s Average min max 00:22:03.228 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4844.00 18.92 13218.43 2081.39 60794.25 00:22:03.228 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4367.10 17.06 14656.58 1505.66 60207.07 00:22:03.228 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 12990.20 50.74 4926.68 1812.37 7621.74 00:22:03.228 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4601.40 17.97 13915.95 2042.53 59308.39 00:22:03.228 ======================================================== 00:22:03.228 Total : 26802.70 104.70 9553.82 1505.66 60794.25 00:22:03.228 00:22:03.228 03:22:37 -- target/perf_adq.sh@104 -- # nvmftestfini 00:22:03.228 03:22:37 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:03.228 03:22:37 -- nvmf/common.sh@117 -- # sync 00:22:03.228 03:22:37 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:03.228 03:22:37 -- nvmf/common.sh@120 -- # set +e 00:22:03.228 03:22:37 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:03.228 03:22:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:03.228 rmmod nvme_tcp 00:22:03.228 rmmod nvme_fabrics 00:22:03.228 rmmod nvme_keyring 00:22:03.228 03:22:37 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:03.228 03:22:37 -- nvmf/common.sh@124 -- # set -e 00:22:03.228 03:22:37 -- nvmf/common.sh@125 -- # return 0 00:22:03.228 03:22:37 -- nvmf/common.sh@478 -- # '[' -n 1551522 ']' 00:22:03.228 03:22:37 -- nvmf/common.sh@479 -- # killprocess 1551522 00:22:03.228 03:22:37 -- common/autotest_common.sh@936 -- # '[' -z 1551522 ']' 00:22:03.228 03:22:37 -- common/autotest_common.sh@940 -- # kill -0 1551522 00:22:03.228 03:22:37 -- common/autotest_common.sh@941 -- # uname 00:22:03.228 03:22:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:03.228 03:22:37 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1551522 00:22:03.228 03:22:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:03.228 03:22:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:03.228 03:22:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1551522' 00:22:03.228 killing process with pid 1551522 00:22:03.228 03:22:37 -- common/autotest_common.sh@955 -- # kill 1551522 00:22:03.228 03:22:37 -- common/autotest_common.sh@960 -- # wait 1551522 00:22:03.487 03:22:37 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:03.487 03:22:37 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:03.487 03:22:37 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:03.487 03:22:37 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:03.487 03:22:37 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:03.487 03:22:37 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:03.487 03:22:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:03.487 03:22:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:06.066 03:22:40 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:06.066 03:22:40 -- target/perf_adq.sh@106 -- # trap - SIGINT SIGTERM EXIT 00:22:06.066 00:22:06.066 real 0m43.702s 00:22:06.066 user 2m27.454s 00:22:06.066 sys 0m13.913s 00:22:06.066 03:22:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:06.066 03:22:40 -- common/autotest_common.sh@10 -- # set +x 00:22:06.066 ************************************ 00:22:06.066 END TEST nvmf_perf_adq 00:22:06.066 ************************************ 00:22:06.066 03:22:40 -- nvmf/nvmf.sh@81 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:06.066 03:22:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:06.066 03:22:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 
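The per-core totals in the spdk_nvme_perf latency table above can be cross-checked: total IOPS is the plain sum over cores 4-7, and the overall average latency is the IOPS-weighted mean, which is why the three slower cores (~13-15 ms) dominate it despite core 6 running at ~4.9 ms. A quick awk verification with the values copied from the table:

```shell
# Cross-check the Total row of the perf table: sum of per-core IOPS and
# IOPS-weighted mean latency (values copied from the log's table).
summary=$(awk 'BEGIN {
    iops[4] = 4844.00;  lat[4] = 13218.43
    iops[5] = 4367.10;  lat[5] = 14656.58
    iops[6] = 12990.20; lat[6] = 4926.68
    iops[7] = 4601.40;  lat[7] = 13915.95
    for (c in iops) { total += iops[c]; wsum += iops[c] * lat[c] }
    printf "total_iops=%.2f avg_lat_us=%.2f", total, wsum / total
}')
echo "$summary"
```

Both reproduce the table's Total row: 26802.70 IOPS and 9553.82 us average.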
00:22:06.066 03:22:40 -- common/autotest_common.sh@10 -- # set +x 00:22:06.066 ************************************ 00:22:06.066 START TEST nvmf_shutdown 00:22:06.066 ************************************ 00:22:06.066 03:22:40 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:06.066 * Looking for test storage... 00:22:06.066 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:06.066 03:22:40 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:06.066 03:22:40 -- nvmf/common.sh@7 -- # uname -s 00:22:06.066 03:22:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:06.066 03:22:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:06.066 03:22:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:06.066 03:22:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:06.066 03:22:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:06.066 03:22:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:06.066 03:22:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:06.066 03:22:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:06.066 03:22:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:06.066 03:22:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:06.066 03:22:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:06.066 03:22:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:06.066 03:22:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:06.066 03:22:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:06.066 03:22:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:06.066 03:22:40 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:06.066 03:22:40 -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:06.066 03:22:40 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:06.066 03:22:40 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:06.066 03:22:40 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:06.066 03:22:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.066 03:22:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.066 03:22:40 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.066 03:22:40 -- paths/export.sh@5 -- # export PATH 00:22:06.066 03:22:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.066 03:22:40 -- nvmf/common.sh@47 -- # : 0 00:22:06.066 03:22:40 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:06.066 03:22:40 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:06.066 03:22:40 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:06.066 03:22:40 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:06.066 03:22:40 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:06.066 03:22:40 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:06.066 03:22:40 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:06.066 03:22:40 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:06.066 03:22:40 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:06.066 03:22:40 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:06.066 03:22:40 -- target/shutdown.sh@147 
-- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:22:06.066 03:22:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:22:06.066 03:22:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:06.066 03:22:40 -- common/autotest_common.sh@10 -- # set +x 00:22:06.066 ************************************ 00:22:06.066 START TEST nvmf_shutdown_tc1 00:22:06.066 ************************************ 00:22:06.066 03:22:40 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc1 00:22:06.066 03:22:40 -- target/shutdown.sh@74 -- # starttarget 00:22:06.066 03:22:40 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:06.066 03:22:40 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:06.066 03:22:40 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:06.066 03:22:40 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:06.066 03:22:40 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:06.066 03:22:40 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:06.067 03:22:40 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:06.067 03:22:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:06.067 03:22:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:06.067 03:22:40 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:22:06.067 03:22:40 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:22:06.067 03:22:40 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:06.067 03:22:40 -- common/autotest_common.sh@10 -- # set +x 00:22:07.970 03:22:42 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:07.970 03:22:42 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:07.970 03:22:42 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:07.970 03:22:42 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:07.970 03:22:42 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:07.970 03:22:42 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:07.970 03:22:42 -- nvmf/common.sh@293 -- # local -A pci_drivers 
00:22:07.970 03:22:42 -- nvmf/common.sh@295 -- # net_devs=() 00:22:07.970 03:22:42 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:07.970 03:22:42 -- nvmf/common.sh@296 -- # e810=() 00:22:07.970 03:22:42 -- nvmf/common.sh@296 -- # local -ga e810 00:22:07.970 03:22:42 -- nvmf/common.sh@297 -- # x722=() 00:22:07.970 03:22:42 -- nvmf/common.sh@297 -- # local -ga x722 00:22:07.970 03:22:42 -- nvmf/common.sh@298 -- # mlx=() 00:22:07.970 03:22:42 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:07.970 03:22:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:07.970 03:22:42 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:07.970 03:22:42 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:07.970 03:22:42 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:07.970 03:22:42 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:07.970 03:22:42 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:07.970 03:22:42 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:07.970 03:22:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:07.970 03:22:42 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:07.970 03:22:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:07.970 03:22:42 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:07.970 03:22:42 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:07.970 03:22:42 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:07.970 03:22:42 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:07.970 03:22:42 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:07.970 03:22:42 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:07.970 03:22:42 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:07.970 03:22:42 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 
00:22:07.970 03:22:42 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:07.970 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:07.970 03:22:42 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:07.970 03:22:42 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:07.970 03:22:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:07.970 03:22:42 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:07.970 03:22:42 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:07.970 03:22:42 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:07.970 03:22:42 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:07.970 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:07.970 03:22:42 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:07.970 03:22:42 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:07.970 03:22:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:07.970 03:22:42 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:07.970 03:22:42 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:07.970 03:22:42 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:07.970 03:22:42 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:07.970 03:22:42 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:07.970 03:22:42 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:07.970 03:22:42 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:07.970 03:22:42 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:07.970 03:22:42 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:07.970 03:22:42 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:07.970 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:07.970 03:22:42 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:07.970 03:22:42 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:07.970 03:22:42 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:07.970 03:22:42 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:07.970 03:22:42 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:07.970 03:22:42 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:07.970 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:07.970 03:22:42 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:07.970 03:22:42 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:22:07.970 03:22:42 -- nvmf/common.sh@403 -- # is_hw=yes 00:22:07.970 03:22:42 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:22:07.970 03:22:42 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:22:07.970 03:22:42 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:22:07.970 03:22:42 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:07.970 03:22:42 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:07.970 03:22:42 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:07.970 03:22:42 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:07.970 03:22:42 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:07.970 03:22:42 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:07.970 03:22:42 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:07.970 03:22:42 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:07.970 03:22:42 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:07.970 03:22:42 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:07.970 03:22:42 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:07.970 03:22:42 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:07.970 03:22:42 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:07.970 03:22:42 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:07.970 03:22:42 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:22:07.970 03:22:42 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:07.970 03:22:42 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:07.970 03:22:42 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:07.970 03:22:42 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:07.970 03:22:42 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:07.970 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:07.970 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:22:07.970 00:22:07.970 --- 10.0.0.2 ping statistics --- 00:22:07.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:07.970 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:22:07.970 03:22:42 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:07.970 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:07.970 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:22:07.970 00:22:07.970 --- 10.0.0.1 ping statistics --- 00:22:07.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:07.970 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:22:07.970 03:22:42 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:07.970 03:22:42 -- nvmf/common.sh@411 -- # return 0 00:22:07.970 03:22:42 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:07.970 03:22:42 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:07.970 03:22:42 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:07.970 03:22:42 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:07.970 03:22:42 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:07.970 03:22:42 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:07.970 03:22:42 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:07.970 03:22:42 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:22:07.971 03:22:42 -- nvmf/common.sh@468 -- # 
timing_enter start_nvmf_tgt 00:22:07.971 03:22:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:07.971 03:22:42 -- common/autotest_common.sh@10 -- # set +x 00:22:07.971 03:22:42 -- nvmf/common.sh@470 -- # nvmfpid=1554838 00:22:07.971 03:22:42 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:07.971 03:22:42 -- nvmf/common.sh@471 -- # waitforlisten 1554838 00:22:07.971 03:22:42 -- common/autotest_common.sh@817 -- # '[' -z 1554838 ']' 00:22:07.971 03:22:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:07.971 03:22:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:07.971 03:22:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:07.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:07.971 03:22:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:07.971 03:22:42 -- common/autotest_common.sh@10 -- # set +x 00:22:07.971 [2024-04-25 03:22:42.400080] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:22:07.971 [2024-04-25 03:22:42.400155] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:07.971 EAL: No free 2048 kB hugepages reported on node 1 00:22:07.971 [2024-04-25 03:22:42.467692] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:08.229 [2024-04-25 03:22:42.573601] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:08.229 [2024-04-25 03:22:42.573660] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
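The `waitforlisten 1554838` call above polls until the freshly launched nvmf_tgt is up and listening on /var/tmp/spdk.sock, bounded by `max_retries=100`. A minimal root-free sketch of that idiom, polling for a path with a retry budget (demonstrated against a temp file rather than the real RPC socket):

```shell
# Sketch of the waitforlisten retry loop: poll until a path exists
# (stand-in for the target's /var/tmp/spdk.sock RPC socket), giving up
# after max_retries attempts.
wait_for_path() {
    local path=$1 max_retries=${2:-100} i=0
    while [ "$i" -lt "$max_retries" ]; do
        if [ -e "$path" ]; then
            return 0
        fi
        i=$((i + 1))
        sleep 0.1
    done
    return 1
}
tmp=$(mktemp)                 # stand-in for the RPC socket path
if wait_for_path "$tmp" 5; then echo "listening"; fi
rm -f "$tmp"
```

The real helper additionally confirms the pid is still alive between retries, so a crashed target fails fast instead of burning the full retry budget.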
00:22:08.229 [2024-04-25 03:22:42.573683] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:08.229 [2024-04-25 03:22:42.573695] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:08.229 [2024-04-25 03:22:42.573705] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:08.229 [2024-04-25 03:22:42.575651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:08.229 [2024-04-25 03:22:42.575722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:08.229 [2024-04-25 03:22:42.575773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:22:08.229 [2024-04-25 03:22:42.575777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:09.162 03:22:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:09.162 03:22:43 -- common/autotest_common.sh@850 -- # return 0 00:22:09.162 03:22:43 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:09.162 03:22:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:09.162 03:22:43 -- common/autotest_common.sh@10 -- # set +x 00:22:09.162 03:22:43 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:09.162 03:22:43 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:09.162 03:22:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:09.162 03:22:43 -- common/autotest_common.sh@10 -- # set +x 00:22:09.162 [2024-04-25 03:22:43.355528] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:09.162 03:22:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:09.162 03:22:43 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:22:09.162 03:22:43 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:22:09.162 03:22:43 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:09.162 03:22:43 -- 
common/autotest_common.sh@10 -- # set +x 00:22:09.162 03:22:43 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:09.162 03:22:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:09.162 03:22:43 -- target/shutdown.sh@28 -- # cat 00:22:09.162 03:22:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:09.162 03:22:43 -- target/shutdown.sh@28 -- # cat 00:22:09.162 03:22:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:09.162 03:22:43 -- target/shutdown.sh@28 -- # cat 00:22:09.162 03:22:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:09.162 03:22:43 -- target/shutdown.sh@28 -- # cat 00:22:09.162 03:22:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:09.162 03:22:43 -- target/shutdown.sh@28 -- # cat 00:22:09.162 03:22:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:09.162 03:22:43 -- target/shutdown.sh@28 -- # cat 00:22:09.162 03:22:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:09.162 03:22:43 -- target/shutdown.sh@28 -- # cat 00:22:09.162 03:22:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:09.162 03:22:43 -- target/shutdown.sh@28 -- # cat 00:22:09.162 03:22:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:09.162 03:22:43 -- target/shutdown.sh@28 -- # cat 00:22:09.162 03:22:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:09.162 03:22:43 -- target/shutdown.sh@28 -- # cat 00:22:09.162 03:22:43 -- target/shutdown.sh@35 -- # rpc_cmd 00:22:09.162 03:22:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:09.162 03:22:43 -- common/autotest_common.sh@10 -- # set +x 00:22:09.162 Malloc1 00:22:09.162 [2024-04-25 03:22:43.445377] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:09.162 Malloc2 00:22:09.162 Malloc3 00:22:09.162 Malloc4 
00:22:09.162 Malloc5 00:22:09.421 Malloc6 00:22:09.421 Malloc7 00:22:09.421 Malloc8 00:22:09.421 Malloc9 00:22:09.421 Malloc10 00:22:09.421 03:22:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:09.421 03:22:43 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:22:09.421 03:22:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:09.421 03:22:43 -- common/autotest_common.sh@10 -- # set +x 00:22:09.680 03:22:43 -- target/shutdown.sh@78 -- # perfpid=1555029 00:22:09.680 03:22:43 -- target/shutdown.sh@79 -- # waitforlisten 1555029 /var/tmp/bdevperf.sock 00:22:09.680 03:22:43 -- common/autotest_common.sh@817 -- # '[' -z 1555029 ']' 00:22:09.680 03:22:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:09.680 03:22:43 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:22:09.680 03:22:43 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:09.680 03:22:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:09.680 03:22:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:09.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:09.680 03:22:43 -- nvmf/common.sh@521 -- # config=() 00:22:09.680 03:22:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:09.680 03:22:43 -- nvmf/common.sh@521 -- # local subsystem config 00:22:09.680 03:22:43 -- common/autotest_common.sh@10 -- # set +x 00:22:09.680 03:22:43 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:09.680 03:22:43 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:09.680 { 00:22:09.680 "params": { 00:22:09.680 "name": "Nvme$subsystem", 00:22:09.680 "trtype": "$TEST_TRANSPORT", 00:22:09.680 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:09.680 "adrfam": "ipv4", 00:22:09.680 "trsvcid": "$NVMF_PORT", 00:22:09.680 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:09.680 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:09.680 "hdgst": ${hdgst:-false}, 00:22:09.680 "ddgst": ${ddgst:-false} 00:22:09.680 }, 00:22:09.680 "method": "bdev_nvme_attach_controller" 00:22:09.680 } 00:22:09.680 EOF 00:22:09.680 )") 00:22:09.680 03:22:43 -- nvmf/common.sh@543 -- # cat 00:22:09.680 03:22:43 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:09.680 03:22:43 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:09.680 { 00:22:09.680 "params": { 00:22:09.680 "name": "Nvme$subsystem", 00:22:09.680 "trtype": "$TEST_TRANSPORT", 00:22:09.680 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:09.680 "adrfam": "ipv4", 00:22:09.680 "trsvcid": "$NVMF_PORT", 00:22:09.680 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:09.680 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:09.680 "hdgst": ${hdgst:-false}, 00:22:09.680 "ddgst": ${ddgst:-false} 00:22:09.680 }, 00:22:09.680 "method": "bdev_nvme_attach_controller" 00:22:09.680 } 00:22:09.680 EOF 00:22:09.680 )") 00:22:09.680 03:22:43 -- nvmf/common.sh@543 -- # cat 00:22:09.680 03:22:43 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:09.680 03:22:43 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:09.680 { 00:22:09.680 "params": { 00:22:09.680 "name": 
"Nvme$subsystem", 00:22:09.680 "trtype": "$TEST_TRANSPORT", 00:22:09.680 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:09.680 "adrfam": "ipv4", 00:22:09.680 "trsvcid": "$NVMF_PORT", 00:22:09.680 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:09.680 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:09.680 "hdgst": ${hdgst:-false}, 00:22:09.680 "ddgst": ${ddgst:-false} 00:22:09.680 }, 00:22:09.680 "method": "bdev_nvme_attach_controller" 00:22:09.680 } 00:22:09.680 EOF 00:22:09.680 )") 00:22:09.680 03:22:43 -- nvmf/common.sh@543 -- # cat 00:22:09.680 03:22:43 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:09.680 03:22:43 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:09.680 { 00:22:09.680 "params": { 00:22:09.680 "name": "Nvme$subsystem", 00:22:09.680 "trtype": "$TEST_TRANSPORT", 00:22:09.680 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:09.680 "adrfam": "ipv4", 00:22:09.680 "trsvcid": "$NVMF_PORT", 00:22:09.680 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:09.680 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:09.680 "hdgst": ${hdgst:-false}, 00:22:09.680 "ddgst": ${ddgst:-false} 00:22:09.680 }, 00:22:09.680 "method": "bdev_nvme_attach_controller" 00:22:09.680 } 00:22:09.680 EOF 00:22:09.680 )") 00:22:09.680 03:22:43 -- nvmf/common.sh@543 -- # cat 00:22:09.680 03:22:43 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:09.680 03:22:43 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:09.680 { 00:22:09.680 "params": { 00:22:09.680 "name": "Nvme$subsystem", 00:22:09.680 "trtype": "$TEST_TRANSPORT", 00:22:09.680 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:09.680 "adrfam": "ipv4", 00:22:09.680 "trsvcid": "$NVMF_PORT", 00:22:09.680 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:09.680 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:09.680 "hdgst": ${hdgst:-false}, 00:22:09.680 "ddgst": ${ddgst:-false} 00:22:09.680 }, 00:22:09.680 "method": "bdev_nvme_attach_controller" 00:22:09.680 } 00:22:09.680 EOF 
00:22:09.680 )") 00:22:09.680 03:22:43 -- nvmf/common.sh@543 -- # cat 00:22:09.680 03:22:43 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:09.680 03:22:43 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:09.680 { 00:22:09.680 "params": { 00:22:09.680 "name": "Nvme$subsystem", 00:22:09.680 "trtype": "$TEST_TRANSPORT", 00:22:09.680 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:09.680 "adrfam": "ipv4", 00:22:09.680 "trsvcid": "$NVMF_PORT", 00:22:09.680 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:09.680 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:09.680 "hdgst": ${hdgst:-false}, 00:22:09.680 "ddgst": ${ddgst:-false} 00:22:09.680 }, 00:22:09.680 "method": "bdev_nvme_attach_controller" 00:22:09.680 } 00:22:09.680 EOF 00:22:09.680 )") 00:22:09.680 03:22:43 -- nvmf/common.sh@543 -- # cat 00:22:09.680 03:22:43 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:09.680 03:22:43 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:09.680 { 00:22:09.680 "params": { 00:22:09.680 "name": "Nvme$subsystem", 00:22:09.680 "trtype": "$TEST_TRANSPORT", 00:22:09.680 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:09.680 "adrfam": "ipv4", 00:22:09.680 "trsvcid": "$NVMF_PORT", 00:22:09.680 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:09.680 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:09.680 "hdgst": ${hdgst:-false}, 00:22:09.680 "ddgst": ${ddgst:-false} 00:22:09.680 }, 00:22:09.680 "method": "bdev_nvme_attach_controller" 00:22:09.680 } 00:22:09.680 EOF 00:22:09.680 )") 00:22:09.680 03:22:43 -- nvmf/common.sh@543 -- # cat 00:22:09.680 03:22:43 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:09.680 03:22:43 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:09.680 { 00:22:09.680 "params": { 00:22:09.680 "name": "Nvme$subsystem", 00:22:09.680 "trtype": "$TEST_TRANSPORT", 00:22:09.680 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:09.680 "adrfam": "ipv4", 00:22:09.680 "trsvcid": "$NVMF_PORT", 00:22:09.680 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:22:09.680 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:09.680 "hdgst": ${hdgst:-false}, 00:22:09.680 "ddgst": ${ddgst:-false} 00:22:09.680 }, 00:22:09.680 "method": "bdev_nvme_attach_controller" 00:22:09.680 } 00:22:09.680 EOF 00:22:09.680 )") 00:22:09.680 03:22:43 -- nvmf/common.sh@543 -- # cat 00:22:09.680 03:22:43 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:09.680 03:22:43 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:09.680 { 00:22:09.680 "params": { 00:22:09.680 "name": "Nvme$subsystem", 00:22:09.680 "trtype": "$TEST_TRANSPORT", 00:22:09.680 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:09.680 "adrfam": "ipv4", 00:22:09.680 "trsvcid": "$NVMF_PORT", 00:22:09.680 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:09.680 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:09.680 "hdgst": ${hdgst:-false}, 00:22:09.680 "ddgst": ${ddgst:-false} 00:22:09.680 }, 00:22:09.680 "method": "bdev_nvme_attach_controller" 00:22:09.680 } 00:22:09.680 EOF 00:22:09.680 )") 00:22:09.680 03:22:43 -- nvmf/common.sh@543 -- # cat 00:22:09.680 03:22:43 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:09.680 03:22:43 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:09.680 { 00:22:09.680 "params": { 00:22:09.680 "name": "Nvme$subsystem", 00:22:09.680 "trtype": "$TEST_TRANSPORT", 00:22:09.680 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:09.680 "adrfam": "ipv4", 00:22:09.680 "trsvcid": "$NVMF_PORT", 00:22:09.680 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:09.680 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:09.680 "hdgst": ${hdgst:-false}, 00:22:09.680 "ddgst": ${ddgst:-false} 00:22:09.680 }, 00:22:09.680 "method": "bdev_nvme_attach_controller" 00:22:09.680 } 00:22:09.680 EOF 00:22:09.680 )") 00:22:09.680 03:22:43 -- nvmf/common.sh@543 -- # cat 00:22:09.680 03:22:43 -- nvmf/common.sh@545 -- # jq . 
00:22:09.680 03:22:43 -- nvmf/common.sh@546 -- # IFS=, 00:22:09.680 03:22:43 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:22:09.680 "params": { 00:22:09.680 "name": "Nvme1", 00:22:09.680 "trtype": "tcp", 00:22:09.680 "traddr": "10.0.0.2", 00:22:09.680 "adrfam": "ipv4", 00:22:09.680 "trsvcid": "4420", 00:22:09.680 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:09.680 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:09.680 "hdgst": false, 00:22:09.681 "ddgst": false 00:22:09.681 }, 00:22:09.681 "method": "bdev_nvme_attach_controller" 00:22:09.681 },{ 00:22:09.681 "params": { 00:22:09.681 "name": "Nvme2", 00:22:09.681 "trtype": "tcp", 00:22:09.681 "traddr": "10.0.0.2", 00:22:09.681 "adrfam": "ipv4", 00:22:09.681 "trsvcid": "4420", 00:22:09.681 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:09.681 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:09.681 "hdgst": false, 00:22:09.681 "ddgst": false 00:22:09.681 }, 00:22:09.681 "method": "bdev_nvme_attach_controller" 00:22:09.681 },{ 00:22:09.681 "params": { 00:22:09.681 "name": "Nvme3", 00:22:09.681 "trtype": "tcp", 00:22:09.681 "traddr": "10.0.0.2", 00:22:09.681 "adrfam": "ipv4", 00:22:09.681 "trsvcid": "4420", 00:22:09.681 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:09.681 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:09.681 "hdgst": false, 00:22:09.681 "ddgst": false 00:22:09.681 }, 00:22:09.681 "method": "bdev_nvme_attach_controller" 00:22:09.681 },{ 00:22:09.681 "params": { 00:22:09.681 "name": "Nvme4", 00:22:09.681 "trtype": "tcp", 00:22:09.681 "traddr": "10.0.0.2", 00:22:09.681 "adrfam": "ipv4", 00:22:09.681 "trsvcid": "4420", 00:22:09.681 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:09.681 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:09.681 "hdgst": false, 00:22:09.681 "ddgst": false 00:22:09.681 }, 00:22:09.681 "method": "bdev_nvme_attach_controller" 00:22:09.681 },{ 00:22:09.681 "params": { 00:22:09.681 "name": "Nvme5", 00:22:09.681 "trtype": "tcp", 00:22:09.681 "traddr": "10.0.0.2", 00:22:09.681 "adrfam": "ipv4", 
00:22:09.681 "trsvcid": "4420", 00:22:09.681 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:09.681 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:09.681 "hdgst": false, 00:22:09.681 "ddgst": false 00:22:09.681 }, 00:22:09.681 "method": "bdev_nvme_attach_controller" 00:22:09.681 },{ 00:22:09.681 "params": { 00:22:09.681 "name": "Nvme6", 00:22:09.681 "trtype": "tcp", 00:22:09.681 "traddr": "10.0.0.2", 00:22:09.681 "adrfam": "ipv4", 00:22:09.681 "trsvcid": "4420", 00:22:09.681 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:09.681 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:09.681 "hdgst": false, 00:22:09.681 "ddgst": false 00:22:09.681 }, 00:22:09.681 "method": "bdev_nvme_attach_controller" 00:22:09.681 },{ 00:22:09.681 "params": { 00:22:09.681 "name": "Nvme7", 00:22:09.681 "trtype": "tcp", 00:22:09.681 "traddr": "10.0.0.2", 00:22:09.681 "adrfam": "ipv4", 00:22:09.681 "trsvcid": "4420", 00:22:09.681 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:09.681 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:09.681 "hdgst": false, 00:22:09.681 "ddgst": false 00:22:09.681 }, 00:22:09.681 "method": "bdev_nvme_attach_controller" 00:22:09.681 },{ 00:22:09.681 "params": { 00:22:09.681 "name": "Nvme8", 00:22:09.681 "trtype": "tcp", 00:22:09.681 "traddr": "10.0.0.2", 00:22:09.681 "adrfam": "ipv4", 00:22:09.681 "trsvcid": "4420", 00:22:09.681 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:09.681 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:09.681 "hdgst": false, 00:22:09.681 "ddgst": false 00:22:09.681 }, 00:22:09.681 "method": "bdev_nvme_attach_controller" 00:22:09.681 },{ 00:22:09.681 "params": { 00:22:09.681 "name": "Nvme9", 00:22:09.681 "trtype": "tcp", 00:22:09.681 "traddr": "10.0.0.2", 00:22:09.681 "adrfam": "ipv4", 00:22:09.681 "trsvcid": "4420", 00:22:09.681 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:09.681 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:09.681 "hdgst": false, 00:22:09.681 "ddgst": false 00:22:09.681 }, 00:22:09.681 "method": "bdev_nvme_attach_controller" 
00:22:09.681 },{ 00:22:09.681 "params": { 00:22:09.681 "name": "Nvme10", 00:22:09.681 "trtype": "tcp", 00:22:09.681 "traddr": "10.0.0.2", 00:22:09.681 "adrfam": "ipv4", 00:22:09.681 "trsvcid": "4420", 00:22:09.681 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:09.681 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:09.681 "hdgst": false, 00:22:09.681 "ddgst": false 00:22:09.681 }, 00:22:09.681 "method": "bdev_nvme_attach_controller" 00:22:09.681 }' 00:22:09.681 [2024-04-25 03:22:43.960499] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:22:09.681 [2024-04-25 03:22:43.960587] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:22:09.681 EAL: No free 2048 kB hugepages reported on node 1 00:22:09.681 [2024-04-25 03:22:44.025368] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:09.681 [2024-04-25 03:22:44.135848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:11.580 03:22:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:11.580 03:22:45 -- common/autotest_common.sh@850 -- # return 0 00:22:11.580 03:22:45 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:11.580 03:22:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:11.580 03:22:45 -- common/autotest_common.sh@10 -- # set +x 00:22:11.580 03:22:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:11.580 03:22:45 -- target/shutdown.sh@83 -- # kill -9 1555029 00:22:11.580 03:22:45 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:22:11.580 03:22:45 -- target/shutdown.sh@87 -- # sleep 1 00:22:12.514 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1555029 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json 
"${num_subsystems[@]}") 00:22:12.514 03:22:46 -- target/shutdown.sh@88 -- # kill -0 1554838 00:22:12.514 03:22:46 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:22:12.514 03:22:46 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:12.514 03:22:46 -- nvmf/common.sh@521 -- # config=() 00:22:12.514 03:22:46 -- nvmf/common.sh@521 -- # local subsystem config 00:22:12.514 03:22:46 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:12.514 03:22:46 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:12.514 { 00:22:12.514 "params": { 00:22:12.514 "name": "Nvme$subsystem", 00:22:12.514 "trtype": "$TEST_TRANSPORT", 00:22:12.514 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.514 "adrfam": "ipv4", 00:22:12.514 "trsvcid": "$NVMF_PORT", 00:22:12.514 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.514 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.514 "hdgst": ${hdgst:-false}, 00:22:12.514 "ddgst": ${ddgst:-false} 00:22:12.514 }, 00:22:12.514 "method": "bdev_nvme_attach_controller" 00:22:12.514 } 00:22:12.514 EOF 00:22:12.514 )") 00:22:12.514 03:22:46 -- nvmf/common.sh@543 -- # cat 00:22:12.514 03:22:46 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:12.514 03:22:46 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:12.514 { 00:22:12.514 "params": { 00:22:12.514 "name": "Nvme$subsystem", 00:22:12.514 "trtype": "$TEST_TRANSPORT", 00:22:12.514 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.514 "adrfam": "ipv4", 00:22:12.514 "trsvcid": "$NVMF_PORT", 00:22:12.514 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.514 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.514 "hdgst": ${hdgst:-false}, 00:22:12.514 "ddgst": ${ddgst:-false} 00:22:12.514 }, 00:22:12.514 "method": "bdev_nvme_attach_controller" 00:22:12.514 } 00:22:12.514 EOF 00:22:12.514 )") 00:22:12.514 03:22:46 -- nvmf/common.sh@543 
-- # cat 00:22:12.514 03:22:46 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:12.514 03:22:46 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:12.514 { 00:22:12.514 "params": { 00:22:12.514 "name": "Nvme$subsystem", 00:22:12.514 "trtype": "$TEST_TRANSPORT", 00:22:12.514 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.514 "adrfam": "ipv4", 00:22:12.514 "trsvcid": "$NVMF_PORT", 00:22:12.514 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.514 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.514 "hdgst": ${hdgst:-false}, 00:22:12.514 "ddgst": ${ddgst:-false} 00:22:12.514 }, 00:22:12.514 "method": "bdev_nvme_attach_controller" 00:22:12.514 } 00:22:12.514 EOF 00:22:12.514 )") 00:22:12.514 03:22:46 -- nvmf/common.sh@543 -- # cat 00:22:12.514 03:22:46 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:12.514 03:22:46 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:12.514 { 00:22:12.514 "params": { 00:22:12.514 "name": "Nvme$subsystem", 00:22:12.514 "trtype": "$TEST_TRANSPORT", 00:22:12.514 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.514 "adrfam": "ipv4", 00:22:12.514 "trsvcid": "$NVMF_PORT", 00:22:12.514 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.514 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.514 "hdgst": ${hdgst:-false}, 00:22:12.514 "ddgst": ${ddgst:-false} 00:22:12.514 }, 00:22:12.514 "method": "bdev_nvme_attach_controller" 00:22:12.514 } 00:22:12.514 EOF 00:22:12.514 )") 00:22:12.514 03:22:46 -- nvmf/common.sh@543 -- # cat 00:22:12.514 03:22:46 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:12.514 03:22:46 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:12.514 { 00:22:12.514 "params": { 00:22:12.514 "name": "Nvme$subsystem", 00:22:12.514 "trtype": "$TEST_TRANSPORT", 00:22:12.514 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.514 "adrfam": "ipv4", 00:22:12.514 "trsvcid": "$NVMF_PORT", 00:22:12.514 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.514 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:22:12.514 "hdgst": ${hdgst:-false}, 00:22:12.514 "ddgst": ${ddgst:-false} 00:22:12.514 }, 00:22:12.514 "method": "bdev_nvme_attach_controller" 00:22:12.514 } 00:22:12.514 EOF 00:22:12.514 )") 00:22:12.514 03:22:46 -- nvmf/common.sh@543 -- # cat 00:22:12.514 03:22:46 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:12.514 03:22:46 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:12.514 { 00:22:12.514 "params": { 00:22:12.514 "name": "Nvme$subsystem", 00:22:12.514 "trtype": "$TEST_TRANSPORT", 00:22:12.514 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.514 "adrfam": "ipv4", 00:22:12.514 "trsvcid": "$NVMF_PORT", 00:22:12.514 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.514 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.514 "hdgst": ${hdgst:-false}, 00:22:12.514 "ddgst": ${ddgst:-false} 00:22:12.514 }, 00:22:12.514 "method": "bdev_nvme_attach_controller" 00:22:12.514 } 00:22:12.514 EOF 00:22:12.514 )") 00:22:12.514 03:22:46 -- nvmf/common.sh@543 -- # cat 00:22:12.514 03:22:46 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:12.514 03:22:46 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:12.514 { 00:22:12.514 "params": { 00:22:12.514 "name": "Nvme$subsystem", 00:22:12.514 "trtype": "$TEST_TRANSPORT", 00:22:12.514 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.514 "adrfam": "ipv4", 00:22:12.514 "trsvcid": "$NVMF_PORT", 00:22:12.514 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.514 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.514 "hdgst": ${hdgst:-false}, 00:22:12.514 "ddgst": ${ddgst:-false} 00:22:12.514 }, 00:22:12.514 "method": "bdev_nvme_attach_controller" 00:22:12.514 } 00:22:12.514 EOF 00:22:12.514 )") 00:22:12.514 03:22:46 -- nvmf/common.sh@543 -- # cat 00:22:12.514 03:22:46 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:12.514 03:22:46 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:12.514 { 00:22:12.514 "params": { 
00:22:12.514 "name": "Nvme$subsystem", 00:22:12.514 "trtype": "$TEST_TRANSPORT", 00:22:12.514 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.514 "adrfam": "ipv4", 00:22:12.514 "trsvcid": "$NVMF_PORT", 00:22:12.514 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.514 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.514 "hdgst": ${hdgst:-false}, 00:22:12.514 "ddgst": ${ddgst:-false} 00:22:12.514 }, 00:22:12.514 "method": "bdev_nvme_attach_controller" 00:22:12.514 } 00:22:12.514 EOF 00:22:12.514 )") 00:22:12.514 03:22:46 -- nvmf/common.sh@543 -- # cat 00:22:12.514 03:22:46 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:12.514 03:22:46 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:12.514 { 00:22:12.514 "params": { 00:22:12.514 "name": "Nvme$subsystem", 00:22:12.514 "trtype": "$TEST_TRANSPORT", 00:22:12.514 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.514 "adrfam": "ipv4", 00:22:12.514 "trsvcid": "$NVMF_PORT", 00:22:12.514 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.514 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.514 "hdgst": ${hdgst:-false}, 00:22:12.514 "ddgst": ${ddgst:-false} 00:22:12.515 }, 00:22:12.515 "method": "bdev_nvme_attach_controller" 00:22:12.515 } 00:22:12.515 EOF 00:22:12.515 )") 00:22:12.515 03:22:46 -- nvmf/common.sh@543 -- # cat 00:22:12.515 03:22:46 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:12.515 03:22:46 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:12.515 { 00:22:12.515 "params": { 00:22:12.515 "name": "Nvme$subsystem", 00:22:12.515 "trtype": "$TEST_TRANSPORT", 00:22:12.515 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.515 "adrfam": "ipv4", 00:22:12.515 "trsvcid": "$NVMF_PORT", 00:22:12.515 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.515 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.515 "hdgst": ${hdgst:-false}, 00:22:12.515 "ddgst": ${ddgst:-false} 00:22:12.515 }, 00:22:12.515 "method": "bdev_nvme_attach_controller" 00:22:12.515 } 
00:22:12.515 EOF 00:22:12.515 )") 00:22:12.515 03:22:46 -- nvmf/common.sh@543 -- # cat 00:22:12.515 03:22:46 -- nvmf/common.sh@545 -- # jq . 00:22:12.515 03:22:46 -- nvmf/common.sh@546 -- # IFS=, 00:22:12.515 03:22:46 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:22:12.515 "params": { 00:22:12.515 "name": "Nvme1", 00:22:12.515 "trtype": "tcp", 00:22:12.515 "traddr": "10.0.0.2", 00:22:12.515 "adrfam": "ipv4", 00:22:12.515 "trsvcid": "4420", 00:22:12.515 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:12.515 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:12.515 "hdgst": false, 00:22:12.515 "ddgst": false 00:22:12.515 }, 00:22:12.515 "method": "bdev_nvme_attach_controller" 00:22:12.515 },{ 00:22:12.515 "params": { 00:22:12.515 "name": "Nvme2", 00:22:12.515 "trtype": "tcp", 00:22:12.515 "traddr": "10.0.0.2", 00:22:12.515 "adrfam": "ipv4", 00:22:12.515 "trsvcid": "4420", 00:22:12.515 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:12.515 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:12.515 "hdgst": false, 00:22:12.515 "ddgst": false 00:22:12.515 }, 00:22:12.515 "method": "bdev_nvme_attach_controller" 00:22:12.515 },{ 00:22:12.515 "params": { 00:22:12.515 "name": "Nvme3", 00:22:12.515 "trtype": "tcp", 00:22:12.515 "traddr": "10.0.0.2", 00:22:12.515 "adrfam": "ipv4", 00:22:12.515 "trsvcid": "4420", 00:22:12.515 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:12.515 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:12.515 "hdgst": false, 00:22:12.515 "ddgst": false 00:22:12.515 }, 00:22:12.515 "method": "bdev_nvme_attach_controller" 00:22:12.515 },{ 00:22:12.515 "params": { 00:22:12.515 "name": "Nvme4", 00:22:12.515 "trtype": "tcp", 00:22:12.515 "traddr": "10.0.0.2", 00:22:12.515 "adrfam": "ipv4", 00:22:12.515 "trsvcid": "4420", 00:22:12.515 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:12.515 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:12.515 "hdgst": false, 00:22:12.515 "ddgst": false 00:22:12.515 }, 00:22:12.515 "method": "bdev_nvme_attach_controller" 00:22:12.515 },{ 
00:22:12.515 "params": { 00:22:12.515 "name": "Nvme5", 00:22:12.515 "trtype": "tcp", 00:22:12.515 "traddr": "10.0.0.2", 00:22:12.515 "adrfam": "ipv4", 00:22:12.515 "trsvcid": "4420", 00:22:12.515 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:12.515 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:12.515 "hdgst": false, 00:22:12.515 "ddgst": false 00:22:12.515 }, 00:22:12.515 "method": "bdev_nvme_attach_controller" 00:22:12.515 },{ 00:22:12.515 "params": { 00:22:12.515 "name": "Nvme6", 00:22:12.515 "trtype": "tcp", 00:22:12.515 "traddr": "10.0.0.2", 00:22:12.515 "adrfam": "ipv4", 00:22:12.515 "trsvcid": "4420", 00:22:12.515 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:12.515 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:12.515 "hdgst": false, 00:22:12.515 "ddgst": false 00:22:12.515 }, 00:22:12.515 "method": "bdev_nvme_attach_controller" 00:22:12.515 },{ 00:22:12.515 "params": { 00:22:12.515 "name": "Nvme7", 00:22:12.515 "trtype": "tcp", 00:22:12.515 "traddr": "10.0.0.2", 00:22:12.515 "adrfam": "ipv4", 00:22:12.515 "trsvcid": "4420", 00:22:12.515 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:12.515 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:12.515 "hdgst": false, 00:22:12.515 "ddgst": false 00:22:12.515 }, 00:22:12.515 "method": "bdev_nvme_attach_controller" 00:22:12.515 },{ 00:22:12.515 "params": { 00:22:12.515 "name": "Nvme8", 00:22:12.515 "trtype": "tcp", 00:22:12.515 "traddr": "10.0.0.2", 00:22:12.515 "adrfam": "ipv4", 00:22:12.515 "trsvcid": "4420", 00:22:12.515 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:12.515 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:12.515 "hdgst": false, 00:22:12.515 "ddgst": false 00:22:12.515 }, 00:22:12.515 "method": "bdev_nvme_attach_controller" 00:22:12.515 },{ 00:22:12.515 "params": { 00:22:12.515 "name": "Nvme9", 00:22:12.515 "trtype": "tcp", 00:22:12.515 "traddr": "10.0.0.2", 00:22:12.515 "adrfam": "ipv4", 00:22:12.515 "trsvcid": "4420", 00:22:12.515 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:12.515 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:22:12.515 "hdgst": false, 00:22:12.515 "ddgst": false 00:22:12.515 }, 00:22:12.515 "method": "bdev_nvme_attach_controller" 00:22:12.515 },{ 00:22:12.515 "params": { 00:22:12.515 "name": "Nvme10", 00:22:12.515 "trtype": "tcp", 00:22:12.515 "traddr": "10.0.0.2", 00:22:12.515 "adrfam": "ipv4", 00:22:12.515 "trsvcid": "4420", 00:22:12.515 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:12.515 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:12.515 "hdgst": false, 00:22:12.515 "ddgst": false 00:22:12.515 }, 00:22:12.515 "method": "bdev_nvme_attach_controller" 00:22:12.515 }' 00:22:12.515 [2024-04-25 03:22:46.992902] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:22:12.515 [2024-04-25 03:22:46.992997] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1555447 ] 00:22:12.774 EAL: No free 2048 kB hugepages reported on node 1 00:22:12.774 [2024-04-25 03:22:47.059412] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:12.774 [2024-04-25 03:22:47.170779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:14.149 Running I/O for 1 seconds... 
00:22:15.531 00:22:15.531 Latency(us) 00:22:15.531 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:15.531 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:15.531 Verification LBA range: start 0x0 length 0x400 00:22:15.531 Nvme1n1 : 1.11 231.14 14.45 0.00 0.00 274155.71 22719.15 254765.13 00:22:15.532 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:15.532 Verification LBA range: start 0x0 length 0x400 00:22:15.532 Nvme2n1 : 1.12 228.72 14.30 0.00 0.00 272259.41 22622.06 270299.59 00:22:15.532 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:15.532 Verification LBA range: start 0x0 length 0x400 00:22:15.532 Nvme3n1 : 1.11 236.80 14.80 0.00 0.00 250527.81 5315.70 251658.24 00:22:15.532 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:15.532 Verification LBA range: start 0x0 length 0x400 00:22:15.532 Nvme4n1 : 1.23 207.99 13.00 0.00 0.00 290987.61 22524.97 304475.40 00:22:15.532 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:15.532 Verification LBA range: start 0x0 length 0x400 00:22:15.532 Nvme5n1 : 1.13 227.31 14.21 0.00 0.00 260097.71 20388.98 250104.79 00:22:15.532 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:15.532 Verification LBA range: start 0x0 length 0x400 00:22:15.532 Nvme6n1 : 1.14 225.51 14.09 0.00 0.00 257426.39 23690.05 260978.92 00:22:15.532 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:15.532 Verification LBA range: start 0x0 length 0x400 00:22:15.532 Nvme7n1 : 1.18 270.39 16.90 0.00 0.00 212211.48 19903.53 250104.79 00:22:15.532 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:15.532 Verification LBA range: start 0x0 length 0x400 00:22:15.532 Nvme8n1 : 1.23 207.45 12.97 0.00 0.00 273657.55 25049.32 271853.04 00:22:15.532 Job: Nvme9n1 (Core Mask 0x1, workload: verify, 
depth: 64, IO size: 65536) 00:22:15.532 Verification LBA range: start 0x0 length 0x400 00:22:15.532 Nvme9n1 : 1.21 263.84 16.49 0.00 0.00 211250.74 17185.00 250104.79 00:22:15.532 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:15.532 Verification LBA range: start 0x0 length 0x400 00:22:15.532 Nvme10n1 : 1.23 261.17 16.32 0.00 0.00 210269.98 15340.28 284280.60 00:22:15.532 =================================================================================================================== 00:22:15.532 Total : 2360.31 147.52 0.00 0.00 248496.09 5315.70 304475.40 00:22:15.790 03:22:50 -- target/shutdown.sh@94 -- # stoptarget 00:22:15.790 03:22:50 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:22:15.790 03:22:50 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:15.790 03:22:50 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:15.790 03:22:50 -- target/shutdown.sh@45 -- # nvmftestfini 00:22:15.790 03:22:50 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:15.790 03:22:50 -- nvmf/common.sh@117 -- # sync 00:22:15.790 03:22:50 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:15.790 03:22:50 -- nvmf/common.sh@120 -- # set +e 00:22:15.790 03:22:50 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:15.790 03:22:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:15.790 rmmod nvme_tcp 00:22:15.790 rmmod nvme_fabrics 00:22:15.790 rmmod nvme_keyring 00:22:15.790 03:22:50 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:15.790 03:22:50 -- nvmf/common.sh@124 -- # set -e 00:22:15.790 03:22:50 -- nvmf/common.sh@125 -- # return 0 00:22:15.790 03:22:50 -- nvmf/common.sh@478 -- # '[' -n 1554838 ']' 00:22:15.790 03:22:50 -- nvmf/common.sh@479 -- # killprocess 1554838 00:22:15.790 03:22:50 -- common/autotest_common.sh@936 -- # '[' -z 1554838 ']' 00:22:15.790 03:22:50 -- 
common/autotest_common.sh@940 -- # kill -0 1554838 00:22:15.790 03:22:50 -- common/autotest_common.sh@941 -- # uname 00:22:15.790 03:22:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:15.790 03:22:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1554838 00:22:15.790 03:22:50 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:15.790 03:22:50 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:15.790 03:22:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1554838' 00:22:15.790 killing process with pid 1554838 00:22:15.790 03:22:50 -- common/autotest_common.sh@955 -- # kill 1554838 00:22:15.790 03:22:50 -- common/autotest_common.sh@960 -- # wait 1554838 00:22:16.356 03:22:50 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:16.356 03:22:50 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:16.356 03:22:50 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:16.356 03:22:50 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:16.356 03:22:50 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:16.356 03:22:50 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:16.356 03:22:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:16.356 03:22:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:18.257 03:22:52 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:18.257 00:22:18.257 real 0m12.439s 00:22:18.257 user 0m36.699s 00:22:18.257 sys 0m3.326s 00:22:18.257 03:22:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:18.257 03:22:52 -- common/autotest_common.sh@10 -- # set +x 00:22:18.257 ************************************ 00:22:18.257 END TEST nvmf_shutdown_tc1 00:22:18.257 ************************************ 00:22:18.516 03:22:52 -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:22:18.516 03:22:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 
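The killprocess sequence traced above (probe with kill -0, look up the process name with ps, guard against signalling a bare sudo wrapper, then kill and wait) can be condensed into a standalone sketch. The helper name matches the trace; the body is a simplified reconstruction, not the exact common/autotest_common.sh source:

```shell
# Simplified reconstruction of the killprocess helper exercised in the
# trace above; not the exact SPDK source.
killprocess() {
    local pid=$1
    # kill -0 sends no signal; it only checks that the pid still exists
    kill -0 "$pid" 2>/dev/null || return 1
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")
    # never signal a bare sudo wrapper directly
    [ "$process_name" = "sudo" ] && return 1
    echo "killing process with pid $pid"
    kill "$pid"
    # reap the child so the pid cannot be recycled mid-test
    wait "$pid" 2>/dev/null
    return 0
}
```

The same pattern appears again below when tc2 tears down its nvmf target.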
00:22:18.516 03:22:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:18.516 03:22:52 -- common/autotest_common.sh@10 -- # set +x 00:22:18.516 ************************************ 00:22:18.516 START TEST nvmf_shutdown_tc2 00:22:18.516 ************************************ 00:22:18.516 03:22:52 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc2 00:22:18.516 03:22:52 -- target/shutdown.sh@99 -- # starttarget 00:22:18.516 03:22:52 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:18.516 03:22:52 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:18.516 03:22:52 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:18.516 03:22:52 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:18.516 03:22:52 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:18.516 03:22:52 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:18.516 03:22:52 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:18.516 03:22:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:18.516 03:22:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:18.516 03:22:52 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:22:18.516 03:22:52 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:22:18.516 03:22:52 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:18.516 03:22:52 -- common/autotest_common.sh@10 -- # set +x 00:22:18.516 03:22:52 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:18.516 03:22:52 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:18.516 03:22:52 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:18.516 03:22:52 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:18.516 03:22:52 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:18.516 03:22:52 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:18.516 03:22:52 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:18.516 03:22:52 -- nvmf/common.sh@295 -- # net_devs=() 00:22:18.516 03:22:52 -- nvmf/common.sh@295 -- # local -ga net_devs 
00:22:18.516 03:22:52 -- nvmf/common.sh@296 -- # e810=() 00:22:18.516 03:22:52 -- nvmf/common.sh@296 -- # local -ga e810 00:22:18.516 03:22:52 -- nvmf/common.sh@297 -- # x722=() 00:22:18.516 03:22:52 -- nvmf/common.sh@297 -- # local -ga x722 00:22:18.516 03:22:52 -- nvmf/common.sh@298 -- # mlx=() 00:22:18.516 03:22:52 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:18.516 03:22:52 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:18.516 03:22:52 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:18.516 03:22:52 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:18.516 03:22:52 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:18.516 03:22:52 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:18.516 03:22:52 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:18.516 03:22:52 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:18.516 03:22:52 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:18.516 03:22:52 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:18.516 03:22:52 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:18.516 03:22:52 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:18.516 03:22:52 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:18.516 03:22:52 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:18.516 03:22:52 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:18.516 03:22:52 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:18.516 03:22:52 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:18.516 03:22:52 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:18.516 03:22:52 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:18.516 03:22:52 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:18.516 Found 0000:0a:00.0 (0x8086 
- 0x159b) 00:22:18.517 03:22:52 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:18.517 03:22:52 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:18.517 03:22:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:18.517 03:22:52 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:18.517 03:22:52 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:18.517 03:22:52 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:18.517 03:22:52 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:18.517 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:18.517 03:22:52 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:18.517 03:22:52 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:18.517 03:22:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:18.517 03:22:52 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:18.517 03:22:52 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:18.517 03:22:52 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:18.517 03:22:52 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:18.517 03:22:52 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:18.517 03:22:52 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:18.517 03:22:52 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:18.517 03:22:52 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:18.517 03:22:52 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:18.517 03:22:52 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:18.517 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:18.517 03:22:52 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:18.517 03:22:52 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:18.517 03:22:52 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:18.517 03:22:52 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:18.517 03:22:52 -- 
nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:18.517 03:22:52 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:18.517 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:18.517 03:22:52 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:18.517 03:22:52 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:22:18.517 03:22:52 -- nvmf/common.sh@403 -- # is_hw=yes 00:22:18.517 03:22:52 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:22:18.517 03:22:52 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:22:18.517 03:22:52 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:22:18.517 03:22:52 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:18.517 03:22:52 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:18.517 03:22:52 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:18.517 03:22:52 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:18.517 03:22:52 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:18.517 03:22:52 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:18.517 03:22:52 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:18.517 03:22:52 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:18.517 03:22:52 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:18.517 03:22:52 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:18.517 03:22:52 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:18.517 03:22:52 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:18.517 03:22:52 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:18.517 03:22:52 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:18.517 03:22:52 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:18.517 03:22:52 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:18.517 03:22:52 -- nvmf/common.sh@260 -- # ip 
netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:18.517 03:22:52 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:18.517 03:22:53 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:18.517 03:22:53 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:18.775 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:18.776 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:22:18.776 00:22:18.776 --- 10.0.0.2 ping statistics --- 00:22:18.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:18.776 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:22:18.776 03:22:53 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:18.776 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:18.776 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:22:18.776 00:22:18.776 --- 10.0.0.1 ping statistics --- 00:22:18.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:18.776 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:22:18.776 03:22:53 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:18.776 03:22:53 -- nvmf/common.sh@411 -- # return 0 00:22:18.776 03:22:53 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:18.776 03:22:53 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:18.776 03:22:53 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:18.776 03:22:53 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:18.776 03:22:53 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:18.776 03:22:53 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:18.776 03:22:53 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:18.776 03:22:53 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:22:18.776 03:22:53 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:18.776 03:22:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:18.776 03:22:53 -- 
common/autotest_common.sh@10 -- # set +x 00:22:18.776 03:22:53 -- nvmf/common.sh@470 -- # nvmfpid=1556235 00:22:18.776 03:22:53 -- nvmf/common.sh@471 -- # waitforlisten 1556235 00:22:18.776 03:22:53 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:18.776 03:22:53 -- common/autotest_common.sh@817 -- # '[' -z 1556235 ']' 00:22:18.776 03:22:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:18.776 03:22:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:18.776 03:22:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:18.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:18.776 03:22:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:18.776 03:22:53 -- common/autotest_common.sh@10 -- # set +x 00:22:18.776 [2024-04-25 03:22:53.096215] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:22:18.776 [2024-04-25 03:22:53.096302] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:18.776 EAL: No free 2048 kB hugepages reported on node 1 00:22:18.776 [2024-04-25 03:22:53.169494] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:19.034 [2024-04-25 03:22:53.277009] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:19.034 [2024-04-25 03:22:53.277064] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
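The nvmf_tcp_init phase traced above builds the test topology: the target-side port (cvl_0_0) is moved into a private network namespace with 10.0.0.2/24, the initiator side (cvl_0_1) keeps 10.0.0.1/24, and TCP port 4420 is opened. A dry-run sketch of that sequence follows; the function prints the commands instead of executing them, so it runs without root and without the Jenkins rig's NICs:

```shell
# Dry-run sketch of the nvmf_tcp_init steps from nvmf/common.sh, as
# traced above. Echoing the commands keeps this runnable without root;
# on a real rig, drop the echos (or pipe the output to sh).
nvmf_tcp_init_plan() {
    local target_if=$1 initiator_if=$2 ns=$3
    echo "ip netns add $ns"
    echo "ip link set $target_if netns $ns"                  # target port into the namespace
    echo "ip addr add 10.0.0.1/24 dev $initiator_if"         # initiator stays in the root namespace
    echo "ip netns exec $ns ip addr add 10.0.0.2/24 dev $target_if"
    echo "ip link set $initiator_if up"
    echo "ip netns exec $ns ip link set $target_if up"
    echo "ip netns exec $ns ip link set lo up"
    echo "iptables -I INPUT 1 -i $initiator_if -p tcp --dport 4420 -j ACCEPT"
}
```

The two ping checks in the log (10.0.0.2 from the root namespace, 10.0.0.1 from inside cvl_0_0_ns_spdk) verify this wiring before the target app starts.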
00:22:19.034 [2024-04-25 03:22:53.277087] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:19.034 [2024-04-25 03:22:53.277100] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:19.034 [2024-04-25 03:22:53.277111] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:19.034 [2024-04-25 03:22:53.277188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:19.034 [2024-04-25 03:22:53.277241] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:22:19.034 [2024-04-25 03:22:53.277244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:19.034 [2024-04-25 03:22:53.277213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:19.034 03:22:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:19.034 03:22:53 -- common/autotest_common.sh@850 -- # return 0 00:22:19.034 03:22:53 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:19.034 03:22:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:19.034 03:22:53 -- common/autotest_common.sh@10 -- # set +x 00:22:19.034 03:22:53 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:19.034 03:22:53 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:19.034 03:22:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:19.034 03:22:53 -- common/autotest_common.sh@10 -- # set +x 00:22:19.034 [2024-04-25 03:22:53.413212] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:19.034 03:22:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:19.034 03:22:53 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:22:19.034 03:22:53 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:22:19.034 03:22:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:19.034 03:22:53 -- 
common/autotest_common.sh@10 -- # set +x 00:22:19.034 03:22:53 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:19.034 03:22:53 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:19.034 03:22:53 -- target/shutdown.sh@28 -- # cat 00:22:19.034 03:22:53 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:19.034 03:22:53 -- target/shutdown.sh@28 -- # cat 00:22:19.034 03:22:53 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:19.034 03:22:53 -- target/shutdown.sh@28 -- # cat 00:22:19.035 03:22:53 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:19.035 03:22:53 -- target/shutdown.sh@28 -- # cat 00:22:19.035 03:22:53 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:19.035 03:22:53 -- target/shutdown.sh@28 -- # cat 00:22:19.035 03:22:53 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:19.035 03:22:53 -- target/shutdown.sh@28 -- # cat 00:22:19.035 03:22:53 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:19.035 03:22:53 -- target/shutdown.sh@28 -- # cat 00:22:19.035 03:22:53 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:19.035 03:22:53 -- target/shutdown.sh@28 -- # cat 00:22:19.035 03:22:53 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:19.035 03:22:53 -- target/shutdown.sh@28 -- # cat 00:22:19.035 03:22:53 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:19.035 03:22:53 -- target/shutdown.sh@28 -- # cat 00:22:19.035 03:22:53 -- target/shutdown.sh@35 -- # rpc_cmd 00:22:19.035 03:22:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:19.035 03:22:53 -- common/autotest_common.sh@10 -- # set +x 00:22:19.035 Malloc1 00:22:19.035 [2024-04-25 03:22:53.488203] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:19.035 Malloc2 00:22:19.293 Malloc3 00:22:19.293 Malloc4 
00:22:19.293 Malloc5 00:22:19.293 Malloc6 00:22:19.293 Malloc7 00:22:19.551 Malloc8 00:22:19.551 Malloc9 00:22:19.551 Malloc10 00:22:19.551 03:22:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:19.551 03:22:53 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:22:19.551 03:22:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:19.551 03:22:53 -- common/autotest_common.sh@10 -- # set +x 00:22:19.551 03:22:53 -- target/shutdown.sh@103 -- # perfpid=1556400 00:22:19.551 03:22:53 -- target/shutdown.sh@104 -- # waitforlisten 1556400 /var/tmp/bdevperf.sock 00:22:19.551 03:22:53 -- common/autotest_common.sh@817 -- # '[' -z 1556400 ']' 00:22:19.551 03:22:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:19.551 03:22:53 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:19.551 03:22:53 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:19.551 03:22:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:19.551 03:22:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:19.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:19.551 03:22:53 -- nvmf/common.sh@521 -- # config=() 00:22:19.551 03:22:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:19.551 03:22:53 -- nvmf/common.sh@521 -- # local subsystem config 00:22:19.551 03:22:53 -- common/autotest_common.sh@10 -- # set +x 00:22:19.551 03:22:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:19.551 03:22:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:19.551 { 00:22:19.551 "params": { 00:22:19.551 "name": "Nvme$subsystem", 00:22:19.551 "trtype": "$TEST_TRANSPORT", 00:22:19.551 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:19.551 "adrfam": "ipv4", 00:22:19.551 "trsvcid": "$NVMF_PORT", 00:22:19.551 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:19.551 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:19.551 "hdgst": ${hdgst:-false}, 00:22:19.551 "ddgst": ${ddgst:-false} 00:22:19.551 }, 00:22:19.551 "method": "bdev_nvme_attach_controller" 00:22:19.551 } 00:22:19.551 EOF 00:22:19.551 )") 00:22:19.551 03:22:53 -- nvmf/common.sh@543 -- # cat 00:22:19.551 03:22:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:19.551 03:22:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:19.551 { 00:22:19.551 "params": { 00:22:19.551 "name": "Nvme$subsystem", 00:22:19.551 "trtype": "$TEST_TRANSPORT", 00:22:19.551 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:19.551 "adrfam": "ipv4", 00:22:19.551 "trsvcid": "$NVMF_PORT", 00:22:19.551 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:19.551 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:19.551 "hdgst": ${hdgst:-false}, 00:22:19.551 "ddgst": ${ddgst:-false} 00:22:19.551 }, 00:22:19.551 "method": "bdev_nvme_attach_controller" 00:22:19.551 } 00:22:19.551 EOF 00:22:19.551 )") 00:22:19.551 03:22:53 -- nvmf/common.sh@543 -- # cat 00:22:19.551 03:22:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:19.551 03:22:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:19.551 { 00:22:19.551 "params": { 00:22:19.551 "name": 
"Nvme$subsystem", 00:22:19.551 "trtype": "$TEST_TRANSPORT", 00:22:19.551 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:19.551 "adrfam": "ipv4", 00:22:19.551 "trsvcid": "$NVMF_PORT", 00:22:19.552 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:19.552 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:19.552 "hdgst": ${hdgst:-false}, 00:22:19.552 "ddgst": ${ddgst:-false} 00:22:19.552 }, 00:22:19.552 "method": "bdev_nvme_attach_controller" 00:22:19.552 } 00:22:19.552 EOF 00:22:19.552 )") 00:22:19.552 03:22:53 -- nvmf/common.sh@543 -- # cat 00:22:19.552 03:22:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:19.552 03:22:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:19.552 { 00:22:19.552 "params": { 00:22:19.552 "name": "Nvme$subsystem", 00:22:19.552 "trtype": "$TEST_TRANSPORT", 00:22:19.552 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:19.552 "adrfam": "ipv4", 00:22:19.552 "trsvcid": "$NVMF_PORT", 00:22:19.552 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:19.552 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:19.552 "hdgst": ${hdgst:-false}, 00:22:19.552 "ddgst": ${ddgst:-false} 00:22:19.552 }, 00:22:19.552 "method": "bdev_nvme_attach_controller" 00:22:19.552 } 00:22:19.552 EOF 00:22:19.552 )") 00:22:19.552 03:22:53 -- nvmf/common.sh@543 -- # cat 00:22:19.552 03:22:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:19.552 03:22:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:19.552 { 00:22:19.552 "params": { 00:22:19.552 "name": "Nvme$subsystem", 00:22:19.552 "trtype": "$TEST_TRANSPORT", 00:22:19.552 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:19.552 "adrfam": "ipv4", 00:22:19.552 "trsvcid": "$NVMF_PORT", 00:22:19.552 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:19.552 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:19.552 "hdgst": ${hdgst:-false}, 00:22:19.552 "ddgst": ${ddgst:-false} 00:22:19.552 }, 00:22:19.552 "method": "bdev_nvme_attach_controller" 00:22:19.552 } 00:22:19.552 EOF 
00:22:19.552 )") 00:22:19.552 03:22:53 -- nvmf/common.sh@543 -- # cat 00:22:19.552 03:22:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:19.552 03:22:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:19.552 { 00:22:19.552 "params": { 00:22:19.552 "name": "Nvme$subsystem", 00:22:19.552 "trtype": "$TEST_TRANSPORT", 00:22:19.552 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:19.552 "adrfam": "ipv4", 00:22:19.552 "trsvcid": "$NVMF_PORT", 00:22:19.552 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:19.552 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:19.552 "hdgst": ${hdgst:-false}, 00:22:19.552 "ddgst": ${ddgst:-false} 00:22:19.552 }, 00:22:19.552 "method": "bdev_nvme_attach_controller" 00:22:19.552 } 00:22:19.552 EOF 00:22:19.552 )") 00:22:19.552 03:22:53 -- nvmf/common.sh@543 -- # cat 00:22:19.552 03:22:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:19.552 03:22:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:19.552 { 00:22:19.552 "params": { 00:22:19.552 "name": "Nvme$subsystem", 00:22:19.552 "trtype": "$TEST_TRANSPORT", 00:22:19.552 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:19.552 "adrfam": "ipv4", 00:22:19.552 "trsvcid": "$NVMF_PORT", 00:22:19.552 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:19.552 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:19.552 "hdgst": ${hdgst:-false}, 00:22:19.552 "ddgst": ${ddgst:-false} 00:22:19.552 }, 00:22:19.552 "method": "bdev_nvme_attach_controller" 00:22:19.552 } 00:22:19.552 EOF 00:22:19.552 )") 00:22:19.552 03:22:53 -- nvmf/common.sh@543 -- # cat 00:22:19.552 03:22:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:19.552 03:22:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:19.552 { 00:22:19.552 "params": { 00:22:19.552 "name": "Nvme$subsystem", 00:22:19.552 "trtype": "$TEST_TRANSPORT", 00:22:19.552 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:19.552 "adrfam": "ipv4", 00:22:19.552 "trsvcid": "$NVMF_PORT", 00:22:19.552 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:22:19.552 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:19.552 "hdgst": ${hdgst:-false}, 00:22:19.552 "ddgst": ${ddgst:-false} 00:22:19.552 }, 00:22:19.552 "method": "bdev_nvme_attach_controller" 00:22:19.552 } 00:22:19.552 EOF 00:22:19.552 )") 00:22:19.552 03:22:53 -- nvmf/common.sh@543 -- # cat 00:22:19.552 03:22:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:19.552 03:22:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:19.552 { 00:22:19.552 "params": { 00:22:19.552 "name": "Nvme$subsystem", 00:22:19.552 "trtype": "$TEST_TRANSPORT", 00:22:19.552 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:19.552 "adrfam": "ipv4", 00:22:19.552 "trsvcid": "$NVMF_PORT", 00:22:19.552 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:19.552 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:19.552 "hdgst": ${hdgst:-false}, 00:22:19.552 "ddgst": ${ddgst:-false} 00:22:19.552 }, 00:22:19.552 "method": "bdev_nvme_attach_controller" 00:22:19.552 } 00:22:19.552 EOF 00:22:19.552 )") 00:22:19.552 03:22:53 -- nvmf/common.sh@543 -- # cat 00:22:19.552 03:22:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:19.552 03:22:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:19.552 { 00:22:19.552 "params": { 00:22:19.552 "name": "Nvme$subsystem", 00:22:19.552 "trtype": "$TEST_TRANSPORT", 00:22:19.552 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:19.552 "adrfam": "ipv4", 00:22:19.552 "trsvcid": "$NVMF_PORT", 00:22:19.552 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:19.552 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:19.552 "hdgst": ${hdgst:-false}, 00:22:19.552 "ddgst": ${ddgst:-false} 00:22:19.552 }, 00:22:19.552 "method": "bdev_nvme_attach_controller" 00:22:19.552 } 00:22:19.552 EOF 00:22:19.552 )") 00:22:19.552 03:22:53 -- nvmf/common.sh@543 -- # cat 00:22:19.552 03:22:53 -- nvmf/common.sh@545 -- # jq . 
00:22:19.552 03:22:53 -- nvmf/common.sh@546 -- # IFS=, 00:22:19.552 03:22:53 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:22:19.552 "params": { 00:22:19.552 "name": "Nvme1", 00:22:19.552 "trtype": "tcp", 00:22:19.552 "traddr": "10.0.0.2", 00:22:19.552 "adrfam": "ipv4", 00:22:19.552 "trsvcid": "4420", 00:22:19.552 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:19.552 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:19.552 "hdgst": false, 00:22:19.552 "ddgst": false 00:22:19.552 }, 00:22:19.552 "method": "bdev_nvme_attach_controller" 00:22:19.552 },{ 00:22:19.552 "params": { 00:22:19.552 "name": "Nvme2", 00:22:19.552 "trtype": "tcp", 00:22:19.552 "traddr": "10.0.0.2", 00:22:19.552 "adrfam": "ipv4", 00:22:19.552 "trsvcid": "4420", 00:22:19.552 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:19.552 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:19.552 "hdgst": false, 00:22:19.552 "ddgst": false 00:22:19.552 }, 00:22:19.552 "method": "bdev_nvme_attach_controller" 00:22:19.552 },{ 00:22:19.552 "params": { 00:22:19.552 "name": "Nvme3", 00:22:19.552 "trtype": "tcp", 00:22:19.552 "traddr": "10.0.0.2", 00:22:19.552 "adrfam": "ipv4", 00:22:19.552 "trsvcid": "4420", 00:22:19.552 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:19.552 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:19.552 "hdgst": false, 00:22:19.552 "ddgst": false 00:22:19.552 }, 00:22:19.552 "method": "bdev_nvme_attach_controller" 00:22:19.552 },{ 00:22:19.552 "params": { 00:22:19.552 "name": "Nvme4", 00:22:19.552 "trtype": "tcp", 00:22:19.552 "traddr": "10.0.0.2", 00:22:19.552 "adrfam": "ipv4", 00:22:19.552 "trsvcid": "4420", 00:22:19.552 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:19.552 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:19.552 "hdgst": false, 00:22:19.552 "ddgst": false 00:22:19.552 }, 00:22:19.552 "method": "bdev_nvme_attach_controller" 00:22:19.552 },{ 00:22:19.552 "params": { 00:22:19.552 "name": "Nvme5", 00:22:19.552 "trtype": "tcp", 00:22:19.552 "traddr": "10.0.0.2", 00:22:19.552 "adrfam": "ipv4", 
00:22:19.552 "trsvcid": "4420", 00:22:19.552 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:19.552 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:19.552 "hdgst": false, 00:22:19.552 "ddgst": false 00:22:19.552 }, 00:22:19.552 "method": "bdev_nvme_attach_controller" 00:22:19.552 },{ 00:22:19.552 "params": { 00:22:19.552 "name": "Nvme6", 00:22:19.552 "trtype": "tcp", 00:22:19.552 "traddr": "10.0.0.2", 00:22:19.552 "adrfam": "ipv4", 00:22:19.552 "trsvcid": "4420", 00:22:19.552 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:19.552 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:19.552 "hdgst": false, 00:22:19.552 "ddgst": false 00:22:19.552 }, 00:22:19.552 "method": "bdev_nvme_attach_controller" 00:22:19.552 },{ 00:22:19.552 "params": { 00:22:19.552 "name": "Nvme7", 00:22:19.552 "trtype": "tcp", 00:22:19.552 "traddr": "10.0.0.2", 00:22:19.552 "adrfam": "ipv4", 00:22:19.552 "trsvcid": "4420", 00:22:19.552 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:19.552 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:19.552 "hdgst": false, 00:22:19.552 "ddgst": false 00:22:19.552 }, 00:22:19.552 "method": "bdev_nvme_attach_controller" 00:22:19.552 },{ 00:22:19.552 "params": { 00:22:19.552 "name": "Nvme8", 00:22:19.552 "trtype": "tcp", 00:22:19.552 "traddr": "10.0.0.2", 00:22:19.552 "adrfam": "ipv4", 00:22:19.553 "trsvcid": "4420", 00:22:19.553 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:19.553 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:19.553 "hdgst": false, 00:22:19.553 "ddgst": false 00:22:19.553 }, 00:22:19.553 "method": "bdev_nvme_attach_controller" 00:22:19.553 },{ 00:22:19.553 "params": { 00:22:19.553 "name": "Nvme9", 00:22:19.553 "trtype": "tcp", 00:22:19.553 "traddr": "10.0.0.2", 00:22:19.553 "adrfam": "ipv4", 00:22:19.553 "trsvcid": "4420", 00:22:19.553 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:19.553 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:19.553 "hdgst": false, 00:22:19.553 "ddgst": false 00:22:19.553 }, 00:22:19.553 "method": "bdev_nvme_attach_controller" 
00:22:19.553 },{ 00:22:19.553 "params": { 00:22:19.553 "name": "Nvme10", 00:22:19.553 "trtype": "tcp", 00:22:19.553 "traddr": "10.0.0.2", 00:22:19.553 "adrfam": "ipv4", 00:22:19.553 "trsvcid": "4420", 00:22:19.553 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:19.553 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:19.553 "hdgst": false, 00:22:19.553 "ddgst": false 00:22:19.553 }, 00:22:19.553 "method": "bdev_nvme_attach_controller" 00:22:19.553 }' 00:22:19.553 [2024-04-25 03:22:53.988362] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:22:19.553 [2024-04-25 03:22:53.988447] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1556400 ] 00:22:19.553 EAL: No free 2048 kB hugepages reported on node 1 00:22:19.810 [2024-04-25 03:22:54.051467] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:19.810 [2024-04-25 03:22:54.159341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:21.760 Running I/O for 10 seconds... 
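gen_nvmf_target_json, invoked above with subsystems 1 through 10, emits one bdev_nvme_attach_controller params block per subsystem, comma-joins them, and feeds the result to bdevperf via --json /dev/fd/63. A trimmed sketch of that loop follows; the field list is reduced for brevity, and the outer "subsystems" wrapper is an assumption (the trace only shows the joined params blocks after jq):

```shell
# Trimmed sketch of gen_nvmf_target_json from nvmf/common.sh: one
# attach-controller entry per subsystem number, comma-joined. Values
# match this run's defaults; hostnqn/hdgst/ddgst omitted for brevity,
# and the outer wrapper structure is an assumption.
gen_nvmf_target_json() {
    local n entries=()
    for n in "${@:-1}"; do
        entries+=("{\"params\":{\"name\":\"Nvme$n\",\"trtype\":\"tcp\",\"traddr\":\"10.0.0.2\",\"trsvcid\":\"4420\",\"subnqn\":\"nqn.2016-06.io.spdk:cnode$n\"},\"method\":\"bdev_nvme_attach_controller\"}")
    done
    local IFS=,
    echo "{\"subsystems\":[{\"subsystem\":\"bdev\",\"config\":[${entries[*]}]}]}"
}
```

Usage mirrors the trace: gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 piped into bdevperf's --json file descriptor.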
00:22:21.760 03:22:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:21.760 03:22:56 -- common/autotest_common.sh@850 -- # return 0 00:22:21.760 03:22:56 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:21.760 03:22:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:21.760 03:22:56 -- common/autotest_common.sh@10 -- # set +x 00:22:21.760 03:22:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:21.760 03:22:56 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:21.760 03:22:56 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:21.760 03:22:56 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:22:21.760 03:22:56 -- target/shutdown.sh@57 -- # local ret=1 00:22:21.760 03:22:56 -- target/shutdown.sh@58 -- # local i 00:22:21.760 03:22:56 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:22:21.760 03:22:56 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:21.760 03:22:56 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:21.760 03:22:56 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:21.760 03:22:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:21.760 03:22:56 -- common/autotest_common.sh@10 -- # set +x 00:22:21.760 03:22:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:21.760 03:22:56 -- target/shutdown.sh@60 -- # read_io_count=3 00:22:21.760 03:22:56 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:22:21.760 03:22:56 -- target/shutdown.sh@67 -- # sleep 0.25 00:22:22.018 03:22:56 -- target/shutdown.sh@59 -- # (( i-- )) 00:22:22.018 03:22:56 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:22.018 03:22:56 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:22.018 03:22:56 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:22.018 03:22:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:22.018 03:22:56 
-- common/autotest_common.sh@10 -- # set +x 00:22:22.018 03:22:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:22.018 03:22:56 -- target/shutdown.sh@60 -- # read_io_count=67 00:22:22.019 03:22:56 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:22:22.019 03:22:56 -- target/shutdown.sh@67 -- # sleep 0.25 00:22:22.277 03:22:56 -- target/shutdown.sh@59 -- # (( i-- )) 00:22:22.277 03:22:56 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:22.277 03:22:56 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:22.277 03:22:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:22.277 03:22:56 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:22.277 03:22:56 -- common/autotest_common.sh@10 -- # set +x 00:22:22.536 03:22:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:22.536 03:22:56 -- target/shutdown.sh@60 -- # read_io_count=131 00:22:22.536 03:22:56 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:22:22.536 03:22:56 -- target/shutdown.sh@64 -- # ret=0 00:22:22.536 03:22:56 -- target/shutdown.sh@65 -- # break 00:22:22.536 03:22:56 -- target/shutdown.sh@69 -- # return 0 00:22:22.536 03:22:56 -- target/shutdown.sh@110 -- # killprocess 1556400 00:22:22.536 03:22:56 -- common/autotest_common.sh@936 -- # '[' -z 1556400 ']' 00:22:22.536 03:22:56 -- common/autotest_common.sh@940 -- # kill -0 1556400 00:22:22.536 03:22:56 -- common/autotest_common.sh@941 -- # uname 00:22:22.536 03:22:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:22.536 03:22:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1556400 00:22:22.536 03:22:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:22.536 03:22:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:22.536 03:22:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1556400' 00:22:22.536 killing process with pid 1556400 00:22:22.536 03:22:56 -- 
common/autotest_common.sh@955 -- # kill 1556400 00:22:22.536 03:22:56 -- common/autotest_common.sh@960 -- # wait 1556400 00:22:22.536 Received shutdown signal, test time was about 0.986139 seconds 00:22:22.536 00:22:22.536 Latency(us) 00:22:22.536 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:22.536 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:22.536 Verification LBA range: start 0x0 length 0x400 00:22:22.536 Nvme1n1 : 0.94 204.34 12.77 0.00 0.00 309309.95 23690.05 282727.16 00:22:22.536 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:22.536 Verification LBA range: start 0x0 length 0x400 00:22:22.536 Nvme2n1 : 0.99 259.82 16.24 0.00 0.00 229189.40 22719.15 253211.69 00:22:22.536 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:22.536 Verification LBA range: start 0x0 length 0x400 00:22:22.536 Nvme3n1 : 0.91 211.91 13.24 0.00 0.00 286122.03 20486.07 256318.58 00:22:22.536 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:22.536 Verification LBA range: start 0x0 length 0x400 00:22:22.536 Nvme4n1 : 0.94 271.75 16.98 0.00 0.00 218871.66 20971.52 253211.69 00:22:22.536 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:22.536 Verification LBA range: start 0x0 length 0x400 00:22:22.536 Nvme5n1 : 0.95 202.41 12.65 0.00 0.00 288178.13 22816.24 285834.05 00:22:22.536 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:22.536 Verification LBA range: start 0x0 length 0x400 00:22:22.536 Nvme6n1 : 0.90 214.08 13.38 0.00 0.00 265013.73 22427.88 257872.02 00:22:22.536 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:22.536 Verification LBA range: start 0x0 length 0x400 00:22:22.536 Nvme7n1 : 0.92 209.08 13.07 0.00 0.00 266144.68 23398.78 260978.92 00:22:22.536 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:22:22.536 Verification LBA range: start 0x0 length 0x400 00:22:22.536 Nvme8n1 : 0.95 268.96 16.81 0.00 0.00 203629.23 28544.57 251658.24 00:22:22.536 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:22.536 Verification LBA range: start 0x0 length 0x400 00:22:22.536 Nvme9n1 : 0.94 205.12 12.82 0.00 0.00 260496.50 23301.69 284280.60 00:22:22.536 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:22.536 Verification LBA range: start 0x0 length 0x400 00:22:22.536 Nvme10n1 : 0.93 207.40 12.96 0.00 0.00 251249.15 23787.14 246997.90 00:22:22.536 =================================================================================================================== 00:22:22.536 Total : 2254.89 140.93 0.00 0.00 254130.41 20486.07 285834.05 00:22:22.795 03:22:57 -- target/shutdown.sh@113 -- # sleep 1 00:22:23.727 03:22:58 -- target/shutdown.sh@114 -- # kill -0 1556235 00:22:23.727 03:22:58 -- target/shutdown.sh@116 -- # stoptarget 00:22:23.727 03:22:58 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:22:23.727 03:22:58 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:23.727 03:22:58 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:23.727 03:22:58 -- target/shutdown.sh@45 -- # nvmftestfini 00:22:23.727 03:22:58 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:23.727 03:22:58 -- nvmf/common.sh@117 -- # sync 00:22:23.727 03:22:58 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:23.727 03:22:58 -- nvmf/common.sh@120 -- # set +e 00:22:23.727 03:22:58 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:23.727 03:22:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:23.727 rmmod nvme_tcp 00:22:23.986 rmmod nvme_fabrics 00:22:23.986 rmmod nvme_keyring 00:22:23.986 03:22:58 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:23.986 03:22:58 -- 
nvmf/common.sh@124 -- # set -e 00:22:23.986 03:22:58 -- nvmf/common.sh@125 -- # return 0 00:22:23.986 03:22:58 -- nvmf/common.sh@478 -- # '[' -n 1556235 ']' 00:22:23.986 03:22:58 -- nvmf/common.sh@479 -- # killprocess 1556235 00:22:23.986 03:22:58 -- common/autotest_common.sh@936 -- # '[' -z 1556235 ']' 00:22:23.986 03:22:58 -- common/autotest_common.sh@940 -- # kill -0 1556235 00:22:23.986 03:22:58 -- common/autotest_common.sh@941 -- # uname 00:22:23.986 03:22:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:23.986 03:22:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1556235 00:22:23.986 03:22:58 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:23.986 03:22:58 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:23.986 03:22:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1556235' 00:22:23.986 killing process with pid 1556235 00:22:23.986 03:22:58 -- common/autotest_common.sh@955 -- # kill 1556235 00:22:23.986 03:22:58 -- common/autotest_common.sh@960 -- # wait 1556235 00:22:24.552 03:22:58 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:24.552 03:22:58 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:24.552 03:22:58 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:24.553 03:22:58 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:24.553 03:22:58 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:24.553 03:22:58 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:24.553 03:22:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:24.553 03:22:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:26.455 03:23:00 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:26.455 00:22:26.455 real 0m7.979s 00:22:26.455 user 0m24.420s 00:22:26.455 sys 0m1.582s 00:22:26.455 03:23:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:26.455 03:23:00 -- common/autotest_common.sh@10 
-- # set +x 00:22:26.455 ************************************ 00:22:26.455 END TEST nvmf_shutdown_tc2 00:22:26.455 ************************************ 00:22:26.455 03:23:00 -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:22:26.455 03:23:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:22:26.455 03:23:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:26.455 03:23:00 -- common/autotest_common.sh@10 -- # set +x 00:22:26.714 ************************************ 00:22:26.714 START TEST nvmf_shutdown_tc3 00:22:26.714 ************************************ 00:22:26.714 03:23:00 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc3 00:22:26.714 03:23:00 -- target/shutdown.sh@121 -- # starttarget 00:22:26.714 03:23:00 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:26.714 03:23:00 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:26.714 03:23:00 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:26.714 03:23:00 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:26.714 03:23:00 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:26.714 03:23:00 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:26.714 03:23:00 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:26.714 03:23:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:26.714 03:23:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:26.714 03:23:00 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:22:26.714 03:23:00 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:22:26.714 03:23:00 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:26.714 03:23:00 -- common/autotest_common.sh@10 -- # set +x 00:22:26.714 03:23:00 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:26.714 03:23:00 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:26.714 03:23:00 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:26.714 03:23:00 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:26.714 
03:23:00 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:26.714 03:23:00 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:26.714 03:23:00 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:26.714 03:23:00 -- nvmf/common.sh@295 -- # net_devs=() 00:22:26.714 03:23:00 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:26.714 03:23:00 -- nvmf/common.sh@296 -- # e810=() 00:22:26.714 03:23:00 -- nvmf/common.sh@296 -- # local -ga e810 00:22:26.714 03:23:00 -- nvmf/common.sh@297 -- # x722=() 00:22:26.714 03:23:00 -- nvmf/common.sh@297 -- # local -ga x722 00:22:26.714 03:23:00 -- nvmf/common.sh@298 -- # mlx=() 00:22:26.714 03:23:00 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:26.714 03:23:00 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:26.714 03:23:00 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:26.714 03:23:00 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:26.714 03:23:00 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:26.714 03:23:00 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:26.714 03:23:00 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:26.714 03:23:00 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:26.714 03:23:00 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:26.714 03:23:00 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:26.714 03:23:00 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:26.714 03:23:00 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:26.714 03:23:00 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:26.714 03:23:00 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:26.714 03:23:00 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:26.714 03:23:00 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:26.714 03:23:00 
-- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:26.714 03:23:00 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:26.714 03:23:00 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:26.714 03:23:00 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:26.714 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:26.714 03:23:00 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:26.714 03:23:00 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:26.714 03:23:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:26.714 03:23:00 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:26.714 03:23:00 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:26.714 03:23:00 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:26.714 03:23:00 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:26.714 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:26.714 03:23:00 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:26.714 03:23:00 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:26.714 03:23:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:26.714 03:23:00 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:26.714 03:23:00 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:26.714 03:23:00 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:26.714 03:23:00 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:26.714 03:23:00 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:26.714 03:23:00 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:26.714 03:23:00 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:26.714 03:23:00 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:26.714 03:23:00 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:26.714 03:23:00 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:26.714 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:26.714 03:23:00 -- 
nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:26.714 03:23:00 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:26.714 03:23:00 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:26.714 03:23:00 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:26.714 03:23:00 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:26.714 03:23:00 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:26.714 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:26.714 03:23:00 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:26.714 03:23:00 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:22:26.714 03:23:00 -- nvmf/common.sh@403 -- # is_hw=yes 00:22:26.714 03:23:00 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:22:26.714 03:23:00 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:22:26.714 03:23:00 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:22:26.714 03:23:00 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:26.714 03:23:00 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:26.714 03:23:00 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:26.714 03:23:00 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:26.714 03:23:00 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:26.714 03:23:00 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:26.714 03:23:00 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:26.714 03:23:00 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:26.714 03:23:00 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:26.714 03:23:00 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:26.714 03:23:00 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:26.714 03:23:00 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:26.714 03:23:00 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:22:26.714 03:23:01 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:26.714 03:23:01 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:26.714 03:23:01 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:26.714 03:23:01 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:26.714 03:23:01 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:26.714 03:23:01 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:26.714 03:23:01 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:26.714 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:26.714 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:22:26.714 00:22:26.714 --- 10.0.0.2 ping statistics --- 00:22:26.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:26.714 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:22:26.714 03:23:01 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:26.714 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:26.714 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:22:26.714 00:22:26.714 --- 10.0.0.1 ping statistics --- 00:22:26.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:26.715 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:22:26.715 03:23:01 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:26.715 03:23:01 -- nvmf/common.sh@411 -- # return 0 00:22:26.715 03:23:01 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:26.715 03:23:01 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:26.715 03:23:01 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:26.715 03:23:01 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:26.715 03:23:01 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:26.715 03:23:01 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:26.715 03:23:01 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:26.715 03:23:01 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:22:26.715 03:23:01 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:26.715 03:23:01 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:26.715 03:23:01 -- common/autotest_common.sh@10 -- # set +x 00:22:26.715 03:23:01 -- nvmf/common.sh@470 -- # nvmfpid=1557441 00:22:26.715 03:23:01 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:26.715 03:23:01 -- nvmf/common.sh@471 -- # waitforlisten 1557441 00:22:26.715 03:23:01 -- common/autotest_common.sh@817 -- # '[' -z 1557441 ']' 00:22:26.715 03:23:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:26.715 03:23:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:26.715 03:23:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:26.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:26.715 03:23:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:26.715 03:23:01 -- common/autotest_common.sh@10 -- # set +x 00:22:26.715 [2024-04-25 03:23:01.194727] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:22:26.715 [2024-04-25 03:23:01.194823] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:26.974 EAL: No free 2048 kB hugepages reported on node 1 00:22:26.974 [2024-04-25 03:23:01.265567] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:26.974 [2024-04-25 03:23:01.380681] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:26.974 [2024-04-25 03:23:01.380738] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:26.974 [2024-04-25 03:23:01.380761] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:26.974 [2024-04-25 03:23:01.380774] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:26.974 [2024-04-25 03:23:01.380785] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:26.974 [2024-04-25 03:23:01.380878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:26.974 [2024-04-25 03:23:01.380940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:26.974 [2024-04-25 03:23:01.381001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:22:26.974 [2024-04-25 03:23:01.381004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:27.908 03:23:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:27.908 03:23:02 -- common/autotest_common.sh@850 -- # return 0 00:22:27.908 03:23:02 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:27.908 03:23:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:27.908 03:23:02 -- common/autotest_common.sh@10 -- # set +x 00:22:27.908 03:23:02 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:27.908 03:23:02 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:27.908 03:23:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:27.908 03:23:02 -- common/autotest_common.sh@10 -- # set +x 00:22:27.908 [2024-04-25 03:23:02.154559] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:27.908 03:23:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:27.908 03:23:02 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:22:27.908 03:23:02 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:22:27.908 03:23:02 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:27.908 03:23:02 -- common/autotest_common.sh@10 -- # set +x 00:22:27.908 03:23:02 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:27.908 03:23:02 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:27.908 03:23:02 -- target/shutdown.sh@28 -- # cat 00:22:27.908 03:23:02 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 
00:22:27.908 03:23:02 -- target/shutdown.sh@28 -- # cat 00:22:27.909 03:23:02 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:27.909 03:23:02 -- target/shutdown.sh@28 -- # cat 00:22:27.909 03:23:02 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:27.909 03:23:02 -- target/shutdown.sh@28 -- # cat 00:22:27.909 03:23:02 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:27.909 03:23:02 -- target/shutdown.sh@28 -- # cat 00:22:27.909 03:23:02 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:27.909 03:23:02 -- target/shutdown.sh@28 -- # cat 00:22:27.909 03:23:02 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:27.909 03:23:02 -- target/shutdown.sh@28 -- # cat 00:22:27.909 03:23:02 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:27.909 03:23:02 -- target/shutdown.sh@28 -- # cat 00:22:27.909 03:23:02 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:27.909 03:23:02 -- target/shutdown.sh@28 -- # cat 00:22:27.909 03:23:02 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:27.909 03:23:02 -- target/shutdown.sh@28 -- # cat 00:22:27.909 03:23:02 -- target/shutdown.sh@35 -- # rpc_cmd 00:22:27.909 03:23:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:27.909 03:23:02 -- common/autotest_common.sh@10 -- # set +x 00:22:27.909 Malloc1 00:22:27.909 [2024-04-25 03:23:02.229999] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:27.909 Malloc2 00:22:27.909 Malloc3 00:22:27.909 Malloc4 00:22:27.909 Malloc5 00:22:28.167 Malloc6 00:22:28.167 Malloc7 00:22:28.167 Malloc8 00:22:28.167 Malloc9 00:22:28.167 Malloc10 00:22:28.167 03:23:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:28.167 03:23:02 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:22:28.167 03:23:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:28.167 03:23:02 -- 
common/autotest_common.sh@10 -- # set +x 00:22:28.425 03:23:02 -- target/shutdown.sh@125 -- # perfpid=1557630 00:22:28.425 03:23:02 -- target/shutdown.sh@126 -- # waitforlisten 1557630 /var/tmp/bdevperf.sock 00:22:28.425 03:23:02 -- common/autotest_common.sh@817 -- # '[' -z 1557630 ']' 00:22:28.426 03:23:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:28.426 03:23:02 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:28.426 03:23:02 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:28.426 03:23:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:28.426 03:23:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:28.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:28.426 03:23:02 -- nvmf/common.sh@521 -- # config=() 00:22:28.426 03:23:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:28.426 03:23:02 -- nvmf/common.sh@521 -- # local subsystem config 00:22:28.426 03:23:02 -- common/autotest_common.sh@10 -- # set +x 00:22:28.426 03:23:02 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:28.426 03:23:02 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:28.426 { 00:22:28.426 "params": { 00:22:28.426 "name": "Nvme$subsystem", 00:22:28.426 "trtype": "$TEST_TRANSPORT", 00:22:28.426 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:28.426 "adrfam": "ipv4", 00:22:28.426 "trsvcid": "$NVMF_PORT", 00:22:28.426 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:28.426 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:28.426 "hdgst": ${hdgst:-false}, 00:22:28.426 "ddgst": ${ddgst:-false} 00:22:28.426 }, 00:22:28.426 "method": "bdev_nvme_attach_controller" 00:22:28.426 } 00:22:28.426 EOF 00:22:28.426 )") 00:22:28.426 03:23:02 -- nvmf/common.sh@543 -- # cat 00:22:28.426 03:23:02 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:28.426 03:23:02 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:28.426 { 00:22:28.426 "params": { 00:22:28.426 "name": "Nvme$subsystem", 00:22:28.426 "trtype": "$TEST_TRANSPORT", 00:22:28.426 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:28.426 "adrfam": "ipv4", 00:22:28.426 "trsvcid": "$NVMF_PORT", 00:22:28.426 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:28.426 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:28.426 "hdgst": ${hdgst:-false}, 00:22:28.426 "ddgst": ${ddgst:-false} 00:22:28.426 }, 00:22:28.426 "method": "bdev_nvme_attach_controller" 00:22:28.426 } 00:22:28.426 EOF 00:22:28.426 )") 00:22:28.426 03:23:02 -- nvmf/common.sh@543 -- # cat 00:22:28.426 03:23:02 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:28.426 03:23:02 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:28.426 { 00:22:28.426 "params": { 00:22:28.426 "name": 
"Nvme$subsystem", 00:22:28.426 "trtype": "$TEST_TRANSPORT", 00:22:28.426 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:28.426 "adrfam": "ipv4", 00:22:28.426 "trsvcid": "$NVMF_PORT", 00:22:28.426 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:28.426 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:28.426 "hdgst": ${hdgst:-false}, 00:22:28.426 "ddgst": ${ddgst:-false} 00:22:28.426 }, 00:22:28.426 "method": "bdev_nvme_attach_controller" 00:22:28.426 } 00:22:28.426 EOF 00:22:28.426 )") 00:22:28.426 03:23:02 -- nvmf/common.sh@543 -- # cat 00:22:28.426 03:23:02 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:28.426 03:23:02 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:28.426 { 00:22:28.426 "params": { 00:22:28.426 "name": "Nvme$subsystem", 00:22:28.426 "trtype": "$TEST_TRANSPORT", 00:22:28.426 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:28.426 "adrfam": "ipv4", 00:22:28.426 "trsvcid": "$NVMF_PORT", 00:22:28.426 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:28.426 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:28.426 "hdgst": ${hdgst:-false}, 00:22:28.426 "ddgst": ${ddgst:-false} 00:22:28.426 }, 00:22:28.426 "method": "bdev_nvme_attach_controller" 00:22:28.426 } 00:22:28.426 EOF 00:22:28.426 )") 00:22:28.426 03:23:02 -- nvmf/common.sh@543 -- # cat 00:22:28.426 03:23:02 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:28.426 03:23:02 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:28.426 { 00:22:28.426 "params": { 00:22:28.426 "name": "Nvme$subsystem", 00:22:28.426 "trtype": "$TEST_TRANSPORT", 00:22:28.426 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:28.426 "adrfam": "ipv4", 00:22:28.426 "trsvcid": "$NVMF_PORT", 00:22:28.426 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:28.426 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:28.426 "hdgst": ${hdgst:-false}, 00:22:28.426 "ddgst": ${ddgst:-false} 00:22:28.426 }, 00:22:28.426 "method": "bdev_nvme_attach_controller" 00:22:28.426 } 00:22:28.426 EOF 
00:22:28.426 )") 00:22:28.426 03:23:02 -- nvmf/common.sh@543 -- # cat 00:22:28.426 03:23:02 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:28.426 03:23:02 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:28.426 { 00:22:28.426 "params": { 00:22:28.426 "name": "Nvme$subsystem", 00:22:28.426 "trtype": "$TEST_TRANSPORT", 00:22:28.426 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:28.426 "adrfam": "ipv4", 00:22:28.426 "trsvcid": "$NVMF_PORT", 00:22:28.426 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:28.426 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:28.426 "hdgst": ${hdgst:-false}, 00:22:28.426 "ddgst": ${ddgst:-false} 00:22:28.426 }, 00:22:28.426 "method": "bdev_nvme_attach_controller" 00:22:28.426 } 00:22:28.426 EOF 00:22:28.426 )") 00:22:28.426 03:23:02 -- nvmf/common.sh@543 -- # cat 00:22:28.426 03:23:02 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:28.426 03:23:02 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:28.426 { 00:22:28.426 "params": { 00:22:28.426 "name": "Nvme$subsystem", 00:22:28.426 "trtype": "$TEST_TRANSPORT", 00:22:28.426 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:28.426 "adrfam": "ipv4", 00:22:28.426 "trsvcid": "$NVMF_PORT", 00:22:28.426 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:28.426 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:28.426 "hdgst": ${hdgst:-false}, 00:22:28.426 "ddgst": ${ddgst:-false} 00:22:28.426 }, 00:22:28.426 "method": "bdev_nvme_attach_controller" 00:22:28.426 } 00:22:28.426 EOF 00:22:28.426 )") 00:22:28.426 03:23:02 -- nvmf/common.sh@543 -- # cat 00:22:28.426 03:23:02 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:28.426 03:23:02 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:28.426 { 00:22:28.426 "params": { 00:22:28.426 "name": "Nvme$subsystem", 00:22:28.426 "trtype": "$TEST_TRANSPORT", 00:22:28.426 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:28.426 "adrfam": "ipv4", 00:22:28.426 "trsvcid": "$NVMF_PORT", 00:22:28.426 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:22:28.426 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:28.426 "hdgst": ${hdgst:-false}, 00:22:28.426 "ddgst": ${ddgst:-false} 00:22:28.426 }, 00:22:28.426 "method": "bdev_nvme_attach_controller" 00:22:28.426 } 00:22:28.426 EOF 00:22:28.426 )") 00:22:28.426 03:23:02 -- nvmf/common.sh@543 -- # cat 00:22:28.427 03:23:02 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:28.427 03:23:02 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:28.427 { 00:22:28.427 "params": { 00:22:28.427 "name": "Nvme$subsystem", 00:22:28.427 "trtype": "$TEST_TRANSPORT", 00:22:28.427 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:28.427 "adrfam": "ipv4", 00:22:28.427 "trsvcid": "$NVMF_PORT", 00:22:28.427 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:28.427 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:28.427 "hdgst": ${hdgst:-false}, 00:22:28.427 "ddgst": ${ddgst:-false} 00:22:28.427 }, 00:22:28.427 "method": "bdev_nvme_attach_controller" 00:22:28.427 } 00:22:28.427 EOF 00:22:28.427 )") 00:22:28.427 03:23:02 -- nvmf/common.sh@543 -- # cat 00:22:28.427 03:23:02 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:28.427 03:23:02 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:28.427 { 00:22:28.427 "params": { 00:22:28.427 "name": "Nvme$subsystem", 00:22:28.427 "trtype": "$TEST_TRANSPORT", 00:22:28.427 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:28.427 "adrfam": "ipv4", 00:22:28.427 "trsvcid": "$NVMF_PORT", 00:22:28.427 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:28.427 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:28.427 "hdgst": ${hdgst:-false}, 00:22:28.427 "ddgst": ${ddgst:-false} 00:22:28.427 }, 00:22:28.427 "method": "bdev_nvme_attach_controller" 00:22:28.427 } 00:22:28.427 EOF 00:22:28.427 )") 00:22:28.427 03:23:02 -- nvmf/common.sh@543 -- # cat 00:22:28.427 03:23:02 -- nvmf/common.sh@545 -- # jq . 
00:22:28.427 03:23:02 -- nvmf/common.sh@546 -- # IFS=, 00:22:28.427 03:23:02 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:22:28.427 "params": { 00:22:28.427 "name": "Nvme1", 00:22:28.427 "trtype": "tcp", 00:22:28.427 "traddr": "10.0.0.2", 00:22:28.427 "adrfam": "ipv4", 00:22:28.427 "trsvcid": "4420", 00:22:28.427 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:28.427 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:28.427 "hdgst": false, 00:22:28.427 "ddgst": false 00:22:28.427 }, 00:22:28.427 "method": "bdev_nvme_attach_controller" 00:22:28.427 },{ 00:22:28.427 "params": { 00:22:28.427 "name": "Nvme2", 00:22:28.427 "trtype": "tcp", 00:22:28.427 "traddr": "10.0.0.2", 00:22:28.427 "adrfam": "ipv4", 00:22:28.427 "trsvcid": "4420", 00:22:28.427 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:28.427 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:28.427 "hdgst": false, 00:22:28.427 "ddgst": false 00:22:28.427 }, 00:22:28.427 "method": "bdev_nvme_attach_controller" 00:22:28.427 },{ 00:22:28.427 "params": { 00:22:28.427 "name": "Nvme3", 00:22:28.427 "trtype": "tcp", 00:22:28.427 "traddr": "10.0.0.2", 00:22:28.427 "adrfam": "ipv4", 00:22:28.427 "trsvcid": "4420", 00:22:28.427 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:28.427 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:28.427 "hdgst": false, 00:22:28.427 "ddgst": false 00:22:28.427 }, 00:22:28.427 "method": "bdev_nvme_attach_controller" 00:22:28.427 },{ 00:22:28.427 "params": { 00:22:28.427 "name": "Nvme4", 00:22:28.427 "trtype": "tcp", 00:22:28.427 "traddr": "10.0.0.2", 00:22:28.427 "adrfam": "ipv4", 00:22:28.427 "trsvcid": "4420", 00:22:28.427 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:28.427 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:28.427 "hdgst": false, 00:22:28.427 "ddgst": false 00:22:28.427 }, 00:22:28.427 "method": "bdev_nvme_attach_controller" 00:22:28.427 },{ 00:22:28.427 "params": { 00:22:28.427 "name": "Nvme5", 00:22:28.427 "trtype": "tcp", 00:22:28.427 "traddr": "10.0.0.2", 00:22:28.427 "adrfam": "ipv4", 
00:22:28.427 "trsvcid": "4420", 00:22:28.427 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:28.427 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:28.427 "hdgst": false, 00:22:28.427 "ddgst": false 00:22:28.427 }, 00:22:28.427 "method": "bdev_nvme_attach_controller" 00:22:28.427 },{ 00:22:28.427 "params": { 00:22:28.427 "name": "Nvme6", 00:22:28.427 "trtype": "tcp", 00:22:28.427 "traddr": "10.0.0.2", 00:22:28.427 "adrfam": "ipv4", 00:22:28.427 "trsvcid": "4420", 00:22:28.427 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:28.427 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:28.427 "hdgst": false, 00:22:28.427 "ddgst": false 00:22:28.427 }, 00:22:28.427 "method": "bdev_nvme_attach_controller" 00:22:28.427 },{ 00:22:28.427 "params": { 00:22:28.427 "name": "Nvme7", 00:22:28.427 "trtype": "tcp", 00:22:28.427 "traddr": "10.0.0.2", 00:22:28.427 "adrfam": "ipv4", 00:22:28.427 "trsvcid": "4420", 00:22:28.427 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:28.427 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:28.427 "hdgst": false, 00:22:28.427 "ddgst": false 00:22:28.427 }, 00:22:28.427 "method": "bdev_nvme_attach_controller" 00:22:28.427 },{ 00:22:28.427 "params": { 00:22:28.427 "name": "Nvme8", 00:22:28.427 "trtype": "tcp", 00:22:28.427 "traddr": "10.0.0.2", 00:22:28.427 "adrfam": "ipv4", 00:22:28.427 "trsvcid": "4420", 00:22:28.427 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:28.427 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:28.427 "hdgst": false, 00:22:28.427 "ddgst": false 00:22:28.427 }, 00:22:28.427 "method": "bdev_nvme_attach_controller" 00:22:28.427 },{ 00:22:28.427 "params": { 00:22:28.427 "name": "Nvme9", 00:22:28.427 "trtype": "tcp", 00:22:28.427 "traddr": "10.0.0.2", 00:22:28.427 "adrfam": "ipv4", 00:22:28.427 "trsvcid": "4420", 00:22:28.427 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:28.427 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:28.427 "hdgst": false, 00:22:28.427 "ddgst": false 00:22:28.427 }, 00:22:28.427 "method": "bdev_nvme_attach_controller" 
00:22:28.427 },{ 00:22:28.427 "params": { 00:22:28.427 "name": "Nvme10", 00:22:28.427 "trtype": "tcp", 00:22:28.427 "traddr": "10.0.0.2", 00:22:28.427 "adrfam": "ipv4", 00:22:28.427 "trsvcid": "4420", 00:22:28.427 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:28.427 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:28.427 "hdgst": false, 00:22:28.427 "ddgst": false 00:22:28.427 }, 00:22:28.427 "method": "bdev_nvme_attach_controller" 00:22:28.427 }' 00:22:28.427 [2024-04-25 03:23:02.733557] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:22:28.427 [2024-04-25 03:23:02.733682] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1557630 ] 00:22:28.427 EAL: No free 2048 kB hugepages reported on node 1 00:22:28.427 [2024-04-25 03:23:02.798656] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:28.427 [2024-04-25 03:23:02.906624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:30.324 Running I/O for 10 seconds... 
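The xtrace output above shows nvmf/common.sh building one JSON fragment per subsystem with a heredoc, appending each to a bash array, then joining the fragments with `IFS=,` before piping the result to bdevperf. A minimal sketch of that pattern, reduced to two subsystems; the variable names (`TEST_TRANSPORT`, `NVMF_FIRST_TARGET_IP`, `NVMF_PORT`) mirror the log, but the values and the two-iteration loop bound are illustrative assumptions, not the harness defaults:

```shell
#!/usr/bin/env bash
# Illustrative values only; the real harness exports these elsewhere.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 1 2; do
  # Each iteration captures one heredoc-expanded JSON object into the array,
  # matching the `config+=("$(cat <<-EOF ...)")` lines in the xtrace above.
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done

# "${config[*]}" joins array elements with the first character of IFS,
# producing the "},{"-chained document seen in the printf output above.
IFS=,
joined="${config[*]}"
printf '%s\n' "$joined"
```

Setting `IFS=,` before expanding `"${config[*]}"` is what turns ten standalone JSON objects into a single comma-separated sequence that `jq` (and ultimately bdevperf's `--json` input) can consume.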
00:22:30.324 03:23:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:30.324 03:23:04 -- common/autotest_common.sh@850 -- # return 0 00:22:30.324 03:23:04 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:30.324 03:23:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:30.324 03:23:04 -- common/autotest_common.sh@10 -- # set +x 00:22:30.324 03:23:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:30.324 03:23:04 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:30.324 03:23:04 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:30.324 03:23:04 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:30.324 03:23:04 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:22:30.324 03:23:04 -- target/shutdown.sh@57 -- # local ret=1 00:22:30.324 03:23:04 -- target/shutdown.sh@58 -- # local i 00:22:30.324 03:23:04 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:22:30.324 03:23:04 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:30.324 03:23:04 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:30.324 03:23:04 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:30.324 03:23:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:30.324 03:23:04 -- common/autotest_common.sh@10 -- # set +x 00:22:30.324 03:23:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:30.324 03:23:04 -- target/shutdown.sh@60 -- # read_io_count=3 00:22:30.324 03:23:04 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:22:30.324 03:23:04 -- target/shutdown.sh@67 -- # sleep 0.25 00:22:30.584 03:23:04 -- target/shutdown.sh@59 -- # (( i-- )) 00:22:30.584 03:23:04 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:30.584 03:23:04 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:30.584 
03:23:04 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:30.584 03:23:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:30.584 03:23:04 -- common/autotest_common.sh@10 -- # set +x 00:22:30.584 03:23:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:30.584 03:23:04 -- target/shutdown.sh@60 -- # read_io_count=67 00:22:30.584 03:23:04 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:22:30.584 03:23:04 -- target/shutdown.sh@67 -- # sleep 0.25 00:22:30.857 03:23:05 -- target/shutdown.sh@59 -- # (( i-- )) 00:22:30.857 03:23:05 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:30.857 03:23:05 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:30.857 03:23:05 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:30.857 03:23:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:30.857 03:23:05 -- common/autotest_common.sh@10 -- # set +x 00:22:30.857 03:23:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:30.857 03:23:05 -- target/shutdown.sh@60 -- # read_io_count=131 00:22:30.857 03:23:05 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:22:30.857 03:23:05 -- target/shutdown.sh@64 -- # ret=0 00:22:30.857 03:23:05 -- target/shutdown.sh@65 -- # break 00:22:30.857 03:23:05 -- target/shutdown.sh@69 -- # return 0 00:22:30.857 03:23:05 -- target/shutdown.sh@135 -- # killprocess 1557441 00:22:30.857 03:23:05 -- common/autotest_common.sh@936 -- # '[' -z 1557441 ']' 00:22:30.857 03:23:05 -- common/autotest_common.sh@940 -- # kill -0 1557441 00:22:30.857 03:23:05 -- common/autotest_common.sh@941 -- # uname 00:22:30.857 03:23:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:30.857 03:23:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1557441 00:22:30.857 03:23:05 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:30.857 03:23:05 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:30.857 
03:23:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1557441' 00:22:30.857 killing process with pid 1557441 00:22:30.857 03:23:05 -- common/autotest_common.sh@955 -- # kill 1557441 00:22:30.857 03:23:05 -- common/autotest_common.sh@960 -- # wait 1557441 00:22:30.857 [2024-04-25 03:23:05.280968] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0ce60 is same with the state(5) to be set 00:22:30.857 [2024-04-25 03:23:05.282004] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0ce60 is same with the state(5) to be set 00:22:30.858 [2024-04-25 03:23:05.282024] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0ce60 is same with the state(5) to be set 00:22:30.858 [2024-04-25 03:23:05.282037] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0ce60 is same with the state(5) to be set 00:22:30.858 [2024-04-25 03:23:05.282049] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0ce60 is same with the state(5) to be set 00:22:30.858 [2024-04-25 03:23:05.282061] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0ce60 is same with the state(5) to be set 00:22:30.858 [2024-04-25 03:23:05.282073] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0ce60 is same with the state(5) to be set 00:22:30.858 [2024-04-25 03:23:05.282085] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0ce60 is same with the state(5) to be set 00:22:30.858 [2024-04-25 03:23:05.282101] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0ce60 is same with the state(5) to be set 00:22:30.858 [2024-04-25 03:23:05.282114] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0ce60 is same with the state(5) to be set 00:22:30.858 [2024-04-25 03:23:05.282127] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0ce60 is same with the state(5) to be set 00:22:30.858 [last message repeated for tqpair=0x1c0ce60 through 2024-04-25 03:23:05.282765] 
00:22:30.858 [2024-04-25 03:23:05.284470] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0f080 is same with the state(5) to be set 00:22:30.858 [last message repeated for tqpair=0x1c0f080 through 2024-04-25 03:23:05.285397] 
00:22:30.859 [2024-04-25 03:23:05.286768] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d2f0 is same with the state(5) to be set 00:22:30.859 [last message repeated for tqpair=0x1c0d2f0] 00:22:30.859 [2024-04-25 03:23:05.287230] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d2f0 is same with the state(5) to be set 00:22:30.859 [2024-04-25 03:23:05.287242] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d2f0 is same with the state(5) to be set 00:22:30.859 [2024-04-25 03:23:05.287254] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d2f0 is same with the state(5) to be set 00:22:30.859 [2024-04-25 03:23:05.287270] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d2f0 is same with the state(5) to be set 00:22:30.859 [2024-04-25 03:23:05.287283] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d2f0 is same with the state(5) to be set 00:22:30.859 [2024-04-25 03:23:05.287295] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d2f0 is same with the state(5) to be set 00:22:30.859 [2024-04-25 03:23:05.287307] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d2f0 is same with the state(5) to be set 00:22:30.859 [2024-04-25 03:23:05.287323] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d2f0 is same with the state(5) to be set 00:22:30.859 [2024-04-25 03:23:05.287335] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d2f0 is same with the state(5) to be set 00:22:30.859 [2024-04-25 03:23:05.287347] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d2f0 is same with the state(5) to be set 00:22:30.859 [2024-04-25 03:23:05.287363] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d2f0 is same with the state(5) to be set 00:22:30.859 [2024-04-25 03:23:05.287378] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d2f0 is same with the state(5) to be set 00:22:30.859 [2024-04-25 03:23:05.287390] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d2f0 is same with the state(5) to be set 00:22:30.859 [2024-04-25 03:23:05.287402] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d2f0 is same with the state(5) to be set 00:22:30.859 [2024-04-25 03:23:05.287416] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d2f0 is same with the state(5) to be set 00:22:30.859 [2024-04-25 03:23:05.287429] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d2f0 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.287441] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d2f0 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.287452] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d2f0 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.287465] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d2f0 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.287480] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d2f0 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.287502] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d2f0 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.287514] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d2f0 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.287530] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d2f0 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.287542] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d2f0 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.287554] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d2f0 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.287567] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d2f0 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.287582] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d2f0 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.287594] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d2f0 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.287605] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d2f0 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.287617] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d2f0 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.287674] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d2f0 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.289218] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d780 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.289253] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d780 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.289268] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d780 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.289281] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d780 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.289294] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d780 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.289307] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d780 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.289326] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d780 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.289339] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d780 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.289352] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d780 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.289364] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d780 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.289377] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d780 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.289389] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d780 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.289401] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d780 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.289413] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d780 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.289426] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d780 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.289438] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d780 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.289450] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d780 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.289462] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d780 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.289474] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d780 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.289487] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d780 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.289499] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d780 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.289511] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d780 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.289524] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d780 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.289536] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d780 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.289548] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d780 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.289560] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d780 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.289573] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d780 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.289600] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d780 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.289612] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d780 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.289624] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d780 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.289671] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d780 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.289694] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d780 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.289716] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d780 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.289743] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d780 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.289765] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d780 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.289787] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d780 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.289810] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d780 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.289825] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d780 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.289837] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d780 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.289850] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d780 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.289862] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d780 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.289880] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d780 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.289892] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d780 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.289904] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d780 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.289916] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d780 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.289929] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d780 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.289941] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d780 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.289954] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d780 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.289966] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d780 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.289978] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d780 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.289990] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d780 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.290002] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d780 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.290014] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d780 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.290026] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d780 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.290039] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d780 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.290051] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d780 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.290063] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d780 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.290075] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d780 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.290087] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d780 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.290100] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d780 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.290130] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d780 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.290142] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d780 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.290154] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d780 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.291780] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9cd30 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.291808] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9cd30 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.291823] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9cd30 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.291835] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9cd30 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.291847] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9cd30 is same with the state(5) to be set 00:22:30.860 [2024-04-25 03:23:05.291859] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9cd30 is same with the state(5) to be set 00:22:30.861 [2024-04-25 03:23:05.291872] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9cd30 is same with the state(5) to be set 00:22:30.861 [2024-04-25 03:23:05.291884] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9cd30 is same with the state(5) to be set 00:22:30.861 [2024-04-25 03:23:05.291897] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9cd30 is same with the state(5) to be set 00:22:30.861 [2024-04-25 03:23:05.291909] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9cd30 is same with the state(5) to be set 00:22:30.861 [2024-04-25 03:23:05.291921] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9cd30 is same with the state(5) to be set 00:22:30.861 [2024-04-25 03:23:05.291948] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9cd30 is same with the state(5) to be set 00:22:30.861 [2024-04-25 03:23:05.291960] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9cd30 is same with the state(5) to be set 00:22:30.861 [2024-04-25 03:23:05.291972] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9cd30 is same with the state(5) to be set 00:22:30.861 [2024-04-25 03:23:05.291984] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9cd30 is same with the state(5) to be set 00:22:30.861 [2024-04-25 03:23:05.291996] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9cd30 is same with the state(5) to be set 00:22:30.861 [2024-04-25 03:23:05.292007] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9cd30 is same with the state(5) to be set 00:22:30.861 [2024-04-25 03:23:05.292020] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9cd30 is same with the state(5) to be set 00:22:30.861 [2024-04-25 03:23:05.292031] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9cd30 is same with the state(5) to be set 00:22:30.861 [2024-04-25 03:23:05.292043] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9cd30 is same with the state(5) to be set 00:22:30.861 [2024-04-25 03:23:05.292054] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9cd30 is same with the state(5) to be set 00:22:30.861 [2024-04-25 03:23:05.292066] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9cd30 is same with the state(5) to be set 00:22:30.861 [2024-04-25 03:23:05.292078] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9cd30 is same with the state(5) to be set 00:22:30.861 [2024-04-25 03:23:05.292090] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9cd30 is same with the state(5) to be set 00:22:30.861 [2024-04-25 03:23:05.292107] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9cd30 is same with the state(5) to be set 00:22:30.861 [2024-04-25 03:23:05.292120] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9cd30 is same with the state(5) to be set 00:22:30.861 [2024-04-25 03:23:05.292132] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9cd30 is same with the state(5) to be set 00:22:30.861 [2024-04-25 03:23:05.292144] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9cd30 is same with the state(5) to be set 00:22:30.861 [2024-04-25 03:23:05.292156] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9cd30 is same with the state(5) to be set 00:22:30.861 [2024-04-25 03:23:05.292169] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9cd30 is same with the state(5) to be set 00:22:30.861 [2024-04-25 03:23:05.292181] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9cd30 is same with the state(5) to be set 00:22:30.861 [2024-04-25 03:23:05.292193] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9cd30 is same with the state(5) to be set 00:22:30.861 [2024-04-25 03:23:05.292205] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9cd30 is same with the state(5) to be set 00:22:30.861 [2024-04-25 03:23:05.292217] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9cd30 is same with the state(5) to be set 00:22:30.861 [2024-04-25 03:23:05.292229] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9cd30 is same with the state(5) to be set 00:22:30.861 [2024-04-25 03:23:05.292241] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9cd30 is same with the state(5) to be set 00:22:30.861 [2024-04-25 03:23:05.292252] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9cd30 is same with the state(5) to be set 00:22:30.861 [2024-04-25 03:23:05.292264] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9cd30 is same with the state(5) to be set 00:22:30.861 [2024-04-25 03:23:05.292276] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9cd30 is same with the state(5) to be set 00:22:30.861 [2024-04-25 03:23:05.292288] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9cd30 is same with the state(5) to be set 00:22:30.861 [2024-04-25 03:23:05.292299] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9cd30 is same with the state(5) to be set 00:22:30.861 [2024-04-25 03:23:05.292311] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9cd30 is same with the state(5) to be set 00:22:30.861 [2024-04-25 03:23:05.292322] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9cd30 is same with the state(5) to be set 00:22:30.861 [2024-04-25 03:23:05.292334] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9cd30 is same with the state(5) to be set 00:22:30.861 [2024-04-25 03:23:05.292346] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9cd30 is same with the state(5) to be set 00:22:30.861 [2024-04-25 03:23:05.292357] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9cd30 is same with the state(5) to be set 00:22:30.861 [2024-04-25 03:23:05.292369] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9cd30 is same with the state(5) to be set 00:22:30.861 [2024-04-25 03:23:05.292381] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9cd30 is same with the state(5) to be set 00:22:30.861 [2024-04-25 03:23:05.292392] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9cd30 is same with the state(5) to be set 00:22:30.861 [2024-04-25 03:23:05.292404] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9cd30 is same with the state(5) to be set 00:22:30.861 [2024-04-25 03:23:05.292416] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9cd30 is same with the state(5) to be set 00:22:30.861 [2024-04-25 03:23:05.292428] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9cd30 is same with the state(5) to be set 00:22:30.861 [2024-04-25 03:23:05.292443] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9cd30 is same with the state(5) to be set 00:22:30.861 [2024-04-25 03:23:05.292455] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9cd30 is same with the state(5) to be set 00:22:30.861 [2024-04-25 03:23:05.292467] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9cd30 is same with the state(5) to be set 00:22:30.861 [2024-04-25 03:23:05.292478] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9cd30 is same with the state(5) to be set 00:22:30.861 [2024-04-25 03:23:05.292490] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9cd30 is same with the state(5) to be set 00:22:30.861 [2024-04-25 03:23:05.292501] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9cd30 is same with the state(5) to be set 00:22:30.861 [2024-04-25 03:23:05.292512] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9cd30 is same with the state(5) to be set 00:22:30.861 [2024-04-25 03:23:05.292524] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9cd30 is same with the state(5) to be set 00:22:30.861 [2024-04-25 03:23:05.292535] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9cd30 is same with the state(5) to be set 00:22:30.861 [2024-04-25 03:23:05.292547] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9cd30 is same with the state(5) to be set 00:22:30.861 [2024-04-25 03:23:05.292558] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9cd30 is same with the state(5) to be set 00:22:30.861 [2024-04-25 03:23:05.293566] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0df70 is same with the state(5) to be set
[last message repeated for tqpair=0x1c0df70 through 2024-04-25 03:23:05.294399]
00:22:30.862 [2024-04-25 03:23:05.296215] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0e890 is same with the state(5) to be set
[last message repeated for tqpair=0x1c0e890 through 2024-04-25 03:23:05.297086]
00:22:30.863 [2024-04-25 03:23:05.299892] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.863 [2024-04-25 03:23:05.299938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.863 [2024-04-25 03:23:05.299957] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.863 [2024-04-25 03:23:05.299973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.863 [2024-04-25 03:23:05.299989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.863
[2024-04-25 03:23:05.300003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.863 [2024-04-25 03:23:05.300018] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.863 [2024-04-25 03:23:05.300032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.863 [2024-04-25 03:23:05.300046] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c1ed0 is same with the state(5) to be set
[the same four ASYNC EVENT REQUEST (cid:0-3) / ABORTED - SQ DELETION pairs, each ending in an nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state *ERROR*, repeat for tqpair=0x17e2960, 0x1983000, 0x18eb6e0, 0x197b6d0, 0x13ac660, 0x17eaa90, 0x17cda20, 0x17bd310 and 0x1852d20, through 2024-04-25 03:23:05.301570]
00:22:30.864 [2024-04-25 03:23:05.302104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.864 [2024-04-25 03:23:05.302130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[the same WRITE / ABORTED - SQ DELETION pair repeats for cid:30-42 (lba:36608-38144, len:128)]
00:22:30.864 [2024-04-25 03:23:05.302559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.864 [2024-04-25 03:23:05.302573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.864 [2024-04-25 03:23:05.302588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.864 [2024-04-25 03:23:05.302603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.864 [2024-04-25 03:23:05.302619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.864 [2024-04-25 03:23:05.302642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.864 [2024-04-25 03:23:05.302659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.864 [2024-04-25 03:23:05.302678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.864 [2024-04-25 03:23:05.302695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.864 [2024-04-25 03:23:05.302709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.864 [2024-04-25 03:23:05.302725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.865 [2024-04-25 03:23:05.302739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.865 [2024-04-25 03:23:05.302755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.865 [2024-04-25 03:23:05.302769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.865 [2024-04-25 03:23:05.302785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.865 [2024-04-25 03:23:05.302799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.865 [2024-04-25 03:23:05.302815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.865 [2024-04-25 03:23:05.302829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.865 [2024-04-25 03:23:05.302844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.865 [2024-04-25 03:23:05.302858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.865 [2024-04-25 03:23:05.302874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.865 [2024-04-25 03:23:05.302888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.865 [2024-04-25 03:23:05.302904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.865 [2024-04-25 03:23:05.302917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.865 
[2024-04-25 03:23:05.302933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.865 [2024-04-25 03:23:05.302947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.865 [2024-04-25 03:23:05.302963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.865 [2024-04-25 03:23:05.302977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.865 [2024-04-25 03:23:05.302993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.865 [2024-04-25 03:23:05.303007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.865 [2024-04-25 03:23:05.303023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.865 [2024-04-25 03:23:05.303038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.865 [2024-04-25 03:23:05.303058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.865 [2024-04-25 03:23:05.303072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.865 [2024-04-25 03:23:05.303088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.865 [2024-04-25 03:23:05.303103] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.865 [2024-04-25 03:23:05.303119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.865 [2024-04-25 03:23:05.303133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.865 [2024-04-25 03:23:05.303149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.865 [2024-04-25 03:23:05.303164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.865 [2024-04-25 03:23:05.303179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.865 [2024-04-25 03:23:05.303193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.865 [2024-04-25 03:23:05.303209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.865 [2024-04-25 03:23:05.303223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.865 [2024-04-25 03:23:05.303240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.865 [2024-04-25 03:23:05.303253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.865 [2024-04-25 03:23:05.303270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 
nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.865 [2024-04-25 03:23:05.303284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.865 [2024-04-25 03:23:05.303301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.865 [2024-04-25 03:23:05.303315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.865 [2024-04-25 03:23:05.303331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.865 [2024-04-25 03:23:05.303345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.865 [2024-04-25 03:23:05.303360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.865 [2024-04-25 03:23:05.303374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.865 [2024-04-25 03:23:05.303390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.865 [2024-04-25 03:23:05.303404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.865 [2024-04-25 03:23:05.303419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.865 [2024-04-25 03:23:05.303437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:30.865 [2024-04-25 03:23:05.303453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.865 [2024-04-25 03:23:05.303468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.865 [2024-04-25 03:23:05.303484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.865 [2024-04-25 03:23:05.303498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.865 [2024-04-25 03:23:05.303513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.865 [2024-04-25 03:23:05.303527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.865 [2024-04-25 03:23:05.303542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.865 [2024-04-25 03:23:05.303557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.865 [2024-04-25 03:23:05.303573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.865 [2024-04-25 03:23:05.303587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.865 [2024-04-25 03:23:05.303604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.865 [2024-04-25 03:23:05.303618] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.865 [2024-04-25 03:23:05.303642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.865 [2024-04-25 03:23:05.303658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.865 [2024-04-25 03:23:05.303674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.865 [2024-04-25 03:23:05.303688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.865 [2024-04-25 03:23:05.303704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.865 [2024-04-25 03:23:05.303718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.865 [2024-04-25 03:23:05.303734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.865 [2024-04-25 03:23:05.303749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.865 [2024-04-25 03:23:05.303765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.865 [2024-04-25 03:23:05.303779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.865 [2024-04-25 03:23:05.303795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.865 [2024-04-25 03:23:05.303809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.865 [2024-04-25 03:23:05.303829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.865 [2024-04-25 03:23:05.303846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.865 [2024-04-25 03:23:05.303863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.865 [2024-04-25 03:23:05.303888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.865 [2024-04-25 03:23:05.303904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.865 [2024-04-25 03:23:05.303917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.865 [2024-04-25 03:23:05.303933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.865 [2024-04-25 03:23:05.303947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.866 [2024-04-25 03:23:05.303963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.866 [2024-04-25 03:23:05.303977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:30.866 [2024-04-25 03:23:05.303993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.866 [2024-04-25 03:23:05.304006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.866 [2024-04-25 03:23:05.304022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.866 [2024-04-25 03:23:05.304037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.866 [2024-04-25 03:23:05.304052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.866 [2024-04-25 03:23:05.304066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.866 [2024-04-25 03:23:05.304082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.866 [2024-04-25 03:23:05.304096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.866 [2024-04-25 03:23:05.304138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:30.866 [2024-04-25 03:23:05.304223] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x196d4e0 was disconnected and freed. reset controller. 
00:22:30.866 [2024-04-25 03:23:05.304414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.866 [2024-04-25 03:23:05.304438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.866 [2024-04-25 03:23:05.304461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.866 [2024-04-25 03:23:05.304483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.866 [2024-04-25 03:23:05.304501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.866 [2024-04-25 03:23:05.304520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.866 [2024-04-25 03:23:05.304539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.866 [2024-04-25 03:23:05.304554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.866 [2024-04-25 03:23:05.304572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.866 [2024-04-25 03:23:05.304587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.866 [2024-04-25 03:23:05.304603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.866 [2024-04-25 03:23:05.304617] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.866 [2024-04-25 03:23:05.304642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.866 [2024-04-25 03:23:05.304659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.866 [2024-04-25 03:23:05.304679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.866 [2024-04-25 03:23:05.304694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.866 [2024-04-25 03:23:05.304710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.866 [2024-04-25 03:23:05.304724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.866 [2024-04-25 03:23:05.304741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.866 [2024-04-25 03:23:05.304756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.866 [2024-04-25 03:23:05.304772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.866 [2024-04-25 03:23:05.304786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.866 [2024-04-25 03:23:05.304803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.866 [2024-04-25 03:23:05.304826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.866 [2024-04-25 03:23:05.304843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.866 [2024-04-25 03:23:05.304859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.866 [2024-04-25 03:23:05.304875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.866 [2024-04-25 03:23:05.304900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.866 [2024-04-25 03:23:05.304916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.866 [2024-04-25 03:23:05.304930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.866 [2024-04-25 03:23:05.304946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.866 [2024-04-25 03:23:05.304964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.866 [2024-04-25 03:23:05.304981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.866 [2024-04-25 03:23:05.304995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:30.866 [2024-04-25 03:23:05.305011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.866 [2024-04-25 03:23:05.305025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.866 [2024-04-25 03:23:05.305040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.866 [2024-04-25 03:23:05.305054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.866 [2024-04-25 03:23:05.305070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.866 [2024-04-25 03:23:05.305084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.866 [2024-04-25 03:23:05.305100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.866 [2024-04-25 03:23:05.305114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.866 [2024-04-25 03:23:05.305130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.866 [2024-04-25 03:23:05.305144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.866 [2024-04-25 03:23:05.305159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.866 
00:22:30.866 [2024-04-25 03:23:05.305174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:30.866 [2024-04-25 03:23:05.305189 to 03:23:05.306442] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: WRITE sqid:1 cid:22 through cid:62 nsid:1 lba:27392 through lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (41 identical command/completion pairs condensed)
00:22:30.867 [2024-04-25 03:23:05.306528] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x17b8d00 was disconnected and freed. reset controller.
00:22:30.867 [2024-04-25 03:23:05.306670 to 03:23:05.306858] nvme_qpair.c: *NOTICE*: WRITE sqid:1 cid:58 through cid:63 nsid:1 lba:23808 through lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (6 identical command/completion pairs condensed)
00:22:30.868 [2024-04-25 03:23:05.306873 to 03:23:05.308703] nvme_qpair.c: *NOTICE*: READ sqid:1 cid:0 through cid:57 nsid:1 lba:16384 through lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (58 identical command/completion pairs condensed)
00:22:30.869 [2024-04-25 03:23:05.308787] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x17bb530 was disconnected and freed. reset controller.
00:22:30.869 [2024-04-25 03:23:05.312743] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:22:30.869 [2024-04-25 03:23:05.312806] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c1ed0 (9): Bad file descriptor 00:22:30.869 [2024-04-25 03:23:05.312838] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e2960 (9): Bad file descriptor 00:22:30.869 [2024-04-25 03:23:05.312870] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1983000 (9): Bad file descriptor 00:22:30.869 [2024-04-25 03:23:05.312900] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18eb6e0 (9): Bad file descriptor 00:22:30.869 [2024-04-25 03:23:05.312926] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x197b6d0 (9): Bad file descriptor 00:22:30.869 [2024-04-25 03:23:05.312955] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13ac660 (9): Bad file descriptor 00:22:30.869 [2024-04-25 03:23:05.312983] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17eaa90 (9): Bad file descriptor 00:22:30.869 [2024-04-25 03:23:05.313013] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17cda20 (9): Bad file descriptor 00:22:30.869 [2024-04-25 03:23:05.313039] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17bd310 (9): Bad file descriptor 00:22:30.869 [2024-04-25 03:23:05.313066] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1852d20 (9): Bad file descriptor 00:22:30.869 [2024-04-25 03:23:05.313604] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:22:30.869 [2024-04-25 03:23:05.314619] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected 
PDU type 0x00 00:22:30.869 [2024-04-25 03:23:05.314988] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:30.869 [2024-04-25 03:23:05.315018] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:22:30.869 [2024-04-25 03:23:05.315282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:30.869 [2024-04-25 03:23:05.315473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:30.869 [2024-04-25 03:23:05.315499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c1ed0 with addr=10.0.0.2, port=4420 00:22:30.869 [2024-04-25 03:23:05.315519] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c1ed0 is same with the state(5) to be set 00:22:30.869 [2024-04-25 03:23:05.315794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:30.869 [2024-04-25 03:23:05.315992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:30.869 [2024-04-25 03:23:05.316018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18eb6e0 with addr=10.0.0.2, port=4420 00:22:30.869 [2024-04-25 03:23:05.316036] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18eb6e0 is same with the state(5) to be set 00:22:30.869 [2024-04-25 03:23:05.316118] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:30.869 [2024-04-25 03:23:05.316191] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:30.869 [2024-04-25 03:23:05.316281] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:30.869 [2024-04-25 03:23:05.316348] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:30.869 [2024-04-25 03:23:05.316616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:30.869 
[2024-04-25 03:23:05.316797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:30.869 [2024-04-25 03:23:05.316833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17cda20 with addr=10.0.0.2, port=4420 00:22:30.869 [2024-04-25 03:23:05.316851] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cda20 is same with the state(5) to be set 00:22:30.870 [2024-04-25 03:23:05.316871] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c1ed0 (9): Bad file descriptor 00:22:30.870 [2024-04-25 03:23:05.316891] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18eb6e0 (9): Bad file descriptor 00:22:30.870 [2024-04-25 03:23:05.316952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.870 [2024-04-25 03:23:05.316975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.870 [2024-04-25 03:23:05.317004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.870 [2024-04-25 03:23:05.317020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.870 [2024-04-25 03:23:05.317037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.870 [2024-04-25 03:23:05.317052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.870 [2024-04-25 03:23:05.317068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:30.870 [2024-04-25 03:23:05.317083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.870 [2024-04-25 03:23:05.317099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.870 [2024-04-25 03:23:05.317113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.870 [2024-04-25 03:23:05.317129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.870 [2024-04-25 03:23:05.317144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.870 [2024-04-25 03:23:05.317160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.870 [2024-04-25 03:23:05.317174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.870 [2024-04-25 03:23:05.317190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.870 [2024-04-25 03:23:05.317205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.870 [2024-04-25 03:23:05.317221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.870 [2024-04-25 03:23:05.317235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.870 [2024-04-25 03:23:05.317251] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.870 [2024-04-25 03:23:05.317265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.870 [2024-04-25 03:23:05.317282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.870 [2024-04-25 03:23:05.317297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.870 [2024-04-25 03:23:05.317318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.870 [2024-04-25 03:23:05.317334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.870 [2024-04-25 03:23:05.317350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.870 [2024-04-25 03:23:05.317364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.870 [2024-04-25 03:23:05.317380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.870 [2024-04-25 03:23:05.317394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.870 [2024-04-25 03:23:05.317410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.870 [2024-04-25 03:23:05.317425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.870 [2024-04-25 03:23:05.317441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.870 [2024-04-25 03:23:05.317455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.870 [2024-04-25 03:23:05.317471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.870 [2024-04-25 03:23:05.317485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.870 [2024-04-25 03:23:05.317502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.870 [2024-04-25 03:23:05.317516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.870 [2024-04-25 03:23:05.317532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.870 [2024-04-25 03:23:05.317547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.870 [2024-04-25 03:23:05.317563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.870 [2024-04-25 03:23:05.317577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.870 [2024-04-25 03:23:05.317593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:30.870 [2024-04-25 03:23:05.317607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.870 [2024-04-25 03:23:05.317623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.870 [2024-04-25 03:23:05.317646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.870 [2024-04-25 03:23:05.317663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.870 [2024-04-25 03:23:05.317678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.870 [2024-04-25 03:23:05.317694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.870 [2024-04-25 03:23:05.317713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.870 [2024-04-25 03:23:05.317730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.870 [2024-04-25 03:23:05.317744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.870 [2024-04-25 03:23:05.317761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.870 [2024-04-25 03:23:05.317775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.870 [2024-04-25 03:23:05.317791] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.870 [2024-04-25 03:23:05.317806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.870 [2024-04-25 03:23:05.317822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.870 [2024-04-25 03:23:05.317835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.870 [2024-04-25 03:23:05.317852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.870 [2024-04-25 03:23:05.317866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.870 [2024-04-25 03:23:05.317882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.870 [2024-04-25 03:23:05.317896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.870 [2024-04-25 03:23:05.317911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.870 [2024-04-25 03:23:05.317926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.870 [2024-04-25 03:23:05.317942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.870 [2024-04-25 03:23:05.317956] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.870 [2024-04-25 03:23:05.317974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.870 [2024-04-25 03:23:05.317988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.870 [2024-04-25 03:23:05.318004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.870 [2024-04-25 03:23:05.318018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.870 [2024-04-25 03:23:05.318034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.870 [2024-04-25 03:23:05.318048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.870 [2024-04-25 03:23:05.318064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.870 [2024-04-25 03:23:05.318078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.870 [2024-04-25 03:23:05.318099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.870 [2024-04-25 03:23:05.318115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.870 [2024-04-25 03:23:05.318131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.871 [2024-04-25 03:23:05.318146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.871 [2024-04-25 03:23:05.318161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.871 [2024-04-25 03:23:05.318176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.871 [2024-04-25 03:23:05.318192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.871 [2024-04-25 03:23:05.318206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.871 [2024-04-25 03:23:05.318223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.871 [2024-04-25 03:23:05.318237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.871 [2024-04-25 03:23:05.318253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.871 [2024-04-25 03:23:05.318268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.871 [2024-04-25 03:23:05.318284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.871 [2024-04-25 03:23:05.318299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.871 [2024-04-25 
03:23:05.318315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.871 [2024-04-25 03:23:05.318329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.871 [2024-04-25 03:23:05.318345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.871 [2024-04-25 03:23:05.318359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.871 [2024-04-25 03:23:05.318375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.871 [2024-04-25 03:23:05.318389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.871 [2024-04-25 03:23:05.318405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.871 [2024-04-25 03:23:05.318420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.871 [2024-04-25 03:23:05.318435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.871 [2024-04-25 03:23:05.318450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.871 [2024-04-25 03:23:05.318466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.871 [2024-04-25 03:23:05.318484] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.871 [2024-04-25 03:23:05.318501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.871 [2024-04-25 03:23:05.318516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.871 [2024-04-25 03:23:05.318532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.871 [2024-04-25 03:23:05.318546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.871 [2024-04-25 03:23:05.318563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.871 [2024-04-25 03:23:05.318579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.871 [2024-04-25 03:23:05.318596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.871 [2024-04-25 03:23:05.318611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.871 [2024-04-25 03:23:05.318633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.871 [2024-04-25 03:23:05.318650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.871 [2024-04-25 03:23:05.318667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 
nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.871 [2024-04-25 03:23:05.318682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.871 [2024-04-25 03:23:05.318698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.871 [2024-04-25 03:23:05.318713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.871 [2024-04-25 03:23:05.318729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.871 [2024-04-25 03:23:05.318744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.871 [2024-04-25 03:23:05.318760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.871 [2024-04-25 03:23:05.318776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.871 [2024-04-25 03:23:05.318792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.871 [2024-04-25 03:23:05.318806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.871 [2024-04-25 03:23:05.318823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.871 [2024-04-25 03:23:05.318837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:30.871 [2024-04-25 03:23:05.318853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.871 [2024-04-25 03:23:05.318868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.871 [2024-04-25 03:23:05.318888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.871 [2024-04-25 03:23:05.318904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.871 [2024-04-25 03:23:05.318920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.871 [2024-04-25 03:23:05.318935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.871 [2024-04-25 03:23:05.318951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.871 [2024-04-25 03:23:05.318966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.871 [2024-04-25 03:23:05.318980] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851140 is same with the state(5) to be set 00:22:30.871 [2024-04-25 03:23:05.319064] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1851140 was disconnected and freed. reset controller. 
00:22:30.871 [2024-04-25 03:23:05.319205] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17cda20 (9): Bad file descriptor 00:22:30.871 [2024-04-25 03:23:05.319232] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:22:30.871 [2024-04-25 03:23:05.319247] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:22:30.871 [2024-04-25 03:23:05.319265] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:22:30.871 [2024-04-25 03:23:05.319287] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:22:30.871 [2024-04-25 03:23:05.319302] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:22:30.871 [2024-04-25 03:23:05.319316] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:22:30.871 [2024-04-25 03:23:05.320523] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:30.871 [2024-04-25 03:23:05.320547] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:30.871 [2024-04-25 03:23:05.320564] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:30.871 [2024-04-25 03:23:05.320595] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:22:30.872 [2024-04-25 03:23:05.320613] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:22:30.872 [2024-04-25 03:23:05.320638] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 
00:22:30.872 [2024-04-25 03:23:05.320712] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:30.872 [2024-04-25 03:23:05.320894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:30.872 [2024-04-25 03:23:05.321063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:30.872 [2024-04-25 03:23:05.321088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13ac660 with addr=10.0.0.2, port=4420 00:22:30.872 [2024-04-25 03:23:05.321105] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13ac660 is same with the state(5) to be set 00:22:30.872 [2024-04-25 03:23:05.321449] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13ac660 (9): Bad file descriptor 00:22:30.872 [2024-04-25 03:23:05.321518] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:30.872 [2024-04-25 03:23:05.321537] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:30.872 [2024-04-25 03:23:05.321556] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:30.872 [2024-04-25 03:23:05.321617] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:30.872 [2024-04-25 03:23:05.322910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.872 [2024-04-25 03:23:05.322935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / "ABORTED - SQ DELETION" notice pairs repeat for READ cid:1-63, lba 8320-16256 in steps of 128 ...]
00:22:30.873 [2024-04-25 03:23:05.324903] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18523d0 is same with the state(5) to be set
00:22:30.873 [2024-04-25 03:23:05.326162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:30.873 [2024-04-25 03:23:05.326185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / "ABORTED - SQ DELETION" notice pairs repeat for READ cid:1-48, lba 24704-30720 in steps of 128; log continues ...]
[2024-04-25 03:23:05.327674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.874 [2024-04-25 03:23:05.327691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.875 [2024-04-25 03:23:05.327705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.875 [2024-04-25 03:23:05.327720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.875 [2024-04-25 03:23:05.327734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.875 [2024-04-25 03:23:05.327751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.875 [2024-04-25 03:23:05.327765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.875 [2024-04-25 03:23:05.327781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.875 [2024-04-25 03:23:05.327797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.875 [2024-04-25 03:23:05.327813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.875 [2024-04-25 03:23:05.327828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.875 [2024-04-25 03:23:05.327844] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.875 [2024-04-25 03:23:05.327859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.875 [2024-04-25 03:23:05.327875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.875 [2024-04-25 03:23:05.327889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.875 [2024-04-25 03:23:05.327905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.875 [2024-04-25 03:23:05.327920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.875 [2024-04-25 03:23:05.327936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.875 [2024-04-25 03:23:05.327950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.875 [2024-04-25 03:23:05.327966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.875 [2024-04-25 03:23:05.327980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.875 [2024-04-25 03:23:05.328005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.875 [2024-04-25 03:23:05.328020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.875 [2024-04-25 03:23:05.328039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.875 [2024-04-25 03:23:05.328054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.875 [2024-04-25 03:23:05.328070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.875 [2024-04-25 03:23:05.328085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.875 [2024-04-25 03:23:05.328101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.875 [2024-04-25 03:23:05.328114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.875 [2024-04-25 03:23:05.328130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.875 [2024-04-25 03:23:05.328144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.875 [2024-04-25 03:23:05.328159] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x196c230 is same with the state(5) to be set 00:22:30.875 [2024-04-25 03:23:05.329397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.875 [2024-04-25 03:23:05.329420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:30.875 [2024-04-25 03:23:05.329441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.875 [2024-04-25 03:23:05.329458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.875 [2024-04-25 03:23:05.329475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.875 [2024-04-25 03:23:05.329489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.875 [2024-04-25 03:23:05.329505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.875 [2024-04-25 03:23:05.329519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.875 [2024-04-25 03:23:05.329535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.875 [2024-04-25 03:23:05.329548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.875 [2024-04-25 03:23:05.329564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.875 [2024-04-25 03:23:05.329578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.875 [2024-04-25 03:23:05.329594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.875 [2024-04-25 03:23:05.329608] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.875 [2024-04-25 03:23:05.329624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.875 [2024-04-25 03:23:05.329647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.875 [2024-04-25 03:23:05.329669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.875 [2024-04-25 03:23:05.329684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.875 [2024-04-25 03:23:05.329700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.875 [2024-04-25 03:23:05.329715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.875 [2024-04-25 03:23:05.329730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.875 [2024-04-25 03:23:05.329745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.875 [2024-04-25 03:23:05.329762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.875 [2024-04-25 03:23:05.329777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.875 [2024-04-25 03:23:05.329794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 
nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.875 [2024-04-25 03:23:05.329808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.875 [2024-04-25 03:23:05.329824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.875 [2024-04-25 03:23:05.329838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.875 [2024-04-25 03:23:05.329854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.875 [2024-04-25 03:23:05.329868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.875 [2024-04-25 03:23:05.329884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.875 [2024-04-25 03:23:05.329897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.875 [2024-04-25 03:23:05.329913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.875 [2024-04-25 03:23:05.329927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.875 [2024-04-25 03:23:05.329943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.875 [2024-04-25 03:23:05.329956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:30.875 [2024-04-25 03:23:05.329972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.875 [2024-04-25 03:23:05.329987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.875 [2024-04-25 03:23:05.330002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.875 [2024-04-25 03:23:05.330016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.875 [2024-04-25 03:23:05.330032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.875 [2024-04-25 03:23:05.330050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.876 [2024-04-25 03:23:05.330066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.876 [2024-04-25 03:23:05.330081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.876 [2024-04-25 03:23:05.330097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.876 [2024-04-25 03:23:05.330112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.876 [2024-04-25 03:23:05.330128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.876 [2024-04-25 03:23:05.330143] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.876 [2024-04-25 03:23:05.330159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.876 [2024-04-25 03:23:05.330173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.876 [2024-04-25 03:23:05.330189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.876 [2024-04-25 03:23:05.330204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.876 [2024-04-25 03:23:05.330220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.876 [2024-04-25 03:23:05.330234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.876 [2024-04-25 03:23:05.330250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.876 [2024-04-25 03:23:05.330265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.876 [2024-04-25 03:23:05.330281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.876 [2024-04-25 03:23:05.330295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.876 [2024-04-25 03:23:05.330310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.876 [2024-04-25 03:23:05.330325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.876 [2024-04-25 03:23:05.330340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.876 [2024-04-25 03:23:05.330355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.876 [2024-04-25 03:23:05.330370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.876 [2024-04-25 03:23:05.330384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.876 [2024-04-25 03:23:05.330400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.876 [2024-04-25 03:23:05.330414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.876 [2024-04-25 03:23:05.330434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.876 [2024-04-25 03:23:05.330449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.876 [2024-04-25 03:23:05.330466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.876 [2024-04-25 03:23:05.330480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:30.876 [2024-04-25 03:23:05.330496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.876 [2024-04-25 03:23:05.330511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.876 [2024-04-25 03:23:05.330526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.876 [2024-04-25 03:23:05.330540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.876 [2024-04-25 03:23:05.330556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.876 [2024-04-25 03:23:05.330570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.876 [2024-04-25 03:23:05.330586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.876 [2024-04-25 03:23:05.330600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.876 [2024-04-25 03:23:05.330616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.876 [2024-04-25 03:23:05.330636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.876 [2024-04-25 03:23:05.330654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.876 [2024-04-25 
03:23:05.330668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.876 [2024-04-25 03:23:05.330684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.876 [2024-04-25 03:23:05.330698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.876 [2024-04-25 03:23:05.330714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.876 [2024-04-25 03:23:05.330728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.876 [2024-04-25 03:23:05.330744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.876 [2024-04-25 03:23:05.330758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.876 [2024-04-25 03:23:05.330774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.876 [2024-04-25 03:23:05.330795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.876 [2024-04-25 03:23:05.330812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.876 [2024-04-25 03:23:05.330830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.876 [2024-04-25 03:23:05.330847] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.876 [2024-04-25 03:23:05.330861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.876 [2024-04-25 03:23:05.330877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.876 [2024-04-25 03:23:05.330892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.876 [2024-04-25 03:23:05.330907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.876 [2024-04-25 03:23:05.330922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.876 [2024-04-25 03:23:05.330938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.876 [2024-04-25 03:23:05.330952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.876 [2024-04-25 03:23:05.330968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.876 [2024-04-25 03:23:05.330982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.876 [2024-04-25 03:23:05.330998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.876 [2024-04-25 03:23:05.331012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.876 [2024-04-25 03:23:05.331027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.876 [2024-04-25 03:23:05.331042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.876 [2024-04-25 03:23:05.331057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.877 [2024-04-25 03:23:05.331071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.877 [2024-04-25 03:23:05.331087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.877 [2024-04-25 03:23:05.331102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.877 [2024-04-25 03:23:05.331118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.877 [2024-04-25 03:23:05.331132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.877 [2024-04-25 03:23:05.331147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.877 [2024-04-25 03:23:05.331161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.877 [2024-04-25 03:23:05.331178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.877 
00:22:30.877 [2024-04-25 03:23:05.331192 - 03:23:05.331384] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:58-63 nsid:1 lba:23808-24448 (step 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 -- each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (repeated for each cid; identical entries condensed)
00:22:30.877 [2024-04-25 03:23:05.331400] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x196e990 is same with the state(5) to be set
00:22:30.877 [2024-04-25 03:23:05.332641 - 03:23:05.334617] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:8192-16256 (step 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 -- each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (repeated for each cid; identical entries condensed)
00:22:30.878 [2024-04-25 03:23:05.334636] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b7850 is same with the state(5) to be set
00:22:30.880 [2024-04-25 03:23:05.335882 - 03:23:05.337351] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:0-46 nsid:1 lba:24576-30464 (step 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 -- each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (repeated for each cid; identical entries condensed)
00:22:30.880 [2024-04-25 03:23:05.337367] nvme_qpair.c: 243:nvme_io_qpair_print_command:
*NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.880 [2024-04-25 03:23:05.337381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.880 [2024-04-25 03:23:05.337397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.880 [2024-04-25 03:23:05.337411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.880 [2024-04-25 03:23:05.337428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.880 [2024-04-25 03:23:05.337442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.880 [2024-04-25 03:23:05.337458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.880 [2024-04-25 03:23:05.337472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.880 [2024-04-25 03:23:05.337487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.880 [2024-04-25 03:23:05.337502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.880 [2024-04-25 03:23:05.337518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.880 [2024-04-25 03:23:05.337532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.880 [2024-04-25 03:23:05.337547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.880 [2024-04-25 03:23:05.337561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.880 [2024-04-25 03:23:05.337576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.880 [2024-04-25 03:23:05.337594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.880 [2024-04-25 03:23:05.337610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.880 [2024-04-25 03:23:05.337625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.880 [2024-04-25 03:23:05.337652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.880 [2024-04-25 03:23:05.337667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.880 [2024-04-25 03:23:05.337683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.880 [2024-04-25 03:23:05.337696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.880 [2024-04-25 03:23:05.337712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.880 
[2024-04-25 03:23:05.337726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.880 [2024-04-25 03:23:05.337742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.880 [2024-04-25 03:23:05.337756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.880 [2024-04-25 03:23:05.337771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.880 [2024-04-25 03:23:05.337786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.880 [2024-04-25 03:23:05.337802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.880 [2024-04-25 03:23:05.337816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.880 [2024-04-25 03:23:05.337832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.880 [2024-04-25 03:23:05.337846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.880 [2024-04-25 03:23:05.337861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.880 [2024-04-25 03:23:05.337875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.880 [2024-04-25 03:23:05.337890] nvme_tcp.c: 
322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ba1b0 is same with the state(5) to be set 00:22:30.880 [2024-04-25 03:23:05.339908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.880 [2024-04-25 03:23:05.339935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.880 [2024-04-25 03:23:05.339959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.880 [2024-04-25 03:23:05.339976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.880 [2024-04-25 03:23:05.339993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.880 [2024-04-25 03:23:05.340012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.880 [2024-04-25 03:23:05.340030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.880 [2024-04-25 03:23:05.340045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.880 [2024-04-25 03:23:05.340061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.880 [2024-04-25 03:23:05.340075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.880 [2024-04-25 03:23:05.340091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.880 [2024-04-25 03:23:05.340105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.880 [2024-04-25 03:23:05.340121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.880 [2024-04-25 03:23:05.340136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.880 [2024-04-25 03:23:05.340151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.880 [2024-04-25 03:23:05.340166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.880 [2024-04-25 03:23:05.340182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.880 [2024-04-25 03:23:05.340196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.880 [2024-04-25 03:23:05.340212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.880 [2024-04-25 03:23:05.340227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.880 [2024-04-25 03:23:05.340242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.880 [2024-04-25 03:23:05.340256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.880 
[2024-04-25 03:23:05.340272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.881 [2024-04-25 03:23:05.340286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.881 [2024-04-25 03:23:05.340302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.881 [2024-04-25 03:23:05.340316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.881 [2024-04-25 03:23:05.340332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.881 [2024-04-25 03:23:05.340346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.881 [2024-04-25 03:23:05.340362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.881 [2024-04-25 03:23:05.340376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.881 [2024-04-25 03:23:05.340392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.881 [2024-04-25 03:23:05.340410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.881 [2024-04-25 03:23:05.340427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.881 [2024-04-25 03:23:05.340441] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.881 [2024-04-25 03:23:05.340457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.881 [2024-04-25 03:23:05.340471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.881 [2024-04-25 03:23:05.340487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.881 [2024-04-25 03:23:05.340501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.881 [2024-04-25 03:23:05.340517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.881 [2024-04-25 03:23:05.340531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.881 [2024-04-25 03:23:05.340546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.881 [2024-04-25 03:23:05.340561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.881 [2024-04-25 03:23:05.340577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.881 [2024-04-25 03:23:05.340592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.881 [2024-04-25 03:23:05.340608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 
nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.881 [2024-04-25 03:23:05.340622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.881 [2024-04-25 03:23:05.340645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.881 [2024-04-25 03:23:05.340661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.881 [2024-04-25 03:23:05.340677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.881 [2024-04-25 03:23:05.340692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.881 [2024-04-25 03:23:05.340708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.881 [2024-04-25 03:23:05.340722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.881 [2024-04-25 03:23:05.340739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.881 [2024-04-25 03:23:05.340753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.881 [2024-04-25 03:23:05.340768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.881 [2024-04-25 03:23:05.340782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:30.881 [2024-04-25 03:23:05.340802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.881 [2024-04-25 03:23:05.340817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.881 [2024-04-25 03:23:05.340832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.881 [2024-04-25 03:23:05.340846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.881 [2024-04-25 03:23:05.340862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.881 [2024-04-25 03:23:05.340876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.881 [2024-04-25 03:23:05.340892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.881 [2024-04-25 03:23:05.340905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.881 [2024-04-25 03:23:05.340921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.881 [2024-04-25 03:23:05.340936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.881 [2024-04-25 03:23:05.340951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.881 [2024-04-25 03:23:05.340965] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.881 [2024-04-25 03:23:05.340981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.881 [2024-04-25 03:23:05.340995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.881 [2024-04-25 03:23:05.341010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.881 [2024-04-25 03:23:05.341024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.881 [2024-04-25 03:23:05.341040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.881 [2024-04-25 03:23:05.341054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.881 [2024-04-25 03:23:05.341076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.881 [2024-04-25 03:23:05.341091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.881 [2024-04-25 03:23:05.341107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.881 [2024-04-25 03:23:05.341120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.881 [2024-04-25 03:23:05.341135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.881 [2024-04-25 03:23:05.341149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.881 [2024-04-25 03:23:05.341165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.881 [2024-04-25 03:23:05.341183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.881 [2024-04-25 03:23:05.341199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.881 [2024-04-25 03:23:05.341214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.881 [2024-04-25 03:23:05.341229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.881 [2024-04-25 03:23:05.341243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.881 [2024-04-25 03:23:05.341258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.881 [2024-04-25 03:23:05.341272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.881 [2024-04-25 03:23:05.341288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.881 [2024-04-25 03:23:05.341302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:30.881 [2024-04-25 03:23:05.341319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.881 [2024-04-25 03:23:05.341333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.881 [2024-04-25 03:23:05.341348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.881 [2024-04-25 03:23:05.341362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.881 [2024-04-25 03:23:05.341377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.881 [2024-04-25 03:23:05.341391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.881 [2024-04-25 03:23:05.341406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.881 [2024-04-25 03:23:05.341420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.881 [2024-04-25 03:23:05.341435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.881 [2024-04-25 03:23:05.341449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.881 [2024-04-25 03:23:05.341465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.881 [2024-04-25 
03:23:05.341478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.882 [2024-04-25 03:23:05.341493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.882 [2024-04-25 03:23:05.341507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.882 [2024-04-25 03:23:05.341522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.882 [2024-04-25 03:23:05.341536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.882 [2024-04-25 03:23:05.341556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.882 [2024-04-25 03:23:05.341572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.882 [2024-04-25 03:23:05.341588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.882 [2024-04-25 03:23:05.341602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.882 [2024-04-25 03:23:05.341618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.882 [2024-04-25 03:23:05.341646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.882 [2024-04-25 03:23:05.341664] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.882 [2024-04-25 03:23:05.341678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.882 [2024-04-25 03:23:05.341694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.882 [2024-04-25 03:23:05.341708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.882 [2024-04-25 03:23:05.341724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.882 [2024-04-25 03:23:05.341738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.882 [2024-04-25 03:23:05.341754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.882 [2024-04-25 03:23:05.341768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.882 [2024-04-25 03:23:05.341784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.882 [2024-04-25 03:23:05.341798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.882 [2024-04-25 03:23:05.341814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.882 [2024-04-25 03:23:05.341828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.882 [2024-04-25 03:23:05.341844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.882 [2024-04-25 03:23:05.341858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.882 [2024-04-25 03:23:05.341873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.882 [2024-04-25 03:23:05.341887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.882 [2024-04-25 03:23:05.341902] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1849ad0 is same with the state(5) to be set 00:22:31.142 [2024-04-25 03:23:05.344087] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:22:31.142 [2024-04-25 03:23:05.344122] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:22:31.142 [2024-04-25 03:23:05.344146] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:22:31.142 [2024-04-25 03:23:05.344166] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:22:31.142 [2024-04-25 03:23:05.344283] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:31.142 [2024-04-25 03:23:05.344316] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:22:31.142 [2024-04-25 03:23:05.344414] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:22:31.142 task offset: 36480 on job bdev=Nvme4n1 fails
00:22:31.142
00:22:31.142 Latency(us)
00:22:31.142 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:31.142 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:31.142 Job: Nvme1n1 ended in about 0.89 seconds with error
00:22:31.142 Verification LBA range: start 0x0 length 0x400
00:22:31.142 Nvme1n1 : 0.89 143.43 8.96 71.71 0.00 294163.09 21262.79 278066.82
00:22:31.142 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:31.142 Job: Nvme2n1 ended in about 0.90 seconds with error
00:22:31.142 Verification LBA range: start 0x0 length 0x400
00:22:31.142 Nvme2n1 : 0.90 71.27 4.45 71.27 0.00 434936.79 40972.14 383701.14
00:22:31.142 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:31.142 Job: Nvme3n1 ended in about 0.90 seconds with error
00:22:31.142 Verification LBA range: start 0x0 length 0x400
00:22:31.142 Nvme3n1 : 0.90 213.05 13.32 71.02 0.00 213566.01 21359.88 237677.23
00:22:31.142 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:31.142 Job: Nvme4n1 ended in about 0.88 seconds with error
00:22:31.142 Verification LBA range: start 0x0 length 0x400
00:22:31.142 Nvme4n1 : 0.88 290.17 18.14 72.54 0.00 163335.13 7378.87 222142.77
00:22:31.142 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:31.142 Job: Nvme5n1 ended in about 0.90 seconds with error
00:22:31.142 Verification LBA range: start 0x0 length 0x400
00:22:31.142 Nvme5n1 : 0.90 145.95 9.12 70.76 0.00 268145.52 10000.31 268746.15
00:22:31.142 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:31.142 Job: Nvme6n1 ended in about 0.91 seconds with error
00:22:31.142 Verification LBA range: start 0x0 length 0x400
00:22:31.142 Nvme6n1 : 0.91 70.51 4.41 70.51 0.00 403224.65 39224.51 355739.12
00:22:31.142 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:31.142 Job: Nvme7n1 ended in about 0.88 seconds with error
00:22:31.142 Verification LBA range: start 0x0 length 0x400
00:22:31.142 Nvme7n1 : 0.88 216.19 13.51 72.44 0.00 191627.84 11359.57 257872.02
00:22:31.142 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:31.142 Job: Nvme8n1 ended in about 0.91 seconds with error
00:22:31.142 Verification LBA range: start 0x0 length 0x400
00:22:31.142 Nvme8n1 : 0.91 210.78 13.17 70.26 0.00 193186.32 20971.52 253211.69
00:22:31.142 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:31.142 Job: Nvme9n1 ended in about 0.88 seconds with error
00:22:31.142 Verification LBA range: start 0x0 length 0x400
00:22:31.142 Nvme9n1 : 0.88 144.70 9.04 72.35 0.00 242949.56 12039.21 279620.27
00:22:31.142 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:31.142 Job: Nvme10n1 ended in about 0.91 seconds with error
00:22:31.142 Verification LBA range: start 0x0 length 0x400
00:22:31.142 Nvme10n1 : 0.91 69.95 4.37 69.95 0.00 371090.01 67574.90 352632.23
00:22:31.142 ===================================================================================================================
00:22:31.142 Total : 1575.99 98.50 712.81 0.00 251451.62 7378.87 383701.14
00:22:31.142 [2024-04-25 03:23:05.372073] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:22:31.142 [2024-04-25 03:23:05.372163] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:22:31.142 [2024-04-25 03:23:05.372561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:31.142 [2024-04-25 03:23:05.372781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:31.142 [2024-04-25 03:23:05.372810]
nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17bd310 with addr=10.0.0.2, port=4420 00:22:31.142 [2024-04-25 03:23:05.372831] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17bd310 is same with the state(5) to be set 00:22:31.142 [2024-04-25 03:23:05.373062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:31.142 [2024-04-25 03:23:05.373244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:31.142 [2024-04-25 03:23:05.373270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17eaa90 with addr=10.0.0.2, port=4420 00:22:31.142 [2024-04-25 03:23:05.373287] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17eaa90 is same with the state(5) to be set 00:22:31.142 [2024-04-25 03:23:05.373458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:31.142 [2024-04-25 03:23:05.373638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:31.142 [2024-04-25 03:23:05.373667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x197b6d0 with addr=10.0.0.2, port=4420 00:22:31.142 [2024-04-25 03:23:05.373684] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197b6d0 is same with the state(5) to be set 00:22:31.142 [2024-04-25 03:23:05.373840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:31.142 [2024-04-25 03:23:05.374040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:31.142 [2024-04-25 03:23:05.374065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1983000 with addr=10.0.0.2, port=4420 00:22:31.142 [2024-04-25 03:23:05.374082] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1983000 is same with the state(5) to be set 00:22:31.142 
[2024-04-25 03:23:05.375891] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:22:31.143 [2024-04-25 03:23:05.375929] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:22:31.143 [2024-04-25 03:23:05.375948] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:22:31.143 [2024-04-25 03:23:05.375975] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:31.143 [2024-04-25 03:23:05.376225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:31.143 [2024-04-25 03:23:05.376393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:31.143 [2024-04-25 03:23:05.376420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1852d20 with addr=10.0.0.2, port=4420 00:22:31.143 [2024-04-25 03:23:05.376437] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852d20 is same with the state(5) to be set 00:22:31.143 [2024-04-25 03:23:05.376603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:31.143 [2024-04-25 03:23:05.376799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:31.143 [2024-04-25 03:23:05.376825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e2960 with addr=10.0.0.2, port=4420 00:22:31.143 [2024-04-25 03:23:05.376842] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e2960 is same with the state(5) to be set 00:22:31.143 [2024-04-25 03:23:05.376867] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17bd310 (9): Bad file descriptor 00:22:31.143 [2024-04-25 03:23:05.376891] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17eaa90 (9): Bad file 
descriptor 00:22:31.143 [2024-04-25 03:23:05.376920] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x197b6d0 (9): Bad file descriptor 00:22:31.143 [2024-04-25 03:23:05.376940] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1983000 (9): Bad file descriptor 00:22:31.143 [2024-04-25 03:23:05.376992] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:31.143 [2024-04-25 03:23:05.377016] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:31.143 [2024-04-25 03:23:05.377034] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:31.143 [2024-04-25 03:23:05.377053] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:31.143 [2024-04-25 03:23:05.377289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:31.143 [2024-04-25 03:23:05.377459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:31.143 [2024-04-25 03:23:05.377485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18eb6e0 with addr=10.0.0.2, port=4420 00:22:31.143 [2024-04-25 03:23:05.377502] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18eb6e0 is same with the state(5) to be set 00:22:31.143 [2024-04-25 03:23:05.377662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:31.143 [2024-04-25 03:23:05.377830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:31.143 [2024-04-25 03:23:05.377856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c1ed0 with addr=10.0.0.2, port=4420 00:22:31.143 [2024-04-25 03:23:05.377873] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x17c1ed0 is same with the state(5) to be set 00:22:31.143 [2024-04-25 03:23:05.378035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:31.143 [2024-04-25 03:23:05.378212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:31.143 [2024-04-25 03:23:05.378239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17cda20 with addr=10.0.0.2, port=4420 00:22:31.143 [2024-04-25 03:23:05.378256] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cda20 is same with the state(5) to be set 00:22:31.143 [2024-04-25 03:23:05.378418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:31.143 [2024-04-25 03:23:05.378691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:31.143 [2024-04-25 03:23:05.378719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13ac660 with addr=10.0.0.2, port=4420 00:22:31.143 [2024-04-25 03:23:05.378736] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13ac660 is same with the state(5) to be set 00:22:31.143 [2024-04-25 03:23:05.378755] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1852d20 (9): Bad file descriptor 00:22:31.143 [2024-04-25 03:23:05.378775] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e2960 (9): Bad file descriptor 00:22:31.143 [2024-04-25 03:23:05.378793] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:31.143 [2024-04-25 03:23:05.378807] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:22:31.143 [2024-04-25 03:23:05.378823] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:22:31.143 [2024-04-25 03:23:05.378844] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:22:31.143 [2024-04-25 03:23:05.378858] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:22:31.143 [2024-04-25 03:23:05.378872] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:22:31.143 [2024-04-25 03:23:05.378888] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:22:31.143 [2024-04-25 03:23:05.378908] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:22:31.143 [2024-04-25 03:23:05.378922] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:22:31.143 [2024-04-25 03:23:05.378940] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:22:31.143 [2024-04-25 03:23:05.378955] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:22:31.143 [2024-04-25 03:23:05.378967] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:22:31.143 [2024-04-25 03:23:05.379057] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:31.143 [2024-04-25 03:23:05.379079] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:31.143 [2024-04-25 03:23:05.379092] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:31.143 [2024-04-25 03:23:05.379104] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:31.143 [2024-04-25 03:23:05.379119] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18eb6e0 (9): Bad file descriptor 00:22:31.143 [2024-04-25 03:23:05.379139] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c1ed0 (9): Bad file descriptor 00:22:31.143 [2024-04-25 03:23:05.379158] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17cda20 (9): Bad file descriptor 00:22:31.143 [2024-04-25 03:23:05.379175] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13ac660 (9): Bad file descriptor 00:22:31.143 [2024-04-25 03:23:05.379191] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:22:31.143 [2024-04-25 03:23:05.379204] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:22:31.143 [2024-04-25 03:23:05.379217] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:22:31.143 [2024-04-25 03:23:05.379233] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:22:31.143 [2024-04-25 03:23:05.379247] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:22:31.143 [2024-04-25 03:23:05.379260] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:22:31.143 [2024-04-25 03:23:05.379298] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:31.143 [2024-04-25 03:23:05.379316] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:31.143 [2024-04-25 03:23:05.379330] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:22:31.143 [2024-04-25 03:23:05.379342] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:22:31.143 [2024-04-25 03:23:05.379356] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:22:31.143 [2024-04-25 03:23:05.379372] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:22:31.143 [2024-04-25 03:23:05.379386] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:22:31.143 [2024-04-25 03:23:05.379399] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:22:31.143 [2024-04-25 03:23:05.379414] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:22:31.143 [2024-04-25 03:23:05.379428] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:22:31.143 [2024-04-25 03:23:05.379441] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:22:31.143 [2024-04-25 03:23:05.379461] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:31.143 [2024-04-25 03:23:05.379476] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:31.143 [2024-04-25 03:23:05.379490] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:31.143 [2024-04-25 03:23:05.379528] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:31.143 [2024-04-25 03:23:05.379546] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:31.143 [2024-04-25 03:23:05.379558] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:31.143 [2024-04-25 03:23:05.379570] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:31.402 03:23:05 -- target/shutdown.sh@136 -- # nvmfpid= 00:22:31.402 03:23:05 -- target/shutdown.sh@139 -- # sleep 1 00:22:32.783 03:23:06 -- target/shutdown.sh@142 -- # kill -9 1557630 00:22:32.783 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (1557630) - No such process 00:22:32.783 03:23:06 -- target/shutdown.sh@142 -- # true 00:22:32.783 03:23:06 -- target/shutdown.sh@144 -- # stoptarget 00:22:32.783 03:23:06 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:22:32.783 03:23:06 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:32.783 03:23:06 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:32.783 03:23:06 -- target/shutdown.sh@45 -- # nvmftestfini 00:22:32.783 03:23:06 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:32.783 03:23:06 -- nvmf/common.sh@117 -- # sync 00:22:32.783 03:23:06 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:32.783 03:23:06 -- nvmf/common.sh@120 -- # set +e 00:22:32.783 03:23:06 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:32.783 03:23:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:32.783 rmmod nvme_tcp 00:22:32.783 rmmod nvme_fabrics 00:22:32.783 rmmod nvme_keyring 00:22:32.783 03:23:06 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:32.783 03:23:06 -- nvmf/common.sh@124 -- # set -e 00:22:32.783 03:23:06 -- nvmf/common.sh@125 -- # return 0 00:22:32.783 03:23:06 -- nvmf/common.sh@478 -- # '[' -n '' ']' 
00:22:32.783 03:23:06 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:32.783 03:23:06 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:32.783 03:23:06 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:32.783 03:23:06 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:32.783 03:23:06 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:32.783 03:23:06 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:32.783 03:23:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:32.783 03:23:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:34.742 03:23:08 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:34.742 00:22:34.742 real 0m7.983s 00:22:34.742 user 0m19.685s 00:22:34.742 sys 0m1.568s 00:22:34.742 03:23:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:34.742 03:23:08 -- common/autotest_common.sh@10 -- # set +x 00:22:34.742 ************************************ 00:22:34.742 END TEST nvmf_shutdown_tc3 00:22:34.742 ************************************ 00:22:34.742 03:23:08 -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:22:34.742 00:22:34.742 real 0m28.833s 00:22:34.742 user 1m20.963s 00:22:34.742 sys 0m6.723s 00:22:34.742 03:23:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:34.742 03:23:08 -- common/autotest_common.sh@10 -- # set +x 00:22:34.742 ************************************ 00:22:34.742 END TEST nvmf_shutdown 00:22:34.742 ************************************ 00:22:34.742 03:23:08 -- nvmf/nvmf.sh@84 -- # timing_exit target 00:22:34.742 03:23:08 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:34.742 03:23:08 -- common/autotest_common.sh@10 -- # set +x 00:22:34.742 03:23:09 -- nvmf/nvmf.sh@86 -- # timing_enter host 00:22:34.742 03:23:09 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:34.742 03:23:09 -- common/autotest_common.sh@10 -- # set +x 00:22:34.742 03:23:09 -- nvmf/nvmf.sh@88 -- # [[ 0 -eq 0 ]] 
00:22:34.742 03:23:09 -- nvmf/nvmf.sh@89 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:34.742 03:23:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:34.742 03:23:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:34.742 03:23:09 -- common/autotest_common.sh@10 -- # set +x 00:22:34.742 ************************************ 00:22:34.742 START TEST nvmf_multicontroller 00:22:34.742 ************************************ 00:22:34.742 03:23:09 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:34.742 * Looking for test storage... 00:22:34.742 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:34.742 03:23:09 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:34.742 03:23:09 -- nvmf/common.sh@7 -- # uname -s 00:22:34.742 03:23:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:34.742 03:23:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:34.742 03:23:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:34.742 03:23:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:34.742 03:23:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:34.742 03:23:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:34.742 03:23:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:34.742 03:23:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:34.742 03:23:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:34.742 03:23:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:34.742 03:23:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:34.742 03:23:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:34.742 03:23:09 -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:34.742 03:23:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:34.742 03:23:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:34.742 03:23:09 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:34.742 03:23:09 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:34.742 03:23:09 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:34.742 03:23:09 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:34.742 03:23:09 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:34.742 03:23:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.742 03:23:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.743 03:23:09 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.743 03:23:09 -- paths/export.sh@5 -- # export PATH 00:22:34.743 03:23:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.743 03:23:09 -- nvmf/common.sh@47 -- # : 0 00:22:34.743 03:23:09 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:34.743 03:23:09 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:34.743 03:23:09 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:34.743 03:23:09 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:34.743 03:23:09 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:34.743 03:23:09 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:34.743 03:23:09 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:34.743 03:23:09 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:34.743 03:23:09 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:34.743 03:23:09 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:34.743 03:23:09 -- 
host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:22:34.743 03:23:09 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:22:34.743 03:23:09 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:34.743 03:23:09 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:22:34.743 03:23:09 -- host/multicontroller.sh@23 -- # nvmftestinit 00:22:34.743 03:23:09 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:34.743 03:23:09 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:34.743 03:23:09 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:34.743 03:23:09 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:34.743 03:23:09 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:34.743 03:23:09 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:34.743 03:23:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:34.743 03:23:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:34.743 03:23:09 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:22:34.743 03:23:09 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:22:34.743 03:23:09 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:34.743 03:23:09 -- common/autotest_common.sh@10 -- # set +x 00:22:37.280 03:23:11 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:37.280 03:23:11 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:37.280 03:23:11 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:37.280 03:23:11 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:37.280 03:23:11 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:37.280 03:23:11 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:37.280 03:23:11 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:37.280 03:23:11 -- nvmf/common.sh@295 -- # net_devs=() 00:22:37.281 03:23:11 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:37.281 03:23:11 -- nvmf/common.sh@296 -- # e810=() 00:22:37.281 03:23:11 -- nvmf/common.sh@296 -- # local 
-ga e810 00:22:37.281 03:23:11 -- nvmf/common.sh@297 -- # x722=() 00:22:37.281 03:23:11 -- nvmf/common.sh@297 -- # local -ga x722 00:22:37.281 03:23:11 -- nvmf/common.sh@298 -- # mlx=() 00:22:37.281 03:23:11 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:37.281 03:23:11 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:37.281 03:23:11 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:37.281 03:23:11 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:37.281 03:23:11 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:37.281 03:23:11 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:37.281 03:23:11 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:37.281 03:23:11 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:37.281 03:23:11 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:37.281 03:23:11 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:37.281 03:23:11 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:37.281 03:23:11 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:37.281 03:23:11 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:37.281 03:23:11 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:37.281 03:23:11 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:37.281 03:23:11 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:37.281 03:23:11 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:37.281 03:23:11 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:37.281 03:23:11 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:37.281 03:23:11 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:37.281 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:37.281 03:23:11 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:37.281 03:23:11 -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:37.281 03:23:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:37.281 03:23:11 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:37.281 03:23:11 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:37.281 03:23:11 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:37.281 03:23:11 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:37.281 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:37.281 03:23:11 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:37.281 03:23:11 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:37.281 03:23:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:37.281 03:23:11 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:37.281 03:23:11 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:37.281 03:23:11 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:37.281 03:23:11 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:37.281 03:23:11 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:37.281 03:23:11 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:37.281 03:23:11 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:37.281 03:23:11 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:37.281 03:23:11 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:37.281 03:23:11 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:37.281 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:37.281 03:23:11 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:37.281 03:23:11 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:37.281 03:23:11 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:37.281 03:23:11 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:37.281 03:23:11 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:37.281 03:23:11 -- nvmf/common.sh@389 -- # echo 
'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:37.281 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:37.281 03:23:11 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:37.281 03:23:11 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:22:37.281 03:23:11 -- nvmf/common.sh@403 -- # is_hw=yes 00:22:37.281 03:23:11 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:22:37.281 03:23:11 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:22:37.281 03:23:11 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:22:37.281 03:23:11 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:37.281 03:23:11 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:37.281 03:23:11 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:37.281 03:23:11 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:37.281 03:23:11 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:37.281 03:23:11 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:37.281 03:23:11 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:37.281 03:23:11 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:37.281 03:23:11 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:37.281 03:23:11 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:37.281 03:23:11 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:37.281 03:23:11 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:37.281 03:23:11 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:37.281 03:23:11 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:37.281 03:23:11 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:37.281 03:23:11 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:37.281 03:23:11 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:37.281 03:23:11 -- nvmf/common.sh@261 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:22:37.281 03:23:11 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:37.281 03:23:11 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:37.281 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:37.281 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:22:37.281 00:22:37.281 --- 10.0.0.2 ping statistics --- 00:22:37.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:37.281 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:22:37.281 03:23:11 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:37.281 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:37.281 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:22:37.281 00:22:37.281 --- 10.0.0.1 ping statistics --- 00:22:37.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:37.281 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:22:37.281 03:23:11 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:37.281 03:23:11 -- nvmf/common.sh@411 -- # return 0 00:22:37.281 03:23:11 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:37.281 03:23:11 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:37.281 03:23:11 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:37.281 03:23:11 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:37.281 03:23:11 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:37.281 03:23:11 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:37.281 03:23:11 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:37.281 03:23:11 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:22:37.281 03:23:11 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:37.281 03:23:11 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:37.281 03:23:11 -- common/autotest_common.sh@10 -- # set +x 00:22:37.281 03:23:11 -- nvmf/common.sh@470 -- # nvmfpid=1560160 00:22:37.281 
03:23:11 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:37.281 03:23:11 -- nvmf/common.sh@471 -- # waitforlisten 1560160 00:22:37.281 03:23:11 -- common/autotest_common.sh@817 -- # '[' -z 1560160 ']' 00:22:37.281 03:23:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:37.282 03:23:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:37.282 03:23:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:37.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:37.282 03:23:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:37.282 03:23:11 -- common/autotest_common.sh@10 -- # set +x 00:22:37.282 [2024-04-25 03:23:11.509914] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:22:37.282 [2024-04-25 03:23:11.510004] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:37.282 EAL: No free 2048 kB hugepages reported on node 1 00:22:37.282 [2024-04-25 03:23:11.581434] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:37.282 [2024-04-25 03:23:11.697683] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:37.282 [2024-04-25 03:23:11.697733] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:37.282 [2024-04-25 03:23:11.697756] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:37.282 [2024-04-25 03:23:11.697767] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:22:37.282 [2024-04-25 03:23:11.697778] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:37.282 [2024-04-25 03:23:11.697840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:37.282 [2024-04-25 03:23:11.697901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:37.282 [2024-04-25 03:23:11.697904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:38.218 03:23:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:38.218 03:23:12 -- common/autotest_common.sh@850 -- # return 0 00:22:38.218 03:23:12 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:38.218 03:23:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:38.218 03:23:12 -- common/autotest_common.sh@10 -- # set +x 00:22:38.218 03:23:12 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:38.218 03:23:12 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:38.218 03:23:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.218 03:23:12 -- common/autotest_common.sh@10 -- # set +x 00:22:38.218 [2024-04-25 03:23:12.510584] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:38.218 03:23:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.218 03:23:12 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:38.218 03:23:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.218 03:23:12 -- common/autotest_common.sh@10 -- # set +x 00:22:38.218 Malloc0 00:22:38.218 03:23:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.218 03:23:12 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:38.218 03:23:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.218 03:23:12 -- common/autotest_common.sh@10 -- # set +x 00:22:38.218 
03:23:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.218 03:23:12 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:38.218 03:23:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.218 03:23:12 -- common/autotest_common.sh@10 -- # set +x 00:22:38.218 03:23:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.218 03:23:12 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:38.218 03:23:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.218 03:23:12 -- common/autotest_common.sh@10 -- # set +x 00:22:38.218 [2024-04-25 03:23:12.570752] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:38.218 03:23:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.218 03:23:12 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:38.218 03:23:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.218 03:23:12 -- common/autotest_common.sh@10 -- # set +x 00:22:38.218 [2024-04-25 03:23:12.578649] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:38.218 03:23:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.218 03:23:12 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:38.218 03:23:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.218 03:23:12 -- common/autotest_common.sh@10 -- # set +x 00:22:38.218 Malloc1 00:22:38.218 03:23:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.218 03:23:12 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:22:38.218 03:23:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.218 03:23:12 -- common/autotest_common.sh@10 -- # 
set +x 00:22:38.218 03:23:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.218 03:23:12 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:22:38.218 03:23:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.218 03:23:12 -- common/autotest_common.sh@10 -- # set +x 00:22:38.218 03:23:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.218 03:23:12 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:38.218 03:23:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.218 03:23:12 -- common/autotest_common.sh@10 -- # set +x 00:22:38.218 03:23:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.218 03:23:12 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:22:38.218 03:23:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.218 03:23:12 -- common/autotest_common.sh@10 -- # set +x 00:22:38.218 03:23:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.218 03:23:12 -- host/multicontroller.sh@44 -- # bdevperf_pid=1560312 00:22:38.218 03:23:12 -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:22:38.218 03:23:12 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:38.218 03:23:12 -- host/multicontroller.sh@47 -- # waitforlisten 1560312 /var/tmp/bdevperf.sock 00:22:38.218 03:23:12 -- common/autotest_common.sh@817 -- # '[' -z 1560312 ']' 00:22:38.218 03:23:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:38.218 03:23:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:38.218 03:23:12 -- 
common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:38.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:38.218 03:23:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:38.218 03:23:12 -- common/autotest_common.sh@10 -- # set +x 00:22:38.476 03:23:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:38.477 03:23:12 -- common/autotest_common.sh@850 -- # return 0 00:22:38.477 03:23:12 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:22:38.477 03:23:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.477 03:23:12 -- common/autotest_common.sh@10 -- # set +x 00:22:38.733 NVMe0n1 00:22:38.733 03:23:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.734 03:23:13 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:38.734 03:23:13 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:22:38.734 03:23:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.734 03:23:13 -- common/autotest_common.sh@10 -- # set +x 00:22:38.734 03:23:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.734 1 00:22:38.734 03:23:13 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:22:38.734 03:23:13 -- common/autotest_common.sh@638 -- # local es=0 00:22:38.734 03:23:13 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:22:38.734 03:23:13 -- 
common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:22:38.734 03:23:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:38.734 03:23:13 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:22:38.734 03:23:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:38.734 03:23:13 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:22:38.734 03:23:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.734 03:23:13 -- common/autotest_common.sh@10 -- # set +x 00:22:38.734 request: 00:22:38.734 { 00:22:38.734 "name": "NVMe0", 00:22:38.734 "trtype": "tcp", 00:22:38.734 "traddr": "10.0.0.2", 00:22:38.734 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:22:38.734 "hostaddr": "10.0.0.2", 00:22:38.734 "hostsvcid": "60000", 00:22:38.734 "adrfam": "ipv4", 00:22:38.734 "trsvcid": "4420", 00:22:38.734 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:38.734 "method": "bdev_nvme_attach_controller", 00:22:38.734 "req_id": 1 00:22:38.734 } 00:22:38.734 Got JSON-RPC error response 00:22:38.734 response: 00:22:38.734 { 00:22:38.734 "code": -114, 00:22:38.734 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:22:38.734 } 00:22:38.734 03:23:13 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:22:38.734 03:23:13 -- common/autotest_common.sh@641 -- # es=1 00:22:38.734 03:23:13 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:22:38.734 03:23:13 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:22:38.734 03:23:13 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:22:38.734 03:23:13 -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:22:38.734 03:23:13 -- 
common/autotest_common.sh@638 -- # local es=0 00:22:38.734 03:23:13 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:22:38.734 03:23:13 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:22:38.734 03:23:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:38.734 03:23:13 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:22:38.734 03:23:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:38.734 03:23:13 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:22:38.734 03:23:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.734 03:23:13 -- common/autotest_common.sh@10 -- # set +x 00:22:38.734 request: 00:22:38.734 { 00:22:38.734 "name": "NVMe0", 00:22:38.734 "trtype": "tcp", 00:22:38.734 "traddr": "10.0.0.2", 00:22:38.734 "hostaddr": "10.0.0.2", 00:22:38.734 "hostsvcid": "60000", 00:22:38.734 "adrfam": "ipv4", 00:22:38.734 "trsvcid": "4420", 00:22:38.734 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:38.734 "method": "bdev_nvme_attach_controller", 00:22:38.734 "req_id": 1 00:22:38.734 } 00:22:38.734 Got JSON-RPC error response 00:22:38.734 response: 00:22:38.734 { 00:22:38.734 "code": -114, 00:22:38.734 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:22:38.734 } 00:22:38.734 03:23:13 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:22:38.734 03:23:13 -- common/autotest_common.sh@641 -- # es=1 00:22:38.734 03:23:13 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:22:38.734 03:23:13 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:22:38.734 03:23:13 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:22:38.734 03:23:13 -- 
host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:22:38.734 03:23:13 -- common/autotest_common.sh@638 -- # local es=0 00:22:38.734 03:23:13 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:22:38.734 03:23:13 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:22:38.734 03:23:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:38.734 03:23:13 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:22:38.734 03:23:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:38.734 03:23:13 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:22:38.734 03:23:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.734 03:23:13 -- common/autotest_common.sh@10 -- # set +x 00:22:38.991 request: 00:22:38.991 { 00:22:38.991 "name": "NVMe0", 00:22:38.992 "trtype": "tcp", 00:22:38.992 "traddr": "10.0.0.2", 00:22:38.992 "hostaddr": "10.0.0.2", 00:22:38.992 "hostsvcid": "60000", 00:22:38.992 "adrfam": "ipv4", 00:22:38.992 "trsvcid": "4420", 00:22:38.992 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:38.992 "multipath": "disable", 00:22:38.992 "method": "bdev_nvme_attach_controller", 00:22:38.992 "req_id": 1 00:22:38.992 } 00:22:38.992 Got JSON-RPC error response 00:22:38.992 response: 00:22:38.992 { 00:22:38.992 "code": -114, 00:22:38.992 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:22:38.992 } 00:22:38.992 03:23:13 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:22:38.992 03:23:13 -- 
common/autotest_common.sh@641 -- # es=1 00:22:38.992 03:23:13 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:22:38.992 03:23:13 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:22:38.992 03:23:13 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:22:38.992 03:23:13 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:22:38.992 03:23:13 -- common/autotest_common.sh@638 -- # local es=0 00:22:38.992 03:23:13 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:22:38.992 03:23:13 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:22:38.992 03:23:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:38.992 03:23:13 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:22:38.992 03:23:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:38.992 03:23:13 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:22:38.992 03:23:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.992 03:23:13 -- common/autotest_common.sh@10 -- # set +x 00:22:38.992 request: 00:22:38.992 { 00:22:38.992 "name": "NVMe0", 00:22:38.992 "trtype": "tcp", 00:22:38.992 "traddr": "10.0.0.2", 00:22:38.992 "hostaddr": "10.0.0.2", 00:22:38.992 "hostsvcid": "60000", 00:22:38.992 "adrfam": "ipv4", 00:22:38.992 "trsvcid": "4420", 00:22:38.992 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:38.992 "multipath": "failover", 00:22:38.992 "method": "bdev_nvme_attach_controller", 00:22:38.992 "req_id": 1 00:22:38.992 } 00:22:38.992 Got JSON-RPC error response 
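Each `NOT`-wrapped attach above fails with JSON-RPC code -114 because a controller named `NVMe0` is already registered for that network path. A toy illustration of that duplicate-name rejection (the registry class and method are hypothetical, not SPDK internals):

```python
EEXIST_RPC = -114  # error code seen in the JSON-RPC responses above

class ControllerRegistry:
    """Toy registry mimicking the duplicate-name rejection in the log."""
    def __init__(self):
        self._ctrlrs = {}

    def attach(self, name, traddr, trsvcid):
        if name in self._ctrlrs:
            return {"code": EEXIST_RPC,
                    "message": f"A controller named {name} already exists "
                               "with the specified network path"}
        self._ctrlrs[name] = (traddr, trsvcid)
        return {"name": name}
```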
00:22:38.992 response: 00:22:38.992 { 00:22:38.992 "code": -114, 00:22:38.992 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:22:38.992 } 00:22:38.992 03:23:13 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:22:38.992 03:23:13 -- common/autotest_common.sh@641 -- # es=1 00:22:38.992 03:23:13 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:22:38.992 03:23:13 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:22:38.992 03:23:13 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:22:38.992 03:23:13 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:38.992 03:23:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.992 03:23:13 -- common/autotest_common.sh@10 -- # set +x 00:22:38.992 00:22:38.992 03:23:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.992 03:23:13 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:38.992 03:23:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.992 03:23:13 -- common/autotest_common.sh@10 -- # set +x 00:22:38.992 03:23:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.992 03:23:13 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:22:38.992 03:23:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.992 03:23:13 -- common/autotest_common.sh@10 -- # set +x 00:22:38.992 00:22:38.992 03:23:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.992 03:23:13 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:38.992 03:23:13 -- host/multicontroller.sh@90 -- # grep -c NVMe 
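The harness verifies the attach count by piping `bdev_nvme_get_controllers` through `grep -c NVMe` and comparing against 2. The same check over the RPC's JSON output looks like this (the sample controller list is illustrative):

```python
import json

def count_controllers(rpc_output, prefix="NVMe"):
    """Count controller entries whose name starts with prefix."""
    return sum(1 for c in json.loads(rpc_output)
               if c["name"].startswith(prefix))

sample = json.dumps([{"name": "NVMe0"}, {"name": "NVMe1"}])
print(count_controllers(sample))  # → 2
```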
00:22:38.992 03:23:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.992 03:23:13 -- common/autotest_common.sh@10 -- # set +x 00:22:38.992 03:23:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.992 03:23:13 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:22:38.992 03:23:13 -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:40.371 0 00:22:40.371 03:23:14 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:22:40.371 03:23:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:40.371 03:23:14 -- common/autotest_common.sh@10 -- # set +x 00:22:40.371 03:23:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:40.371 03:23:14 -- host/multicontroller.sh@100 -- # killprocess 1560312 00:22:40.371 03:23:14 -- common/autotest_common.sh@936 -- # '[' -z 1560312 ']' 00:22:40.371 03:23:14 -- common/autotest_common.sh@940 -- # kill -0 1560312 00:22:40.371 03:23:14 -- common/autotest_common.sh@941 -- # uname 00:22:40.371 03:23:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:40.371 03:23:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1560312 00:22:40.371 03:23:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:40.371 03:23:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:40.371 03:23:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1560312' 00:22:40.371 killing process with pid 1560312 00:22:40.371 03:23:14 -- common/autotest_common.sh@955 -- # kill 1560312 00:22:40.371 03:23:14 -- common/autotest_common.sh@960 -- # wait 1560312 00:22:40.371 03:23:14 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:40.371 03:23:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:40.371 03:23:14 -- 
common/autotest_common.sh@10 -- # set +x 00:22:40.371 03:23:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:40.371 03:23:14 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:40.371 03:23:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:40.371 03:23:14 -- common/autotest_common.sh@10 -- # set +x 00:22:40.371 03:23:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:40.371 03:23:14 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:22:40.371 03:23:14 -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:40.371 03:23:14 -- common/autotest_common.sh@1598 -- # read -r file 00:22:40.371 03:23:14 -- common/autotest_common.sh@1597 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:22:40.371 03:23:14 -- common/autotest_common.sh@1597 -- # sort -u 00:22:40.371 03:23:14 -- common/autotest_common.sh@1599 -- # cat 00:22:40.371 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:40.371 [2024-04-25 03:23:12.677968] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 
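The bdevperf summary captured in try.txt reports both IOPS and MiB/s for the 4096-byte write workload; the second column is derivable from the first. A quick consistency check using the values from this run:

```python
def iops_to_mib_s(iops, io_size_bytes):
    """Convert an IOPS figure to MiB/s for a fixed IO size."""
    return iops * io_size_bytes / (1024 * 1024)

# 18652.20 IOPS at 4096 B per IO, as reported for NVMe0n1 in this log
print(round(iops_to_mib_s(18652.20, 4096), 2))  # → 72.86
```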
00:22:40.371 [2024-04-25 03:23:12.678068] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1560312 ] 00:22:40.371 EAL: No free 2048 kB hugepages reported on node 1 00:22:40.371 [2024-04-25 03:23:12.739880] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:40.371 [2024-04-25 03:23:12.847991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:40.371 [2024-04-25 03:23:13.389469] bdev.c:4548:bdev_name_add: *ERROR*: Bdev name 2133eccf-b1c0-440a-957a-c199dbc2a77f already exists 00:22:40.371 [2024-04-25 03:23:13.389508] bdev.c:7651:bdev_register: *ERROR*: Unable to add uuid:2133eccf-b1c0-440a-957a-c199dbc2a77f alias for bdev NVMe1n1 00:22:40.371 [2024-04-25 03:23:13.389536] bdev_nvme.c:4272:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:22:40.371 Running I/O for 1 seconds... 00:22:40.371 00:22:40.371 Latency(us) 00:22:40.371 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:40.371 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:22:40.371 NVMe0n1 : 1.01 18652.20 72.86 0.00 0.00 6843.28 3203.98 11505.21 00:22:40.371 =================================================================================================================== 00:22:40.371 Total : 18652.20 72.86 0.00 0.00 6843.28 3203.98 11505.21 00:22:40.371 Received shutdown signal, test time was about 1.000000 seconds 00:22:40.371 00:22:40.371 Latency(us) 00:22:40.371 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:40.371 =================================================================================================================== 00:22:40.371 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:40.371 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:40.371 03:23:14 -- 
common/autotest_common.sh@1604 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:40.371 03:23:14 -- common/autotest_common.sh@1598 -- # read -r file 00:22:40.371 03:23:14 -- host/multicontroller.sh@108 -- # nvmftestfini 00:22:40.371 03:23:14 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:40.371 03:23:14 -- nvmf/common.sh@117 -- # sync 00:22:40.371 03:23:14 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:40.371 03:23:14 -- nvmf/common.sh@120 -- # set +e 00:22:40.371 03:23:14 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:40.371 03:23:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:40.371 rmmod nvme_tcp 00:22:40.630 rmmod nvme_fabrics 00:22:40.630 rmmod nvme_keyring 00:22:40.630 03:23:14 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:40.630 03:23:14 -- nvmf/common.sh@124 -- # set -e 00:22:40.630 03:23:14 -- nvmf/common.sh@125 -- # return 0 00:22:40.630 03:23:14 -- nvmf/common.sh@478 -- # '[' -n 1560160 ']' 00:22:40.630 03:23:14 -- nvmf/common.sh@479 -- # killprocess 1560160 00:22:40.630 03:23:14 -- common/autotest_common.sh@936 -- # '[' -z 1560160 ']' 00:22:40.630 03:23:14 -- common/autotest_common.sh@940 -- # kill -0 1560160 00:22:40.630 03:23:14 -- common/autotest_common.sh@941 -- # uname 00:22:40.630 03:23:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:40.630 03:23:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1560160 00:22:40.630 03:23:14 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:40.630 03:23:14 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:40.630 03:23:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1560160' 00:22:40.630 killing process with pid 1560160 00:22:40.630 03:23:14 -- common/autotest_common.sh@955 -- # kill 1560160 00:22:40.630 03:23:14 -- common/autotest_common.sh@960 -- # wait 1560160 00:22:40.890 03:23:15 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:40.890 
03:23:15 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:40.890 03:23:15 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:40.890 03:23:15 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:40.890 03:23:15 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:40.890 03:23:15 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:40.890 03:23:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:40.890 03:23:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:42.803 03:23:17 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:42.803 00:22:42.803 real 0m8.154s 00:22:42.803 user 0m13.644s 00:22:42.803 sys 0m2.379s 00:22:42.803 03:23:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:42.803 03:23:17 -- common/autotest_common.sh@10 -- # set +x 00:22:42.803 ************************************ 00:22:42.803 END TEST nvmf_multicontroller 00:22:42.803 ************************************ 00:22:43.062 03:23:17 -- nvmf/nvmf.sh@90 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:43.062 03:23:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:43.062 03:23:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:43.062 03:23:17 -- common/autotest_common.sh@10 -- # set +x 00:22:43.062 ************************************ 00:22:43.062 START TEST nvmf_aer 00:22:43.062 ************************************ 00:22:43.062 03:23:17 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:43.062 * Looking for test storage... 
00:22:43.062 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:43.062 03:23:17 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:43.062 03:23:17 -- nvmf/common.sh@7 -- # uname -s 00:22:43.062 03:23:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:43.062 03:23:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:43.063 03:23:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:43.063 03:23:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:43.063 03:23:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:43.063 03:23:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:43.063 03:23:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:43.063 03:23:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:43.063 03:23:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:43.063 03:23:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:43.063 03:23:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:43.063 03:23:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:43.063 03:23:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:43.063 03:23:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:43.063 03:23:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:43.063 03:23:17 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:43.063 03:23:17 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:43.063 03:23:17 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:43.063 03:23:17 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:43.063 03:23:17 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:43.063 03:23:17 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.063 03:23:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.063 03:23:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.063 03:23:17 -- paths/export.sh@5 -- # export PATH 00:22:43.063 03:23:17 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.063 03:23:17 -- nvmf/common.sh@47 -- # : 0 00:22:43.063 03:23:17 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:43.063 03:23:17 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:43.063 03:23:17 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:43.063 03:23:17 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:43.063 03:23:17 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:43.063 03:23:17 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:43.063 03:23:17 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:43.063 03:23:17 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:43.063 03:23:17 -- host/aer.sh@11 -- # nvmftestinit 00:22:43.063 03:23:17 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:43.063 03:23:17 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:43.063 03:23:17 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:43.063 03:23:17 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:43.063 03:23:17 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:43.063 03:23:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:43.063 03:23:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:43.063 03:23:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:43.063 03:23:17 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:22:43.063 03:23:17 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:22:43.063 03:23:17 -- 
nvmf/common.sh@285 -- # xtrace_disable 00:22:43.063 03:23:17 -- common/autotest_common.sh@10 -- # set +x 00:22:44.970 03:23:19 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:44.970 03:23:19 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:44.970 03:23:19 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:44.970 03:23:19 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:44.970 03:23:19 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:44.970 03:23:19 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:44.970 03:23:19 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:44.970 03:23:19 -- nvmf/common.sh@295 -- # net_devs=() 00:22:44.970 03:23:19 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:44.970 03:23:19 -- nvmf/common.sh@296 -- # e810=() 00:22:44.970 03:23:19 -- nvmf/common.sh@296 -- # local -ga e810 00:22:44.970 03:23:19 -- nvmf/common.sh@297 -- # x722=() 00:22:44.970 03:23:19 -- nvmf/common.sh@297 -- # local -ga x722 00:22:44.970 03:23:19 -- nvmf/common.sh@298 -- # mlx=() 00:22:44.970 03:23:19 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:44.970 03:23:19 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:44.970 03:23:19 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:44.970 03:23:19 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:44.970 03:23:19 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:44.970 03:23:19 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:44.970 03:23:19 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:44.970 03:23:19 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:44.970 03:23:19 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:44.970 03:23:19 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:44.970 03:23:19 -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:44.970 03:23:19 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:44.970 03:23:19 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:44.970 03:23:19 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:44.970 03:23:19 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:44.970 03:23:19 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:44.970 03:23:19 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:44.970 03:23:19 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:44.970 03:23:19 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:44.970 03:23:19 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:44.970 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:44.970 03:23:19 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:44.970 03:23:19 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:44.970 03:23:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:44.970 03:23:19 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:44.970 03:23:19 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:44.970 03:23:19 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:44.970 03:23:19 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:44.970 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:44.970 03:23:19 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:44.970 03:23:19 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:44.970 03:23:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:44.970 03:23:19 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:44.970 03:23:19 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:44.970 03:23:19 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:44.970 03:23:19 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:44.970 03:23:19 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:44.970 03:23:19 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:22:44.970 03:23:19 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:44.970 03:23:19 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:44.970 03:23:19 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:44.970 03:23:19 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:44.970 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:44.970 03:23:19 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:44.977 03:23:19 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:44.977 03:23:19 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:44.977 03:23:19 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:44.977 03:23:19 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:44.977 03:23:19 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:44.977 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:44.977 03:23:19 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:44.977 03:23:19 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:22:44.977 03:23:19 -- nvmf/common.sh@403 -- # is_hw=yes 00:22:44.977 03:23:19 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:22:44.977 03:23:19 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:22:44.977 03:23:19 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:22:44.977 03:23:19 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:44.977 03:23:19 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:44.977 03:23:19 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:44.977 03:23:19 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:44.977 03:23:19 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:44.977 03:23:19 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:44.977 03:23:19 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:44.977 03:23:19 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:22:44.977 03:23:19 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:44.977 03:23:19 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:44.977 03:23:19 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:44.977 03:23:19 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:44.977 03:23:19 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:44.977 03:23:19 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:44.977 03:23:19 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:44.977 03:23:19 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:44.977 03:23:19 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:44.977 03:23:19 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:44.977 03:23:19 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:44.977 03:23:19 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:44.977 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:44.977 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:22:44.977 00:22:44.977 --- 10.0.0.2 ping statistics --- 00:22:44.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:44.977 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:22:44.977 03:23:19 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:44.977 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:44.977 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:22:44.977 00:22:44.977 --- 10.0.0.1 ping statistics --- 00:22:44.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:44.977 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:22:44.977 03:23:19 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:44.977 03:23:19 -- nvmf/common.sh@411 -- # return 0 00:22:44.977 03:23:19 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:44.977 03:23:19 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:44.977 03:23:19 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:44.977 03:23:19 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:44.977 03:23:19 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:44.977 03:23:19 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:44.977 03:23:19 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:44.977 03:23:19 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:22:44.977 03:23:19 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:44.977 03:23:19 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:44.977 03:23:19 -- common/autotest_common.sh@10 -- # set +x 00:22:44.977 03:23:19 -- nvmf/common.sh@470 -- # nvmfpid=1562527 00:22:44.977 03:23:19 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:44.977 03:23:19 -- nvmf/common.sh@471 -- # waitforlisten 1562527 00:22:44.977 03:23:19 -- common/autotest_common.sh@817 -- # '[' -z 1562527 ']' 00:22:44.977 03:23:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:44.977 03:23:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:44.977 03:23:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:44.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:44.977 03:23:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:44.977 03:23:19 -- common/autotest_common.sh@10 -- # set +x 00:22:45.238 [2024-04-25 03:23:19.505437] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:22:45.238 [2024-04-25 03:23:19.505510] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:45.238 EAL: No free 2048 kB hugepages reported on node 1 00:22:45.238 [2024-04-25 03:23:19.577194] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:45.238 [2024-04-25 03:23:19.698140] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:45.238 [2024-04-25 03:23:19.698209] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:45.238 [2024-04-25 03:23:19.698226] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:45.238 [2024-04-25 03:23:19.698239] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:45.238 [2024-04-25 03:23:19.698251] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:45.238 [2024-04-25 03:23:19.698318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:45.238 [2024-04-25 03:23:19.698391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:45.238 [2024-04-25 03:23:19.701650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:45.238 [2024-04-25 03:23:19.701662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:45.497 03:23:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:45.497 03:23:19 -- common/autotest_common.sh@850 -- # return 0 00:22:45.497 03:23:19 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:45.497 03:23:19 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:45.497 03:23:19 -- common/autotest_common.sh@10 -- # set +x 00:22:45.497 03:23:19 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:45.497 03:23:19 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:45.497 03:23:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:45.497 03:23:19 -- common/autotest_common.sh@10 -- # set +x 00:22:45.497 [2024-04-25 03:23:19.876471] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:45.497 03:23:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:45.497 03:23:19 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:22:45.497 03:23:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:45.497 03:23:19 -- common/autotest_common.sh@10 -- # set +x 00:22:45.497 Malloc0 00:22:45.497 03:23:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:45.497 03:23:19 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:22:45.497 03:23:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:45.497 03:23:19 -- common/autotest_common.sh@10 -- # set +x 00:22:45.497 03:23:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 
]] 00:22:45.497 03:23:19 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:45.497 03:23:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:45.497 03:23:19 -- common/autotest_common.sh@10 -- # set +x 00:22:45.497 03:23:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:45.497 03:23:19 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:45.497 03:23:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:45.497 03:23:19 -- common/autotest_common.sh@10 -- # set +x 00:22:45.497 [2024-04-25 03:23:19.930040] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:45.497 03:23:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:45.497 03:23:19 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:22:45.497 03:23:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:45.497 03:23:19 -- common/autotest_common.sh@10 -- # set +x 00:22:45.497 [2024-04-25 03:23:19.937758] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:22:45.497 [ 00:22:45.497 { 00:22:45.497 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:45.497 "subtype": "Discovery", 00:22:45.497 "listen_addresses": [], 00:22:45.497 "allow_any_host": true, 00:22:45.497 "hosts": [] 00:22:45.497 }, 00:22:45.497 { 00:22:45.497 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:45.497 "subtype": "NVMe", 00:22:45.497 "listen_addresses": [ 00:22:45.497 { 00:22:45.497 "transport": "TCP", 00:22:45.497 "trtype": "TCP", 00:22:45.497 "adrfam": "IPv4", 00:22:45.497 "traddr": "10.0.0.2", 00:22:45.497 "trsvcid": "4420" 00:22:45.498 } 00:22:45.498 ], 00:22:45.498 "allow_any_host": true, 00:22:45.498 "hosts": [], 00:22:45.498 "serial_number": "SPDK00000000000001", 00:22:45.498 "model_number": "SPDK bdev Controller", 
00:22:45.498 "max_namespaces": 2, 00:22:45.498 "min_cntlid": 1, 00:22:45.498 "max_cntlid": 65519, 00:22:45.498 "namespaces": [ 00:22:45.498 { 00:22:45.498 "nsid": 1, 00:22:45.498 "bdev_name": "Malloc0", 00:22:45.498 "name": "Malloc0", 00:22:45.498 "nguid": "D8263AF56E204D8F9DE53FFA513E5D55", 00:22:45.498 "uuid": "d8263af5-6e20-4d8f-9de5-3ffa513e5d55" 00:22:45.498 } 00:22:45.498 ] 00:22:45.498 } 00:22:45.498 ] 00:22:45.498 03:23:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:45.498 03:23:19 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:22:45.498 03:23:19 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:22:45.498 03:23:19 -- host/aer.sh@33 -- # aerpid=1562559 00:22:45.498 03:23:19 -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:22:45.498 03:23:19 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:22:45.498 03:23:19 -- common/autotest_common.sh@1251 -- # local i=0 00:22:45.498 03:23:19 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:45.498 03:23:19 -- common/autotest_common.sh@1253 -- # '[' 0 -lt 200 ']' 00:22:45.498 03:23:19 -- common/autotest_common.sh@1254 -- # i=1 00:22:45.498 03:23:19 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:22:45.756 EAL: No free 2048 kB hugepages reported on node 1 00:22:45.756 03:23:20 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:45.756 03:23:20 -- common/autotest_common.sh@1253 -- # '[' 1 -lt 200 ']' 00:22:45.756 03:23:20 -- common/autotest_common.sh@1254 -- # i=2 00:22:45.756 03:23:20 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:22:45.756 03:23:20 -- common/autotest_common.sh@1252 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:22:45.756 03:23:20 -- common/autotest_common.sh@1253 -- # '[' 2 -lt 200 ']' 00:22:45.756 03:23:20 -- common/autotest_common.sh@1254 -- # i=3 00:22:45.756 03:23:20 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:22:45.756 03:23:20 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:45.756 03:23:20 -- common/autotest_common.sh@1258 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:46.014 03:23:20 -- common/autotest_common.sh@1262 -- # return 0 00:22:46.014 03:23:20 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:22:46.014 03:23:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:46.014 03:23:20 -- common/autotest_common.sh@10 -- # set +x 00:22:46.014 Malloc1 00:22:46.014 03:23:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:46.014 03:23:20 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:22:46.014 03:23:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:46.014 03:23:20 -- common/autotest_common.sh@10 -- # set +x 00:22:46.014 03:23:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:46.014 03:23:20 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:22:46.014 03:23:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:46.014 03:23:20 -- common/autotest_common.sh@10 -- # set +x 00:22:46.014 [ 00:22:46.014 { 00:22:46.014 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:46.014 "subtype": "Discovery", 00:22:46.014 "listen_addresses": [], 00:22:46.014 "allow_any_host": true, 00:22:46.014 "hosts": [] 00:22:46.014 }, 00:22:46.014 { 00:22:46.014 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:46.014 "subtype": "NVMe", 00:22:46.014 "listen_addresses": [ 00:22:46.014 { 00:22:46.014 "transport": "TCP", 00:22:46.014 "trtype": "TCP", 00:22:46.014 "adrfam": "IPv4", 00:22:46.014 "traddr": "10.0.0.2", 00:22:46.014 "trsvcid": "4420" 00:22:46.014 } 00:22:46.014 ], 00:22:46.014 "allow_any_host": true, 
00:22:46.014 "hosts": [], 00:22:46.014 "serial_number": "SPDK00000000000001", 00:22:46.014 "model_number": "SPDK bdev Controller", 00:22:46.014 "max_namespaces": 2, 00:22:46.014 "min_cntlid": 1, 00:22:46.014 "max_cntlid": 65519, 00:22:46.014 "namespaces": [ 00:22:46.014 { 00:22:46.014 "nsid": 1, 00:22:46.014 "bdev_name": "Malloc0", 00:22:46.014 "name": "Malloc0", 00:22:46.014 "nguid": "D8263AF56E204D8F9DE53FFA513E5D55", 00:22:46.014 "uuid": "d8263af5-6e20-4d8f-9de5-3ffa513e5d55" 00:22:46.014 }, 00:22:46.014 { 00:22:46.014 "nsid": 2, 00:22:46.014 "bdev_name": "Malloc1", 00:22:46.014 "name": "Malloc1", 00:22:46.014 "nguid": "D53EAF71AD1248F0B0F347FFAF5AE594", 00:22:46.014 "uuid": "d53eaf71-ad12-48f0-b0f3-47ffaf5ae594" 00:22:46.014 } 00:22:46.014 ] 00:22:46.014 } 00:22:46.014 ] 00:22:46.014 03:23:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:46.014 03:23:20 -- host/aer.sh@43 -- # wait 1562559 00:22:46.014 Asynchronous Event Request test 00:22:46.014 Attaching to 10.0.0.2 00:22:46.014 Attached to 10.0.0.2 00:22:46.014 Registering asynchronous event callbacks... 00:22:46.014 Starting namespace attribute notice tests for all controllers... 00:22:46.014 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:22:46.014 aer_cb - Changed Namespace 00:22:46.014 Cleaning up... 
00:22:46.014 03:23:20 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:22:46.014 03:23:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:46.014 03:23:20 -- common/autotest_common.sh@10 -- # set +x 00:22:46.014 03:23:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:46.014 03:23:20 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:22:46.014 03:23:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:46.014 03:23:20 -- common/autotest_common.sh@10 -- # set +x 00:22:46.014 03:23:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:46.014 03:23:20 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:46.014 03:23:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:46.014 03:23:20 -- common/autotest_common.sh@10 -- # set +x 00:22:46.014 03:23:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:46.014 03:23:20 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:22:46.014 03:23:20 -- host/aer.sh@51 -- # nvmftestfini 00:22:46.014 03:23:20 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:46.014 03:23:20 -- nvmf/common.sh@117 -- # sync 00:22:46.014 03:23:20 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:46.014 03:23:20 -- nvmf/common.sh@120 -- # set +e 00:22:46.014 03:23:20 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:46.014 03:23:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:46.014 rmmod nvme_tcp 00:22:46.014 rmmod nvme_fabrics 00:22:46.014 rmmod nvme_keyring 00:22:46.014 03:23:20 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:46.014 03:23:20 -- nvmf/common.sh@124 -- # set -e 00:22:46.014 03:23:20 -- nvmf/common.sh@125 -- # return 0 00:22:46.014 03:23:20 -- nvmf/common.sh@478 -- # '[' -n 1562527 ']' 00:22:46.014 03:23:20 -- nvmf/common.sh@479 -- # killprocess 1562527 00:22:46.014 03:23:20 -- common/autotest_common.sh@936 -- # '[' -z 1562527 ']' 00:22:46.014 03:23:20 -- common/autotest_common.sh@940 -- # kill -0 1562527 00:22:46.014 
03:23:20 -- common/autotest_common.sh@941 -- # uname 00:22:46.014 03:23:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:46.014 03:23:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1562527 00:22:46.014 03:23:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:46.014 03:23:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:46.014 03:23:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1562527' 00:22:46.014 killing process with pid 1562527 00:22:46.014 03:23:20 -- common/autotest_common.sh@955 -- # kill 1562527 00:22:46.014 [2024-04-25 03:23:20.473180] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:22:46.014 03:23:20 -- common/autotest_common.sh@960 -- # wait 1562527 00:22:46.273 03:23:20 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:46.273 03:23:20 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:46.273 03:23:20 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:46.273 03:23:20 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:46.273 03:23:20 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:46.273 03:23:20 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:46.273 03:23:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:46.273 03:23:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:48.812 03:23:22 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:48.812 00:22:48.812 real 0m5.370s 00:22:48.812 user 0m4.599s 00:22:48.812 sys 0m1.866s 00:22:48.812 03:23:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:48.812 03:23:22 -- common/autotest_common.sh@10 -- # set +x 00:22:48.812 ************************************ 00:22:48.812 END TEST nvmf_aer 00:22:48.812 ************************************ 00:22:48.812 03:23:22 -- 
nvmf/nvmf.sh@91 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:48.812 03:23:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:48.812 03:23:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:48.812 03:23:22 -- common/autotest_common.sh@10 -- # set +x 00:22:48.812 ************************************ 00:22:48.812 START TEST nvmf_async_init 00:22:48.812 ************************************ 00:22:48.812 03:23:22 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:48.812 * Looking for test storage... 00:22:48.812 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:48.812 03:23:22 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:48.812 03:23:22 -- nvmf/common.sh@7 -- # uname -s 00:22:48.812 03:23:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:48.812 03:23:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:48.812 03:23:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:48.812 03:23:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:48.812 03:23:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:48.812 03:23:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:48.812 03:23:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:48.812 03:23:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:48.812 03:23:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:48.812 03:23:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:48.812 03:23:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:48.812 03:23:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:48.812 03:23:22 -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:48.812 03:23:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:48.812 03:23:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:48.812 03:23:22 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:48.812 03:23:22 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:48.812 03:23:22 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:48.812 03:23:22 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:48.812 03:23:22 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:48.812 03:23:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.812 03:23:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.812 03:23:22 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.812 03:23:22 -- paths/export.sh@5 -- # export PATH 00:22:48.812 03:23:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.812 03:23:22 -- nvmf/common.sh@47 -- # : 0 00:22:48.812 03:23:22 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:48.812 03:23:22 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:48.812 03:23:22 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:48.812 03:23:22 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:48.812 03:23:22 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:48.812 03:23:22 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:48.812 03:23:22 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:48.812 03:23:22 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:48.812 03:23:22 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:22:48.812 03:23:22 -- host/async_init.sh@14 -- # null_block_size=512 00:22:48.812 03:23:22 -- host/async_init.sh@15 -- 
# null_bdev=null0 00:22:48.812 03:23:22 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:22:48.812 03:23:22 -- host/async_init.sh@20 -- # uuidgen 00:22:48.812 03:23:22 -- host/async_init.sh@20 -- # tr -d - 00:22:48.812 03:23:22 -- host/async_init.sh@20 -- # nguid=111892953a77449a86cb3534ab803339 00:22:48.812 03:23:22 -- host/async_init.sh@22 -- # nvmftestinit 00:22:48.812 03:23:22 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:48.812 03:23:22 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:48.812 03:23:22 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:48.812 03:23:22 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:48.812 03:23:22 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:48.812 03:23:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:48.812 03:23:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:48.812 03:23:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:48.812 03:23:22 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:22:48.812 03:23:22 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:22:48.812 03:23:22 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:48.812 03:23:22 -- common/autotest_common.sh@10 -- # set +x 00:22:50.748 03:23:24 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:50.748 03:23:24 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:50.748 03:23:24 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:50.748 03:23:24 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:50.748 03:23:24 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:50.748 03:23:24 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:50.748 03:23:24 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:50.748 03:23:24 -- nvmf/common.sh@295 -- # net_devs=() 00:22:50.748 03:23:24 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:50.748 03:23:24 -- nvmf/common.sh@296 -- # e810=() 00:22:50.748 03:23:24 -- nvmf/common.sh@296 -- # local -ga e810 00:22:50.748 
03:23:24 -- nvmf/common.sh@297 -- # x722=() 00:22:50.748 03:23:24 -- nvmf/common.sh@297 -- # local -ga x722 00:22:50.748 03:23:24 -- nvmf/common.sh@298 -- # mlx=() 00:22:50.748 03:23:24 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:50.748 03:23:24 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:50.748 03:23:24 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:50.748 03:23:24 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:50.748 03:23:24 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:50.748 03:23:24 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:50.748 03:23:24 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:50.748 03:23:24 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:50.748 03:23:24 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:50.748 03:23:24 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:50.748 03:23:24 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:50.748 03:23:24 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:50.748 03:23:24 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:50.748 03:23:24 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:50.748 03:23:24 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:50.748 03:23:24 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:50.748 03:23:24 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:50.748 03:23:24 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:50.748 03:23:24 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:50.748 03:23:24 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:50.748 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:50.748 03:23:24 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:50.748 03:23:24 -- nvmf/common.sh@346 -- # [[ 
ice == unbound ]] 00:22:50.748 03:23:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:50.748 03:23:24 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:50.748 03:23:24 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:50.748 03:23:24 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:50.748 03:23:24 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:50.748 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:50.748 03:23:24 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:50.748 03:23:24 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:50.748 03:23:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:50.748 03:23:24 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:50.748 03:23:24 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:50.748 03:23:24 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:50.748 03:23:24 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:50.748 03:23:24 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:50.748 03:23:24 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:50.748 03:23:24 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:50.748 03:23:24 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:50.748 03:23:24 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:50.748 03:23:24 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:50.748 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:50.748 03:23:24 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:50.748 03:23:24 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:50.748 03:23:24 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:50.748 03:23:24 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:50.748 03:23:24 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:50.748 03:23:24 -- nvmf/common.sh@389 -- # echo 'Found net devices under 
0000:0a:00.1: cvl_0_1' 00:22:50.748 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:50.748 03:23:24 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:50.748 03:23:24 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:22:50.748 03:23:24 -- nvmf/common.sh@403 -- # is_hw=yes 00:22:50.748 03:23:24 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:22:50.748 03:23:24 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:22:50.748 03:23:24 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:22:50.748 03:23:24 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:50.748 03:23:24 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:50.748 03:23:24 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:50.748 03:23:24 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:50.748 03:23:24 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:50.748 03:23:24 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:50.748 03:23:24 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:50.749 03:23:24 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:50.749 03:23:24 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:50.749 03:23:24 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:50.749 03:23:24 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:50.749 03:23:24 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:50.749 03:23:24 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:50.749 03:23:24 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:50.749 03:23:24 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:50.749 03:23:24 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:50.749 03:23:24 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:50.749 03:23:25 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 
00:22:50.749 03:23:25 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:50.749 03:23:25 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:50.749 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:50.749 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:22:50.749 00:22:50.749 --- 10.0.0.2 ping statistics --- 00:22:50.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:50.749 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:22:50.749 03:23:25 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:50.749 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:50.749 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:22:50.749 00:22:50.749 --- 10.0.0.1 ping statistics --- 00:22:50.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:50.749 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:22:50.749 03:23:25 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:50.749 03:23:25 -- nvmf/common.sh@411 -- # return 0 00:22:50.749 03:23:25 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:50.749 03:23:25 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:50.749 03:23:25 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:50.749 03:23:25 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:50.749 03:23:25 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:50.749 03:23:25 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:50.749 03:23:25 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:50.749 03:23:25 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:22:50.749 03:23:25 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:50.749 03:23:25 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:50.749 03:23:25 -- common/autotest_common.sh@10 -- # set +x 00:22:50.749 03:23:25 -- nvmf/common.sh@470 -- # nvmfpid=1564613 00:22:50.749 03:23:25 -- nvmf/common.sh@471 -- # 
waitforlisten 1564613 00:22:50.749 03:23:25 -- common/autotest_common.sh@817 -- # '[' -z 1564613 ']' 00:22:50.749 03:23:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:50.749 03:23:25 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:50.749 03:23:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:50.749 03:23:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:50.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:50.749 03:23:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:50.749 03:23:25 -- common/autotest_common.sh@10 -- # set +x 00:22:50.749 [2024-04-25 03:23:25.103140] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:22:50.749 [2024-04-25 03:23:25.103224] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:50.749 EAL: No free 2048 kB hugepages reported on node 1 00:22:50.749 [2024-04-25 03:23:25.167504] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:51.007 [2024-04-25 03:23:25.284795] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:51.007 [2024-04-25 03:23:25.284845] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:51.007 [2024-04-25 03:23:25.284865] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:51.007 [2024-04-25 03:23:25.284876] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:22:51.007 [2024-04-25 03:23:25.284886] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:51.007 [2024-04-25 03:23:25.284927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:51.574 03:23:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:51.574 03:23:26 -- common/autotest_common.sh@850 -- # return 0 00:22:51.574 03:23:26 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:51.574 03:23:26 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:51.574 03:23:26 -- common/autotest_common.sh@10 -- # set +x 00:22:51.574 03:23:26 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:51.574 03:23:26 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:22:51.574 03:23:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:51.574 03:23:26 -- common/autotest_common.sh@10 -- # set +x 00:22:51.574 [2024-04-25 03:23:26.067270] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:51.574 03:23:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:51.574 03:23:26 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:22:51.574 03:23:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:51.574 03:23:26 -- common/autotest_common.sh@10 -- # set +x 00:22:51.835 null0 00:22:51.835 03:23:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:51.835 03:23:26 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:22:51.835 03:23:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:51.835 03:23:26 -- common/autotest_common.sh@10 -- # set +x 00:22:51.835 03:23:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:51.835 03:23:26 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:22:51.835 03:23:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:51.835 03:23:26 -- 
common/autotest_common.sh@10 -- # set +x 00:22:51.835 03:23:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:51.835 03:23:26 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 111892953a77449a86cb3534ab803339 00:22:51.835 03:23:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:51.835 03:23:26 -- common/autotest_common.sh@10 -- # set +x 00:22:51.835 03:23:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:51.835 03:23:26 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:51.835 03:23:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:51.835 03:23:26 -- common/autotest_common.sh@10 -- # set +x 00:22:51.835 [2024-04-25 03:23:26.107504] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:51.835 03:23:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:51.835 03:23:26 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:22:51.835 03:23:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:51.835 03:23:26 -- common/autotest_common.sh@10 -- # set +x 00:22:52.096 nvme0n1 00:22:52.096 03:23:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:52.096 03:23:26 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:52.096 03:23:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:52.096 03:23:26 -- common/autotest_common.sh@10 -- # set +x 00:22:52.096 [ 00:22:52.096 { 00:22:52.096 "name": "nvme0n1", 00:22:52.096 "aliases": [ 00:22:52.096 "11189295-3a77-449a-86cb-3534ab803339" 00:22:52.096 ], 00:22:52.096 "product_name": "NVMe disk", 00:22:52.096 "block_size": 512, 00:22:52.096 "num_blocks": 2097152, 00:22:52.096 "uuid": "11189295-3a77-449a-86cb-3534ab803339", 00:22:52.096 "assigned_rate_limits": { 00:22:52.096 "rw_ios_per_sec": 0, 
00:22:52.096 "rw_mbytes_per_sec": 0, 00:22:52.096 "r_mbytes_per_sec": 0, 00:22:52.096 "w_mbytes_per_sec": 0 00:22:52.096 }, 00:22:52.096 "claimed": false, 00:22:52.096 "zoned": false, 00:22:52.096 "supported_io_types": { 00:22:52.096 "read": true, 00:22:52.096 "write": true, 00:22:52.096 "unmap": false, 00:22:52.096 "write_zeroes": true, 00:22:52.096 "flush": true, 00:22:52.096 "reset": true, 00:22:52.096 "compare": true, 00:22:52.096 "compare_and_write": true, 00:22:52.096 "abort": true, 00:22:52.096 "nvme_admin": true, 00:22:52.096 "nvme_io": true 00:22:52.096 }, 00:22:52.096 "memory_domains": [ 00:22:52.096 { 00:22:52.096 "dma_device_id": "system", 00:22:52.096 "dma_device_type": 1 00:22:52.096 } 00:22:52.096 ], 00:22:52.096 "driver_specific": { 00:22:52.096 "nvme": [ 00:22:52.096 { 00:22:52.096 "trid": { 00:22:52.096 "trtype": "TCP", 00:22:52.096 "adrfam": "IPv4", 00:22:52.096 "traddr": "10.0.0.2", 00:22:52.096 "trsvcid": "4420", 00:22:52.096 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:52.096 }, 00:22:52.096 "ctrlr_data": { 00:22:52.096 "cntlid": 1, 00:22:52.096 "vendor_id": "0x8086", 00:22:52.096 "model_number": "SPDK bdev Controller", 00:22:52.096 "serial_number": "00000000000000000000", 00:22:52.096 "firmware_revision": "24.05", 00:22:52.096 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:52.096 "oacs": { 00:22:52.096 "security": 0, 00:22:52.096 "format": 0, 00:22:52.096 "firmware": 0, 00:22:52.096 "ns_manage": 0 00:22:52.096 }, 00:22:52.096 "multi_ctrlr": true, 00:22:52.096 "ana_reporting": false 00:22:52.096 }, 00:22:52.096 "vs": { 00:22:52.096 "nvme_version": "1.3" 00:22:52.096 }, 00:22:52.096 "ns_data": { 00:22:52.096 "id": 1, 00:22:52.096 "can_share": true 00:22:52.096 } 00:22:52.096 } 00:22:52.096 ], 00:22:52.096 "mp_policy": "active_passive" 00:22:52.096 } 00:22:52.096 } 00:22:52.096 ] 00:22:52.096 03:23:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:52.096 03:23:26 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 
00:22:52.096 03:23:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:52.096 03:23:26 -- common/autotest_common.sh@10 -- # set +x 00:22:52.096 [2024-04-25 03:23:26.360255] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:52.096 [2024-04-25 03:23:26.360349] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x72e480 (9): Bad file descriptor 00:22:52.096 [2024-04-25 03:23:26.502781] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:52.096 03:23:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:52.096 03:23:26 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:52.096 03:23:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:52.096 03:23:26 -- common/autotest_common.sh@10 -- # set +x 00:22:52.096 [ 00:22:52.096 { 00:22:52.096 "name": "nvme0n1", 00:22:52.096 "aliases": [ 00:22:52.096 "11189295-3a77-449a-86cb-3534ab803339" 00:22:52.096 ], 00:22:52.096 "product_name": "NVMe disk", 00:22:52.096 "block_size": 512, 00:22:52.096 "num_blocks": 2097152, 00:22:52.096 "uuid": "11189295-3a77-449a-86cb-3534ab803339", 00:22:52.096 "assigned_rate_limits": { 00:22:52.096 "rw_ios_per_sec": 0, 00:22:52.096 "rw_mbytes_per_sec": 0, 00:22:52.096 "r_mbytes_per_sec": 0, 00:22:52.096 "w_mbytes_per_sec": 0 00:22:52.096 }, 00:22:52.096 "claimed": false, 00:22:52.096 "zoned": false, 00:22:52.096 "supported_io_types": { 00:22:52.096 "read": true, 00:22:52.096 "write": true, 00:22:52.096 "unmap": false, 00:22:52.096 "write_zeroes": true, 00:22:52.096 "flush": true, 00:22:52.096 "reset": true, 00:22:52.096 "compare": true, 00:22:52.096 "compare_and_write": true, 00:22:52.096 "abort": true, 00:22:52.096 "nvme_admin": true, 00:22:52.096 "nvme_io": true 00:22:52.096 }, 00:22:52.096 "memory_domains": [ 00:22:52.096 { 00:22:52.096 "dma_device_id": "system", 00:22:52.096 "dma_device_type": 1 00:22:52.096 } 
00:22:52.096 ], 00:22:52.096 "driver_specific": { 00:22:52.096 "nvme": [ 00:22:52.096 { 00:22:52.096 "trid": { 00:22:52.096 "trtype": "TCP", 00:22:52.096 "adrfam": "IPv4", 00:22:52.096 "traddr": "10.0.0.2", 00:22:52.096 "trsvcid": "4420", 00:22:52.096 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:52.096 }, 00:22:52.096 "ctrlr_data": { 00:22:52.096 "cntlid": 2, 00:22:52.096 "vendor_id": "0x8086", 00:22:52.096 "model_number": "SPDK bdev Controller", 00:22:52.096 "serial_number": "00000000000000000000", 00:22:52.096 "firmware_revision": "24.05", 00:22:52.096 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:52.096 "oacs": { 00:22:52.096 "security": 0, 00:22:52.096 "format": 0, 00:22:52.096 "firmware": 0, 00:22:52.096 "ns_manage": 0 00:22:52.096 }, 00:22:52.096 "multi_ctrlr": true, 00:22:52.096 "ana_reporting": false 00:22:52.096 }, 00:22:52.096 "vs": { 00:22:52.096 "nvme_version": "1.3" 00:22:52.096 }, 00:22:52.096 "ns_data": { 00:22:52.096 "id": 1, 00:22:52.096 "can_share": true 00:22:52.096 } 00:22:52.096 } 00:22:52.096 ], 00:22:52.096 "mp_policy": "active_passive" 00:22:52.096 } 00:22:52.096 } 00:22:52.096 ] 00:22:52.096 03:23:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:52.096 03:23:26 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:52.096 03:23:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:52.096 03:23:26 -- common/autotest_common.sh@10 -- # set +x 00:22:52.096 03:23:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:52.096 03:23:26 -- host/async_init.sh@53 -- # mktemp 00:22:52.096 03:23:26 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.GMczE303dm 00:22:52.096 03:23:26 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:52.096 03:23:26 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.GMczE303dm 00:22:52.096 03:23:26 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:22:52.096 03:23:26 
-- common/autotest_common.sh@549 -- # xtrace_disable 00:22:52.096 03:23:26 -- common/autotest_common.sh@10 -- # set +x 00:22:52.096 03:23:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:52.096 03:23:26 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:22:52.096 03:23:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:52.096 03:23:26 -- common/autotest_common.sh@10 -- # set +x 00:22:52.096 [2024-04-25 03:23:26.552885] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:52.096 [2024-04-25 03:23:26.553021] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:52.096 03:23:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:52.096 03:23:26 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.GMczE303dm 00:22:52.097 03:23:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:52.097 03:23:26 -- common/autotest_common.sh@10 -- # set +x 00:22:52.097 [2024-04-25 03:23:26.560903] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:52.097 03:23:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:52.097 03:23:26 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.GMczE303dm 00:22:52.097 03:23:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:52.097 03:23:26 -- common/autotest_common.sh@10 -- # set +x 00:22:52.097 [2024-04-25 03:23:26.568922] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:52.097 [2024-04-25 03:23:26.568984] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated 
feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:52.357 nvme0n1 00:22:52.357 03:23:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:52.357 03:23:26 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:52.357 03:23:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:52.357 03:23:26 -- common/autotest_common.sh@10 -- # set +x 00:22:52.357 [ 00:22:52.357 { 00:22:52.357 "name": "nvme0n1", 00:22:52.357 "aliases": [ 00:22:52.357 "11189295-3a77-449a-86cb-3534ab803339" 00:22:52.357 ], 00:22:52.357 "product_name": "NVMe disk", 00:22:52.357 "block_size": 512, 00:22:52.357 "num_blocks": 2097152, 00:22:52.357 "uuid": "11189295-3a77-449a-86cb-3534ab803339", 00:22:52.357 "assigned_rate_limits": { 00:22:52.357 "rw_ios_per_sec": 0, 00:22:52.357 "rw_mbytes_per_sec": 0, 00:22:52.357 "r_mbytes_per_sec": 0, 00:22:52.357 "w_mbytes_per_sec": 0 00:22:52.357 }, 00:22:52.357 "claimed": false, 00:22:52.357 "zoned": false, 00:22:52.357 "supported_io_types": { 00:22:52.357 "read": true, 00:22:52.357 "write": true, 00:22:52.357 "unmap": false, 00:22:52.357 "write_zeroes": true, 00:22:52.357 "flush": true, 00:22:52.357 "reset": true, 00:22:52.357 "compare": true, 00:22:52.357 "compare_and_write": true, 00:22:52.357 "abort": true, 00:22:52.357 "nvme_admin": true, 00:22:52.357 "nvme_io": true 00:22:52.357 }, 00:22:52.357 "memory_domains": [ 00:22:52.357 { 00:22:52.357 "dma_device_id": "system", 00:22:52.357 "dma_device_type": 1 00:22:52.357 } 00:22:52.357 ], 00:22:52.357 "driver_specific": { 00:22:52.357 "nvme": [ 00:22:52.357 { 00:22:52.357 "trid": { 00:22:52.357 "trtype": "TCP", 00:22:52.357 "adrfam": "IPv4", 00:22:52.357 "traddr": "10.0.0.2", 00:22:52.357 "trsvcid": "4421", 00:22:52.357 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:52.357 }, 00:22:52.358 "ctrlr_data": { 00:22:52.358 "cntlid": 3, 00:22:52.358 "vendor_id": "0x8086", 00:22:52.358 "model_number": "SPDK bdev Controller", 00:22:52.358 "serial_number": "00000000000000000000", 
00:22:52.358 "firmware_revision": "24.05", 00:22:52.358 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:52.358 "oacs": { 00:22:52.358 "security": 0, 00:22:52.358 "format": 0, 00:22:52.358 "firmware": 0, 00:22:52.358 "ns_manage": 0 00:22:52.358 }, 00:22:52.358 "multi_ctrlr": true, 00:22:52.358 "ana_reporting": false 00:22:52.358 }, 00:22:52.358 "vs": { 00:22:52.358 "nvme_version": "1.3" 00:22:52.358 }, 00:22:52.358 "ns_data": { 00:22:52.358 "id": 1, 00:22:52.358 "can_share": true 00:22:52.358 } 00:22:52.358 } 00:22:52.358 ], 00:22:52.358 "mp_policy": "active_passive" 00:22:52.358 } 00:22:52.358 } 00:22:52.358 ] 00:22:52.358 03:23:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:52.358 03:23:26 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:52.358 03:23:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:52.358 03:23:26 -- common/autotest_common.sh@10 -- # set +x 00:22:52.358 03:23:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:52.358 03:23:26 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.GMczE303dm 00:22:52.358 03:23:26 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:22:52.358 03:23:26 -- host/async_init.sh@78 -- # nvmftestfini 00:22:52.358 03:23:26 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:52.358 03:23:26 -- nvmf/common.sh@117 -- # sync 00:22:52.358 03:23:26 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:52.358 03:23:26 -- nvmf/common.sh@120 -- # set +e 00:22:52.358 03:23:26 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:52.358 03:23:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:52.358 rmmod nvme_tcp 00:22:52.358 rmmod nvme_fabrics 00:22:52.358 rmmod nvme_keyring 00:22:52.358 03:23:26 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:52.358 03:23:26 -- nvmf/common.sh@124 -- # set -e 00:22:52.358 03:23:26 -- nvmf/common.sh@125 -- # return 0 00:22:52.358 03:23:26 -- nvmf/common.sh@478 -- # '[' -n 1564613 ']' 00:22:52.358 03:23:26 -- nvmf/common.sh@479 -- # 
killprocess 1564613 00:22:52.358 03:23:26 -- common/autotest_common.sh@936 -- # '[' -z 1564613 ']' 00:22:52.358 03:23:26 -- common/autotest_common.sh@940 -- # kill -0 1564613 00:22:52.358 03:23:26 -- common/autotest_common.sh@941 -- # uname 00:22:52.358 03:23:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:52.358 03:23:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1564613 00:22:52.358 03:23:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:52.358 03:23:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:52.358 03:23:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1564613' 00:22:52.358 killing process with pid 1564613 00:22:52.358 03:23:26 -- common/autotest_common.sh@955 -- # kill 1564613 00:22:52.358 [2024-04-25 03:23:26.745010] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:52.358 [2024-04-25 03:23:26.745049] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:52.358 03:23:26 -- common/autotest_common.sh@960 -- # wait 1564613 00:22:52.617 03:23:27 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:52.618 03:23:27 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:52.618 03:23:27 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:52.618 03:23:27 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:52.618 03:23:27 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:52.618 03:23:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:52.618 03:23:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:52.618 03:23:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:55.153 03:23:29 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:55.153 00:22:55.153 real 0m6.149s 00:22:55.153 user 0m2.909s 
00:22:55.153 sys 0m1.834s 00:22:55.153 03:23:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:55.153 03:23:29 -- common/autotest_common.sh@10 -- # set +x 00:22:55.153 ************************************ 00:22:55.153 END TEST nvmf_async_init 00:22:55.153 ************************************ 00:22:55.153 03:23:29 -- nvmf/nvmf.sh@92 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:55.153 03:23:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:55.153 03:23:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:55.153 03:23:29 -- common/autotest_common.sh@10 -- # set +x 00:22:55.153 ************************************ 00:22:55.153 START TEST dma 00:22:55.153 ************************************ 00:22:55.153 03:23:29 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:55.153 * Looking for test storage... 00:22:55.153 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:55.153 03:23:29 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:55.153 03:23:29 -- nvmf/common.sh@7 -- # uname -s 00:22:55.153 03:23:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:55.153 03:23:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:55.153 03:23:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:55.153 03:23:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:55.153 03:23:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:55.153 03:23:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:55.153 03:23:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:55.153 03:23:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:55.153 03:23:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:55.153 03:23:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:55.153 
03:23:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:55.153 03:23:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:55.153 03:23:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:55.153 03:23:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:55.153 03:23:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:55.153 03:23:29 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:55.153 03:23:29 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:55.153 03:23:29 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:55.153 03:23:29 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:55.153 03:23:29 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:55.154 03:23:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.154 03:23:29 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.154 03:23:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.154 03:23:29 -- paths/export.sh@5 -- # export PATH 00:22:55.154 03:23:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.154 03:23:29 -- nvmf/common.sh@47 -- # : 0 00:22:55.154 03:23:29 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:55.154 03:23:29 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:55.154 03:23:29 -- nvmf/common.sh@25 -- # 
'[' 0 -eq 1 ']' 00:22:55.154 03:23:29 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:55.154 03:23:29 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:55.154 03:23:29 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:55.154 03:23:29 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:55.154 03:23:29 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:55.154 03:23:29 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:22:55.154 03:23:29 -- host/dma.sh@13 -- # exit 0 00:22:55.154 00:22:55.154 real 0m0.057s 00:22:55.154 user 0m0.027s 00:22:55.154 sys 0m0.034s 00:22:55.154 03:23:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:55.154 03:23:29 -- common/autotest_common.sh@10 -- # set +x 00:22:55.154 ************************************ 00:22:55.154 END TEST dma 00:22:55.154 ************************************ 00:22:55.154 03:23:29 -- nvmf/nvmf.sh@95 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:55.154 03:23:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:55.154 03:23:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:55.154 03:23:29 -- common/autotest_common.sh@10 -- # set +x 00:22:55.154 ************************************ 00:22:55.154 START TEST nvmf_identify 00:22:55.154 ************************************ 00:22:55.154 03:23:29 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:55.154 * Looking for test storage... 
00:22:55.154 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:55.154 03:23:29 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:55.154 03:23:29 -- nvmf/common.sh@7 -- # uname -s 00:22:55.154 03:23:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:55.154 03:23:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:55.154 03:23:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:55.154 03:23:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:55.154 03:23:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:55.154 03:23:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:55.154 03:23:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:55.154 03:23:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:55.154 03:23:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:55.154 03:23:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:55.154 03:23:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:55.154 03:23:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:55.154 03:23:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:55.154 03:23:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:55.154 03:23:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:55.154 03:23:29 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:55.154 03:23:29 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:55.154 03:23:29 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:55.154 03:23:29 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:55.154 03:23:29 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:55.154 03:23:29 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.154 03:23:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.154 03:23:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.154 03:23:29 -- paths/export.sh@5 -- # export PATH 00:22:55.154 03:23:29 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.154 03:23:29 -- nvmf/common.sh@47 -- # : 0 00:22:55.154 03:23:29 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:55.154 03:23:29 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:55.154 03:23:29 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:55.154 03:23:29 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:55.154 03:23:29 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:55.154 03:23:29 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:55.154 03:23:29 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:55.154 03:23:29 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:55.154 03:23:29 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:55.154 03:23:29 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:55.154 03:23:29 -- host/identify.sh@14 -- # nvmftestinit 00:22:55.154 03:23:29 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:55.154 03:23:29 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:55.154 03:23:29 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:55.154 03:23:29 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:55.154 03:23:29 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:55.154 03:23:29 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:55.154 03:23:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:55.154 03:23:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:55.154 03:23:29 -- 
nvmf/common.sh@403 -- # [[ phy != virt ]] 00:22:55.154 03:23:29 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:22:55.154 03:23:29 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:55.154 03:23:29 -- common/autotest_common.sh@10 -- # set +x 00:22:57.053 03:23:31 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:57.053 03:23:31 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:57.053 03:23:31 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:57.053 03:23:31 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:57.053 03:23:31 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:57.053 03:23:31 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:57.053 03:23:31 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:57.053 03:23:31 -- nvmf/common.sh@295 -- # net_devs=() 00:22:57.053 03:23:31 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:57.053 03:23:31 -- nvmf/common.sh@296 -- # e810=() 00:22:57.053 03:23:31 -- nvmf/common.sh@296 -- # local -ga e810 00:22:57.053 03:23:31 -- nvmf/common.sh@297 -- # x722=() 00:22:57.053 03:23:31 -- nvmf/common.sh@297 -- # local -ga x722 00:22:57.053 03:23:31 -- nvmf/common.sh@298 -- # mlx=() 00:22:57.053 03:23:31 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:57.053 03:23:31 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:57.053 03:23:31 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:57.053 03:23:31 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:57.053 03:23:31 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:57.053 03:23:31 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:57.053 03:23:31 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:57.053 03:23:31 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:57.053 03:23:31 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:57.053 03:23:31 -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:57.053 03:23:31 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:57.053 03:23:31 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:57.053 03:23:31 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:57.053 03:23:31 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:57.053 03:23:31 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:57.053 03:23:31 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:57.053 03:23:31 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:57.053 03:23:31 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:57.053 03:23:31 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:57.053 03:23:31 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:57.053 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:57.053 03:23:31 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:57.053 03:23:31 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:57.053 03:23:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:57.053 03:23:31 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:57.053 03:23:31 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:57.053 03:23:31 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:57.053 03:23:31 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:57.053 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:57.053 03:23:31 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:57.053 03:23:31 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:57.053 03:23:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:57.053 03:23:31 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:57.054 03:23:31 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:57.054 03:23:31 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:57.054 03:23:31 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:57.054 03:23:31 -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:57.054 03:23:31 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:57.054 03:23:31 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:57.054 03:23:31 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:57.054 03:23:31 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:57.054 03:23:31 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:57.054 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:57.054 03:23:31 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:57.054 03:23:31 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:57.054 03:23:31 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:57.054 03:23:31 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:57.054 03:23:31 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:57.054 03:23:31 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:57.054 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:57.054 03:23:31 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:57.054 03:23:31 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:22:57.054 03:23:31 -- nvmf/common.sh@403 -- # is_hw=yes 00:22:57.054 03:23:31 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:22:57.054 03:23:31 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:22:57.054 03:23:31 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:22:57.054 03:23:31 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:57.054 03:23:31 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:57.054 03:23:31 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:57.054 03:23:31 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:57.054 03:23:31 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:57.054 03:23:31 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:57.054 03:23:31 -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:57.054 03:23:31 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:57.054 03:23:31 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:57.054 03:23:31 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:57.054 03:23:31 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:57.054 03:23:31 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:57.054 03:23:31 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:57.054 03:23:31 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:57.054 03:23:31 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:57.054 03:23:31 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:57.054 03:23:31 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:57.054 03:23:31 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:57.054 03:23:31 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:57.054 03:23:31 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:57.054 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:57.054 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.119 ms 00:22:57.054 00:22:57.054 --- 10.0.0.2 ping statistics --- 00:22:57.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:57.054 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:22:57.054 03:23:31 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:57.054 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:57.054 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:22:57.054 00:22:57.054 --- 10.0.0.1 ping statistics --- 00:22:57.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:57.054 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:22:57.054 03:23:31 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:57.054 03:23:31 -- nvmf/common.sh@411 -- # return 0 00:22:57.054 03:23:31 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:57.054 03:23:31 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:57.054 03:23:31 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:57.054 03:23:31 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:57.054 03:23:31 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:57.054 03:23:31 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:57.054 03:23:31 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:57.054 03:23:31 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:22:57.054 03:23:31 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:57.054 03:23:31 -- common/autotest_common.sh@10 -- # set +x 00:22:57.054 03:23:31 -- host/identify.sh@19 -- # nvmfpid=1566770 00:22:57.054 03:23:31 -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:57.054 03:23:31 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:57.054 03:23:31 -- host/identify.sh@23 -- # waitforlisten 1566770 00:22:57.054 03:23:31 -- common/autotest_common.sh@817 -- # '[' -z 1566770 ']' 00:22:57.054 03:23:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:57.054 03:23:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:57.054 03:23:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:57.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:57.054 03:23:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:57.054 03:23:31 -- common/autotest_common.sh@10 -- # set +x 00:22:57.054 [2024-04-25 03:23:31.483828] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:22:57.054 [2024-04-25 03:23:31.483901] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:57.054 EAL: No free 2048 kB hugepages reported on node 1 00:22:57.312 [2024-04-25 03:23:31.553667] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:57.312 [2024-04-25 03:23:31.672321] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:57.312 [2024-04-25 03:23:31.672377] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:57.312 [2024-04-25 03:23:31.672390] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:57.312 [2024-04-25 03:23:31.672400] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:57.312 [2024-04-25 03:23:31.672410] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:57.312 [2024-04-25 03:23:31.672478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:57.312 [2024-04-25 03:23:31.672560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:57.312 [2024-04-25 03:23:31.672654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:57.312 [2024-04-25 03:23:31.672650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:57.312 03:23:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:57.312 03:23:31 -- common/autotest_common.sh@850 -- # return 0 00:22:57.312 03:23:31 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:57.312 03:23:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:57.312 03:23:31 -- common/autotest_common.sh@10 -- # set +x 00:22:57.312 [2024-04-25 03:23:31.807383] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:57.598 03:23:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:57.598 03:23:31 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:22:57.598 03:23:31 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:57.598 03:23:31 -- common/autotest_common.sh@10 -- # set +x 00:22:57.598 03:23:31 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:57.598 03:23:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:57.598 03:23:31 -- common/autotest_common.sh@10 -- # set +x 00:22:57.598 Malloc0 00:22:57.598 03:23:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:57.598 03:23:31 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:57.598 03:23:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:57.598 03:23:31 -- common/autotest_common.sh@10 -- # set +x 00:22:57.598 03:23:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:57.598 03:23:31 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 
--nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:22:57.598 03:23:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:57.598 03:23:31 -- common/autotest_common.sh@10 -- # set +x 00:22:57.598 03:23:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:57.598 03:23:31 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:57.598 03:23:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:57.598 03:23:31 -- common/autotest_common.sh@10 -- # set +x 00:22:57.598 [2024-04-25 03:23:31.884417] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:57.598 03:23:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:57.598 03:23:31 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:57.598 03:23:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:57.598 03:23:31 -- common/autotest_common.sh@10 -- # set +x 00:22:57.598 03:23:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:57.598 03:23:31 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:22:57.598 03:23:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:57.598 03:23:31 -- common/autotest_common.sh@10 -- # set +x 00:22:57.598 [2024-04-25 03:23:31.900187] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:22:57.598 [ 00:22:57.598 { 00:22:57.598 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:57.598 "subtype": "Discovery", 00:22:57.598 "listen_addresses": [ 00:22:57.598 { 00:22:57.598 "transport": "TCP", 00:22:57.598 "trtype": "TCP", 00:22:57.598 "adrfam": "IPv4", 00:22:57.598 "traddr": "10.0.0.2", 00:22:57.598 "trsvcid": "4420" 00:22:57.598 } 00:22:57.598 ], 00:22:57.598 "allow_any_host": true, 00:22:57.598 "hosts": [] 00:22:57.598 }, 00:22:57.598 
{ 00:22:57.598 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:57.598 "subtype": "NVMe", 00:22:57.598 "listen_addresses": [ 00:22:57.598 { 00:22:57.598 "transport": "TCP", 00:22:57.598 "trtype": "TCP", 00:22:57.598 "adrfam": "IPv4", 00:22:57.598 "traddr": "10.0.0.2", 00:22:57.598 "trsvcid": "4420" 00:22:57.598 } 00:22:57.598 ], 00:22:57.598 "allow_any_host": true, 00:22:57.598 "hosts": [], 00:22:57.598 "serial_number": "SPDK00000000000001", 00:22:57.598 "model_number": "SPDK bdev Controller", 00:22:57.598 "max_namespaces": 32, 00:22:57.598 "min_cntlid": 1, 00:22:57.598 "max_cntlid": 65519, 00:22:57.598 "namespaces": [ 00:22:57.598 { 00:22:57.598 "nsid": 1, 00:22:57.598 "bdev_name": "Malloc0", 00:22:57.598 "name": "Malloc0", 00:22:57.598 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:22:57.598 "eui64": "ABCDEF0123456789", 00:22:57.598 "uuid": "8eaf30a4-160d-4cce-8e7c-6b5f31a0a4a4" 00:22:57.598 } 00:22:57.598 ] 00:22:57.598 } 00:22:57.598 ] 00:22:57.598 03:23:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:57.598 03:23:31 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:22:57.598 [2024-04-25 03:23:31.926549] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 
00:22:57.598 [2024-04-25 03:23:31.926593] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1566910 ] 00:22:57.598 EAL: No free 2048 kB hugepages reported on node 1 00:22:57.598 [2024-04-25 03:23:31.962978] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:22:57.598 [2024-04-25 03:23:31.963051] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:57.598 [2024-04-25 03:23:31.963061] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:57.598 [2024-04-25 03:23:31.963078] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:57.598 [2024-04-25 03:23:31.963091] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:57.598 [2024-04-25 03:23:31.963490] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:22:57.598 [2024-04-25 03:23:31.963565] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x20bcd00 0 00:22:57.598 [2024-04-25 03:23:31.969648] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:57.598 [2024-04-25 03:23:31.969673] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:57.598 [2024-04-25 03:23:31.969682] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:57.598 [2024-04-25 03:23:31.969688] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:57.598 [2024-04-25 03:23:31.969764] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:57.598 [2024-04-25 03:23:31.969778] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:22:57.598 [2024-04-25 03:23:31.969787] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20bcd00) 00:22:57.598 [2024-04-25 03:23:31.969809] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:57.598 [2024-04-25 03:23:31.969837] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x211bec0, cid 0, qid 0 00:22:57.598 [2024-04-25 03:23:31.977646] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:57.598 [2024-04-25 03:23:31.977664] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:57.598 [2024-04-25 03:23:31.977671] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:57.599 [2024-04-25 03:23:31.977681] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x211bec0) on tqpair=0x20bcd00 00:22:57.599 [2024-04-25 03:23:31.977721] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:57.599 [2024-04-25 03:23:31.977736] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:22:57.599 [2024-04-25 03:23:31.977745] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:22:57.599 [2024-04-25 03:23:31.977769] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:57.599 [2024-04-25 03:23:31.977777] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:57.599 [2024-04-25 03:23:31.977784] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20bcd00) 00:22:57.599 [2024-04-25 03:23:31.977795] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.599 [2024-04-25 03:23:31.977820] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x211bec0, cid 0, qid 0 00:22:57.599 [2024-04-25 03:23:31.978036] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:57.599 [2024-04-25 03:23:31.978051] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:57.599 [2024-04-25 03:23:31.978058] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:57.599 [2024-04-25 03:23:31.978065] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x211bec0) on tqpair=0x20bcd00 00:22:57.599 [2024-04-25 03:23:31.978082] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:22:57.599 [2024-04-25 03:23:31.978097] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:22:57.599 [2024-04-25 03:23:31.978110] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:57.599 [2024-04-25 03:23:31.978118] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:57.599 [2024-04-25 03:23:31.978124] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20bcd00) 00:22:57.599 [2024-04-25 03:23:31.978135] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.599 [2024-04-25 03:23:31.978156] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x211bec0, cid 0, qid 0 00:22:57.599 [2024-04-25 03:23:31.978375] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:57.599 [2024-04-25 03:23:31.978387] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:57.599 [2024-04-25 03:23:31.978394] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:57.599 [2024-04-25 03:23:31.978401] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x211bec0) on tqpair=0x20bcd00 00:22:57.599 [2024-04-25 
03:23:31.978412] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:22:57.599 [2024-04-25 03:23:31.978427] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:22:57.599 [2024-04-25 03:23:31.978439] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:57.599 [2024-04-25 03:23:31.978446] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:57.599 [2024-04-25 03:23:31.978453] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20bcd00) 00:22:57.599 [2024-04-25 03:23:31.978463] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.599 [2024-04-25 03:23:31.978483] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x211bec0, cid 0, qid 0 00:22:57.599 [2024-04-25 03:23:31.978709] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:57.599 [2024-04-25 03:23:31.978725] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:57.599 [2024-04-25 03:23:31.978732] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:57.599 [2024-04-25 03:23:31.978739] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x211bec0) on tqpair=0x20bcd00 00:22:57.599 [2024-04-25 03:23:31.978750] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:57.599 [2024-04-25 03:23:31.978767] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:57.599 [2024-04-25 03:23:31.978776] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:57.599 [2024-04-25 03:23:31.978783] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on 
tqpair(0x20bcd00) 00:22:57.599 [2024-04-25 03:23:31.978794] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.599 [2024-04-25 03:23:31.978815] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x211bec0, cid 0, qid 0 00:22:57.599 [2024-04-25 03:23:31.978992] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:57.599 [2024-04-25 03:23:31.979007] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:57.599 [2024-04-25 03:23:31.979013] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:57.599 [2024-04-25 03:23:31.979020] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x211bec0) on tqpair=0x20bcd00 00:22:57.599 [2024-04-25 03:23:31.979031] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:22:57.599 [2024-04-25 03:23:31.979041] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:22:57.599 [2024-04-25 03:23:31.979054] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:57.599 [2024-04-25 03:23:31.979165] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:22:57.599 [2024-04-25 03:23:31.979174] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:57.599 [2024-04-25 03:23:31.979205] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:57.599 [2024-04-25 03:23:31.979213] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:57.599 [2024-04-25 03:23:31.979219] nvme_tcp.c: 
958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20bcd00) 00:22:57.599 [2024-04-25 03:23:31.979230] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.599 [2024-04-25 03:23:31.979250] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x211bec0, cid 0, qid 0 00:22:57.599 [2024-04-25 03:23:31.979475] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:57.599 [2024-04-25 03:23:31.979488] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:57.599 [2024-04-25 03:23:31.979494] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:57.599 [2024-04-25 03:23:31.979501] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x211bec0) on tqpair=0x20bcd00 00:22:57.599 [2024-04-25 03:23:31.979511] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:57.599 [2024-04-25 03:23:31.979528] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:57.599 [2024-04-25 03:23:31.979537] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:57.599 [2024-04-25 03:23:31.979543] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20bcd00) 00:22:57.599 [2024-04-25 03:23:31.979554] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.599 [2024-04-25 03:23:31.979579] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x211bec0, cid 0, qid 0 00:22:57.599 [2024-04-25 03:23:31.979791] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:57.599 [2024-04-25 03:23:31.979804] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:57.599 [2024-04-25 03:23:31.979811] 
nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:57.599 [2024-04-25 03:23:31.979818] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x211bec0) on tqpair=0x20bcd00 00:22:57.599 [2024-04-25 03:23:31.979828] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:57.599 [2024-04-25 03:23:31.979836] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:22:57.599 [2024-04-25 03:23:31.979849] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:22:57.599 [2024-04-25 03:23:31.979864] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:22:57.599 [2024-04-25 03:23:31.979881] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:57.599 [2024-04-25 03:23:31.979889] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20bcd00) 00:22:57.599 [2024-04-25 03:23:31.979900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.599 [2024-04-25 03:23:31.979922] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x211bec0, cid 0, qid 0 00:22:57.599 [2024-04-25 03:23:31.980229] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:57.599 [2024-04-25 03:23:31.980244] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:57.599 [2024-04-25 03:23:31.980251] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:57.599 [2024-04-25 03:23:31.980259] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0x20bcd00): datao=0, datal=4096, cccid=0 00:22:57.599 [2024-04-25 03:23:31.980267] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x211bec0) on tqpair(0x20bcd00): expected_datao=0, payload_size=4096 00:22:57.599 [2024-04-25 03:23:31.980276] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:57.599 [2024-04-25 03:23:31.980288] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:57.599 [2024-04-25 03:23:31.980298] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:57.599 [2024-04-25 03:23:32.023639] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:57.599 [2024-04-25 03:23:32.023658] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:57.599 [2024-04-25 03:23:32.023666] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:57.599 [2024-04-25 03:23:32.023673] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x211bec0) on tqpair=0x20bcd00 00:22:57.599 [2024-04-25 03:23:32.023687] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:22:57.599 [2024-04-25 03:23:32.023696] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:22:57.599 [2024-04-25 03:23:32.023704] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:22:57.599 [2024-04-25 03:23:32.023718] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:22:57.599 [2024-04-25 03:23:32.023727] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:22:57.599 [2024-04-25 03:23:32.023735] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:22:57.599 [2024-04-25 
03:23:32.023751] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:22:57.599 [2024-04-25 03:23:32.023784] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:57.600 [2024-04-25 03:23:32.023793] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:57.600 [2024-04-25 03:23:32.023800] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20bcd00) 00:22:57.600 [2024-04-25 03:23:32.023812] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:57.600 [2024-04-25 03:23:32.023836] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x211bec0, cid 0, qid 0 00:22:57.600 [2024-04-25 03:23:32.024017] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:57.600 [2024-04-25 03:23:32.024033] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:57.600 [2024-04-25 03:23:32.024040] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:57.600 [2024-04-25 03:23:32.024047] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x211bec0) on tqpair=0x20bcd00 00:22:57.600 [2024-04-25 03:23:32.024062] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:57.600 [2024-04-25 03:23:32.024070] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:57.600 [2024-04-25 03:23:32.024076] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20bcd00) 00:22:57.600 [2024-04-25 03:23:32.024087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:57.600 [2024-04-25 03:23:32.024097] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:57.600 [2024-04-25 03:23:32.024104] nvme_tcp.c: 
949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:57.600 [2024-04-25 03:23:32.024111] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x20bcd00) 00:22:57.600 [2024-04-25 03:23:32.024120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:57.600 [2024-04-25 03:23:32.024129] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:57.600 [2024-04-25 03:23:32.024137] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:57.600 [2024-04-25 03:23:32.024143] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x20bcd00) 00:22:57.600 [2024-04-25 03:23:32.024152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:57.600 [2024-04-25 03:23:32.024162] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:57.600 [2024-04-25 03:23:32.024169] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:57.600 [2024-04-25 03:23:32.024175] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20bcd00) 00:22:57.600 [2024-04-25 03:23:32.024184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:57.600 [2024-04-25 03:23:32.024194] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:22:57.600 [2024-04-25 03:23:32.024229] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:57.600 [2024-04-25 03:23:32.024243] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:57.600 [2024-04-25 03:23:32.024250] nvme_tcp.c: 
958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20bcd00) 00:22:57.600 [2024-04-25 03:23:32.024261] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.600 [2024-04-25 03:23:32.024283] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x211bec0, cid 0, qid 0 00:22:57.600 [2024-04-25 03:23:32.024310] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x211c020, cid 1, qid 0 00:22:57.600 [2024-04-25 03:23:32.024321] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x211c180, cid 2, qid 0 00:22:57.600 [2024-04-25 03:23:32.024330] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x211c2e0, cid 3, qid 0 00:22:57.600 [2024-04-25 03:23:32.024338] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x211c440, cid 4, qid 0 00:22:57.600 [2024-04-25 03:23:32.024542] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:57.600 [2024-04-25 03:23:32.024555] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:57.600 [2024-04-25 03:23:32.024562] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:57.600 [2024-04-25 03:23:32.024569] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x211c440) on tqpair=0x20bcd00 00:22:57.600 [2024-04-25 03:23:32.024580] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:22:57.600 [2024-04-25 03:23:32.024589] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:22:57.600 [2024-04-25 03:23:32.024607] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:57.600 [2024-04-25 03:23:32.024617] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x20bcd00) 00:22:57.600 [2024-04-25 03:23:32.024635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.600 [2024-04-25 03:23:32.024658] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x211c440, cid 4, qid 0 00:22:57.600 [2024-04-25 03:23:32.024839] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:57.600 [2024-04-25 03:23:32.024851] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:57.600 [2024-04-25 03:23:32.024858] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:57.600 [2024-04-25 03:23:32.024865] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20bcd00): datao=0, datal=4096, cccid=4 00:22:57.600 [2024-04-25 03:23:32.024872] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x211c440) on tqpair(0x20bcd00): expected_datao=0, payload_size=4096 00:22:57.600 [2024-04-25 03:23:32.024880] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:57.600 [2024-04-25 03:23:32.024935] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:57.600 [2024-04-25 03:23:32.024944] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:57.600 [2024-04-25 03:23:32.025071] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:57.600 [2024-04-25 03:23:32.025085] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:57.600 [2024-04-25 03:23:32.025092] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:57.600 [2024-04-25 03:23:32.025099] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x211c440) on tqpair=0x20bcd00 00:22:57.600 [2024-04-25 03:23:32.025120] nvme_ctrlr.c:4036:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:22:57.600 [2024-04-25 03:23:32.025153] 
nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:57.600 [2024-04-25 03:23:32.025163] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20bcd00) 00:22:57.600 [2024-04-25 03:23:32.025174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.600 [2024-04-25 03:23:32.025186] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:57.600 [2024-04-25 03:23:32.025193] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:57.600 [2024-04-25 03:23:32.025200] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x20bcd00) 00:22:57.600 [2024-04-25 03:23:32.025208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:57.600 [2024-04-25 03:23:32.025237] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x211c440, cid 4, qid 0 00:22:57.600 [2024-04-25 03:23:32.025263] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x211c5a0, cid 5, qid 0 00:22:57.600 [2024-04-25 03:23:32.025495] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:57.600 [2024-04-25 03:23:32.025509] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:57.600 [2024-04-25 03:23:32.025516] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:57.600 [2024-04-25 03:23:32.025523] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20bcd00): datao=0, datal=1024, cccid=4 00:22:57.600 [2024-04-25 03:23:32.025531] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x211c440) on tqpair(0x20bcd00): expected_datao=0, payload_size=1024 00:22:57.600 [2024-04-25 03:23:32.025538] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:57.600 [2024-04-25 03:23:32.025548] 
nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:57.600 [2024-04-25 03:23:32.025556] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:57.600 [2024-04-25 03:23:32.025564] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:57.600 [2024-04-25 03:23:32.025574] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:57.600 [2024-04-25 03:23:32.025580] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:57.600 [2024-04-25 03:23:32.025587] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x211c5a0) on tqpair=0x20bcd00 00:22:57.600 [2024-04-25 03:23:32.065800] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:57.600 [2024-04-25 03:23:32.065818] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:57.600 [2024-04-25 03:23:32.065826] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:57.600 [2024-04-25 03:23:32.065833] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x211c440) on tqpair=0x20bcd00 00:22:57.600 [2024-04-25 03:23:32.065853] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:57.600 [2024-04-25 03:23:32.065862] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20bcd00) 00:22:57.600 [2024-04-25 03:23:32.065874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.600 [2024-04-25 03:23:32.065903] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x211c440, cid 4, qid 0 00:22:57.600 [2024-04-25 03:23:32.066098] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:57.600 [2024-04-25 03:23:32.066113] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:57.600 [2024-04-25 03:23:32.066120] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
enter 00:22:57.600 [2024-04-25 03:23:32.066126] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20bcd00): datao=0, datal=3072, cccid=4 00:22:57.600 [2024-04-25 03:23:32.066134] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x211c440) on tqpair(0x20bcd00): expected_datao=0, payload_size=3072 00:22:57.600 [2024-04-25 03:23:32.066142] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:57.600 [2024-04-25 03:23:32.066187] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:57.600 [2024-04-25 03:23:32.066205] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:57.864 [2024-04-25 03:23:32.110662] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:57.864 [2024-04-25 03:23:32.110683] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:57.864 [2024-04-25 03:23:32.110691] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:57.864 [2024-04-25 03:23:32.110698] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x211c440) on tqpair=0x20bcd00 00:22:57.864 [2024-04-25 03:23:32.110717] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:57.864 [2024-04-25 03:23:32.110727] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20bcd00) 00:22:57.864 [2024-04-25 03:23:32.110739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.864 [2024-04-25 03:23:32.110769] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x211c440, cid 4, qid 0 00:22:57.864 [2024-04-25 03:23:32.110970] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:57.864 [2024-04-25 03:23:32.110983] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:57.864 [2024-04-25 03:23:32.110991] 
nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:57.864 [2024-04-25 03:23:32.110997] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20bcd00): datao=0, datal=8, cccid=4 00:22:57.864 [2024-04-25 03:23:32.111005] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x211c440) on tqpair(0x20bcd00): expected_datao=0, payload_size=8 00:22:57.864 [2024-04-25 03:23:32.111013] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:57.864 [2024-04-25 03:23:32.111023] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:57.864 [2024-04-25 03:23:32.111031] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:57.864 [2024-04-25 03:23:32.156647] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:57.864 [2024-04-25 03:23:32.156667] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:57.864 [2024-04-25 03:23:32.156675] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:57.864 [2024-04-25 03:23:32.156682] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x211c440) on tqpair=0x20bcd00 00:22:57.864 ===================================================== 00:22:57.864 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:57.864 ===================================================== 00:22:57.864 Controller Capabilities/Features 00:22:57.864 ================================ 00:22:57.864 Vendor ID: 0000 00:22:57.864 Subsystem Vendor ID: 0000 00:22:57.864 Serial Number: .................... 00:22:57.864 Model Number: ........................................ 
00:22:57.864 Firmware Version: 24.05 00:22:57.864 Recommended Arb Burst: 0 00:22:57.864 IEEE OUI Identifier: 00 00 00 00:22:57.864 Multi-path I/O 00:22:57.864 May have multiple subsystem ports: No 00:22:57.864 May have multiple controllers: No 00:22:57.864 Associated with SR-IOV VF: No 00:22:57.864 Max Data Transfer Size: 131072 00:22:57.864 Max Number of Namespaces: 0 00:22:57.864 Max Number of I/O Queues: 1024 00:22:57.864 NVMe Specification Version (VS): 1.3 00:22:57.864 NVMe Specification Version (Identify): 1.3 00:22:57.864 Maximum Queue Entries: 128 00:22:57.864 Contiguous Queues Required: Yes 00:22:57.864 Arbitration Mechanisms Supported 00:22:57.864 Weighted Round Robin: Not Supported 00:22:57.864 Vendor Specific: Not Supported 00:22:57.864 Reset Timeout: 15000 ms 00:22:57.864 Doorbell Stride: 4 bytes 00:22:57.864 NVM Subsystem Reset: Not Supported 00:22:57.864 Command Sets Supported 00:22:57.864 NVM Command Set: Supported 00:22:57.864 Boot Partition: Not Supported 00:22:57.864 Memory Page Size Minimum: 4096 bytes 00:22:57.864 Memory Page Size Maximum: 4096 bytes 00:22:57.864 Persistent Memory Region: Not Supported 00:22:57.864 Optional Asynchronous Events Supported 00:22:57.864 Namespace Attribute Notices: Not Supported 00:22:57.864 Firmware Activation Notices: Not Supported 00:22:57.864 ANA Change Notices: Not Supported 00:22:57.864 PLE Aggregate Log Change Notices: Not Supported 00:22:57.864 LBA Status Info Alert Notices: Not Supported 00:22:57.864 EGE Aggregate Log Change Notices: Not Supported 00:22:57.865 Normal NVM Subsystem Shutdown event: Not Supported 00:22:57.865 Zone Descriptor Change Notices: Not Supported 00:22:57.865 Discovery Log Change Notices: Supported 00:22:57.865 Controller Attributes 00:22:57.865 128-bit Host Identifier: Not Supported 00:22:57.865 Non-Operational Permissive Mode: Not Supported 00:22:57.865 NVM Sets: Not Supported 00:22:57.865 Read Recovery Levels: Not Supported 00:22:57.865 Endurance Groups: Not Supported 00:22:57.865 
Predictable Latency Mode: Not Supported 00:22:57.865 Traffic Based Keep ALive: Not Supported 00:22:57.865 Namespace Granularity: Not Supported 00:22:57.865 SQ Associations: Not Supported 00:22:57.865 UUID List: Not Supported 00:22:57.865 Multi-Domain Subsystem: Not Supported 00:22:57.865 Fixed Capacity Management: Not Supported 00:22:57.865 Variable Capacity Management: Not Supported 00:22:57.865 Delete Endurance Group: Not Supported 00:22:57.865 Delete NVM Set: Not Supported 00:22:57.865 Extended LBA Formats Supported: Not Supported 00:22:57.865 Flexible Data Placement Supported: Not Supported 00:22:57.865 00:22:57.865 Controller Memory Buffer Support 00:22:57.865 ================================ 00:22:57.865 Supported: No 00:22:57.865 00:22:57.865 Persistent Memory Region Support 00:22:57.865 ================================ 00:22:57.865 Supported: No 00:22:57.865 00:22:57.865 Admin Command Set Attributes 00:22:57.865 ============================ 00:22:57.865 Security Send/Receive: Not Supported 00:22:57.865 Format NVM: Not Supported 00:22:57.865 Firmware Activate/Download: Not Supported 00:22:57.865 Namespace Management: Not Supported 00:22:57.865 Device Self-Test: Not Supported 00:22:57.865 Directives: Not Supported 00:22:57.865 NVMe-MI: Not Supported 00:22:57.865 Virtualization Management: Not Supported 00:22:57.865 Doorbell Buffer Config: Not Supported 00:22:57.865 Get LBA Status Capability: Not Supported 00:22:57.865 Command & Feature Lockdown Capability: Not Supported 00:22:57.865 Abort Command Limit: 1 00:22:57.865 Async Event Request Limit: 4 00:22:57.865 Number of Firmware Slots: N/A 00:22:57.865 Firmware Slot 1 Read-Only: N/A 00:22:57.865 Firmware Activation Without Reset: N/A 00:22:57.865 Multiple Update Detection Support: N/A 00:22:57.865 Firmware Update Granularity: No Information Provided 00:22:57.865 Per-Namespace SMART Log: No 00:22:57.865 Asymmetric Namespace Access Log Page: Not Supported 00:22:57.865 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:22:57.865 Command Effects Log Page: Not Supported 00:22:57.865 Get Log Page Extended Data: Supported 00:22:57.865 Telemetry Log Pages: Not Supported 00:22:57.865 Persistent Event Log Pages: Not Supported 00:22:57.865 Supported Log Pages Log Page: May Support 00:22:57.865 Commands Supported & Effects Log Page: Not Supported 00:22:57.865 Feature Identifiers & Effects Log Page:May Support 00:22:57.865 NVMe-MI Commands & Effects Log Page: May Support 00:22:57.865 Data Area 4 for Telemetry Log: Not Supported 00:22:57.865 Error Log Page Entries Supported: 128 00:22:57.865 Keep Alive: Not Supported 00:22:57.865 00:22:57.865 NVM Command Set Attributes 00:22:57.865 ========================== 00:22:57.865 Submission Queue Entry Size 00:22:57.865 Max: 1 00:22:57.865 Min: 1 00:22:57.865 Completion Queue Entry Size 00:22:57.865 Max: 1 00:22:57.865 Min: 1 00:22:57.865 Number of Namespaces: 0 00:22:57.865 Compare Command: Not Supported 00:22:57.865 Write Uncorrectable Command: Not Supported 00:22:57.865 Dataset Management Command: Not Supported 00:22:57.865 Write Zeroes Command: Not Supported 00:22:57.865 Set Features Save Field: Not Supported 00:22:57.865 Reservations: Not Supported 00:22:57.865 Timestamp: Not Supported 00:22:57.865 Copy: Not Supported 00:22:57.865 Volatile Write Cache: Not Present 00:22:57.865 Atomic Write Unit (Normal): 1 00:22:57.865 Atomic Write Unit (PFail): 1 00:22:57.865 Atomic Compare & Write Unit: 1 00:22:57.865 Fused Compare & Write: Supported 00:22:57.865 Scatter-Gather List 00:22:57.865 SGL Command Set: Supported 00:22:57.865 SGL Keyed: Supported 00:22:57.865 SGL Bit Bucket Descriptor: Not Supported 00:22:57.865 SGL Metadata Pointer: Not Supported 00:22:57.865 Oversized SGL: Not Supported 00:22:57.865 SGL Metadata Address: Not Supported 00:22:57.865 SGL Offset: Supported 00:22:57.865 Transport SGL Data Block: Not Supported 00:22:57.865 Replay Protected Memory Block: Not Supported 00:22:57.865 00:22:57.865 
Firmware Slot Information 00:22:57.865 ========================= 00:22:57.865 Active slot: 0 00:22:57.865 00:22:57.865 00:22:57.865 Error Log 00:22:57.865 ========= 00:22:57.865 00:22:57.865 Active Namespaces 00:22:57.865 ================= 00:22:57.865 Discovery Log Page 00:22:57.865 ================== 00:22:57.865 Generation Counter: 2 00:22:57.865 Number of Records: 2 00:22:57.865 Record Format: 0 00:22:57.865 00:22:57.865 Discovery Log Entry 0 00:22:57.865 ---------------------- 00:22:57.865 Transport Type: 3 (TCP) 00:22:57.865 Address Family: 1 (IPv4) 00:22:57.865 Subsystem Type: 3 (Current Discovery Subsystem) 00:22:57.865 Entry Flags: 00:22:57.865 Duplicate Returned Information: 1 00:22:57.865 Explicit Persistent Connection Support for Discovery: 1 00:22:57.865 Transport Requirements: 00:22:57.865 Secure Channel: Not Required 00:22:57.865 Port ID: 0 (0x0000) 00:22:57.865 Controller ID: 65535 (0xffff) 00:22:57.865 Admin Max SQ Size: 128 00:22:57.865 Transport Service Identifier: 4420 00:22:57.865 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:22:57.865 Transport Address: 10.0.0.2 00:22:57.865 Discovery Log Entry 1 00:22:57.865 ---------------------- 00:22:57.865 Transport Type: 3 (TCP) 00:22:57.865 Address Family: 1 (IPv4) 00:22:57.865 Subsystem Type: 2 (NVM Subsystem) 00:22:57.865 Entry Flags: 00:22:57.865 Duplicate Returned Information: 0 00:22:57.865 Explicit Persistent Connection Support for Discovery: 0 00:22:57.865 Transport Requirements: 00:22:57.865 Secure Channel: Not Required 00:22:57.865 Port ID: 0 (0x0000) 00:22:57.865 Controller ID: 65535 (0xffff) 00:22:57.865 Admin Max SQ Size: 128 00:22:57.865 Transport Service Identifier: 4420 00:22:57.865 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:22:57.865 Transport Address: 10.0.0.2 [2024-04-25 03:23:32.156800] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:22:57.865 [2024-04-25 03:23:32.156826] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.865 [2024-04-25 03:23:32.156839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.865 [2024-04-25 03:23:32.156849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.865 [2024-04-25 03:23:32.156859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.865 [2024-04-25 03:23:32.156872] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:57.865 [2024-04-25 03:23:32.156880] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:57.865 [2024-04-25 03:23:32.156887] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20bcd00) 00:22:57.865 [2024-04-25 03:23:32.156898] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.865 [2024-04-25 03:23:32.156923] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x211c2e0, cid 3, qid 0 00:22:57.865 [2024-04-25 03:23:32.157108] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:57.865 [2024-04-25 03:23:32.157124] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:57.865 [2024-04-25 03:23:32.157131] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:57.865 [2024-04-25 03:23:32.157138] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x211c2e0) on tqpair=0x20bcd00 00:22:57.865 [2024-04-25 03:23:32.157156] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:57.865 [2024-04-25 03:23:32.157166] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:57.865 [2024-04-25 
03:23:32.157172] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20bcd00) 00:22:57.865 [2024-04-25 03:23:32.157183] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.865 [2024-04-25 03:23:32.157210] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x211c2e0, cid 3, qid 0 00:22:57.865 [2024-04-25 03:23:32.157413] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:57.865 [2024-04-25 03:23:32.157425] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:57.865 [2024-04-25 03:23:32.157432] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:57.865 [2024-04-25 03:23:32.157439] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x211c2e0) on tqpair=0x20bcd00 00:22:57.865 [2024-04-25 03:23:32.157449] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:22:57.865 [2024-04-25 03:23:32.157463] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:22:57.865 [2024-04-25 03:23:32.157480] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:57.865 [2024-04-25 03:23:32.157489] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:57.866 [2024-04-25 03:23:32.157496] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20bcd00) 00:22:57.866 [2024-04-25 03:23:32.157506] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.866 [2024-04-25 03:23:32.157527] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x211c2e0, cid 3, qid 0 00:22:57.866 [2024-04-25 03:23:32.157716] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:57.866 [2024-04-25 
03:23:32.157732] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:57.866 [2024-04-25 03:23:32.157738] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:57.866 [2024-04-25 03:23:32.157745] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x211c2e0) on tqpair=0x20bcd00 00:22:57.866 [2024-04-25 03:23:32.157765] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:57.866 [2024-04-25 03:23:32.157774] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:57.866 [2024-04-25 03:23:32.157781] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20bcd00) 00:22:57.866 [2024-04-25 03:23:32.157792] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.866 [2024-04-25 03:23:32.157812] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x211c2e0, cid 3, qid 0 00:22:57.866 [2024-04-25 03:23:32.157992] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:57.866 [2024-04-25 03:23:32.158007] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:57.866 [2024-04-25 03:23:32.158015] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:57.866 [2024-04-25 03:23:32.158022] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x211c2e0) on tqpair=0x20bcd00 00:22:57.866 [2024-04-25 03:23:32.158040] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:57.866 [2024-04-25 03:23:32.158050] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:57.866 [2024-04-25 03:23:32.158057] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20bcd00) 00:22:57.866 [2024-04-25 03:23:32.158068] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.866 
[2024-04-25 03:23:32.158088] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x211c2e0, cid 3, qid 0 00:22:57.866 [2024-04-25 03:23:32.158268] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:57.866 [2024-04-25 03:23:32.158283] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:57.866 [2024-04-25 03:23:32.158290] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:57.866 [2024-04-25 03:23:32.158297] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x211c2e0) on tqpair=0x20bcd00 00:22:57.866 [2024-04-25 03:23:32.158316] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:57.866 [2024-04-25 03:23:32.158326] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:57.866 [2024-04-25 03:23:32.158333] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20bcd00) 00:22:57.866 [2024-04-25 03:23:32.158343] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.866 [2024-04-25 03:23:32.158365] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x211c2e0, cid 3, qid 0 00:22:57.866 [2024-04-25 03:23:32.158545] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:57.866 [2024-04-25 03:23:32.158560] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:57.866 [2024-04-25 03:23:32.158567] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:57.866 [2024-04-25 03:23:32.158578] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x211c2e0) on tqpair=0x20bcd00 00:22:57.866 [2024-04-25 03:23:32.158598] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:57.866 [2024-04-25 03:23:32.158608] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:57.866 [2024-04-25 03:23:32.158616] nvme_tcp.c: 
958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20bcd00) 00:22:57.866 [2024-04-25 03:23:32.158634] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.866 [2024-04-25 03:23:32.158657] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x211c2e0, cid 3, qid 0 00:22:57.866 [2024-04-25 03:23:32.158845] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:57.866 [2024-04-25 03:23:32.158857] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:57.866 [2024-04-25 03:23:32.158864] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:57.866 [2024-04-25 03:23:32.158871] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x211c2e0) on tqpair=0x20bcd00 00:22:57.866 [2024-04-25 03:23:32.158888] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:57.866 [2024-04-25 03:23:32.158898] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:57.866 [2024-04-25 03:23:32.158905] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20bcd00) 00:22:57.866 [2024-04-25 03:23:32.158915] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.866 [2024-04-25 03:23:32.158936] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x211c2e0, cid 3, qid 0 00:22:57.866 [2024-04-25 03:23:32.159119] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:57.866 [2024-04-25 03:23:32.159135] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:57.866 [2024-04-25 03:23:32.159142] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:57.866 [2024-04-25 03:23:32.159148] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x211c2e0) on tqpair=0x20bcd00 00:22:57.866 
[2024-04-25 03:23:32.159166] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:57.866 [2024-04-25 03:23:32.159176] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:57.866 [2024-04-25 03:23:32.159183] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20bcd00) 00:22:57.866 [2024-04-25 03:23:32.159193] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.866 [2024-04-25 03:23:32.159214] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x211c2e0, cid 3, qid 0 00:22:57.866 [2024-04-25 03:23:32.159391] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:57.866 [2024-04-25 03:23:32.159406] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:57.866 [2024-04-25 03:23:32.159412] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:57.866 [2024-04-25 03:23:32.159419] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x211c2e0) on tqpair=0x20bcd00 00:22:57.866 [2024-04-25 03:23:32.159437] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:57.866 [2024-04-25 03:23:32.159447] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:57.866 [2024-04-25 03:23:32.159453] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20bcd00) 00:22:57.866 [2024-04-25 03:23:32.159464] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.866 [2024-04-25 03:23:32.159485] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x211c2e0, cid 3, qid 0 00:22:57.866 [2024-04-25 03:23:32.159671] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:57.866 [2024-04-25 03:23:32.159686] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:57.866 
[2024-04-25 03:23:32.159693] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:57.866 [2024-04-25 03:23:32.159700] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x211c2e0) on tqpair=0x20bcd00 00:22:57.866 [2024-04-25 03:23:32.159723] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:57.866 [2024-04-25 03:23:32.159733] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:57.866 [2024-04-25 03:23:32.159740] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20bcd00) 00:22:57.866 [2024-04-25 03:23:32.159750] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.866 [2024-04-25 03:23:32.159771] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x211c2e0, cid 3, qid 0 00:22:57.866 [2024-04-25 03:23:32.159949] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:57.866 [2024-04-25 03:23:32.159964] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:57.866 [2024-04-25 03:23:32.159971] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:57.866 [2024-04-25 03:23:32.159977] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x211c2e0) on tqpair=0x20bcd00 00:22:57.866 [2024-04-25 03:23:32.159995] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:57.866 [2024-04-25 03:23:32.160005] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:57.866 [2024-04-25 03:23:32.160012] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20bcd00) 00:22:57.866 [2024-04-25 03:23:32.160022] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.866 [2024-04-25 03:23:32.160043] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x211c2e0, 
cid 3, qid 0 00:22:57.866 [2024-04-25 03:23:32.160216] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:57.866 [2024-04-25 03:23:32.160231] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:57.866 [2024-04-25 03:23:32.160238] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:57.866 [2024-04-25 03:23:32.160245] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x211c2e0) on tqpair=0x20bcd00 00:22:57.866 [2024-04-25 03:23:32.160263] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:57.866 [2024-04-25 03:23:32.160273] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:57.866 [2024-04-25 03:23:32.160279] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20bcd00) 00:22:57.866 [2024-04-25 03:23:32.160290] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.866 [2024-04-25 03:23:32.160311] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x211c2e0, cid 3, qid 0 00:22:57.866 [2024-04-25 03:23:32.160488] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:57.866 [2024-04-25 03:23:32.160503] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:57.866 [2024-04-25 03:23:32.160510] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:57.866 [2024-04-25 03:23:32.160516] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x211c2e0) on tqpair=0x20bcd00 00:22:57.866 [2024-04-25 03:23:32.160534] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:57.866 [2024-04-25 03:23:32.160544] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:57.866 [2024-04-25 03:23:32.160551] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20bcd00) 00:22:57.866 [2024-04-25 03:23:32.160561] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.866 [2024-04-25 03:23:32.160582] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x211c2e0, cid 3, qid 0 00:22:57.866 [2024-04-25 03:23:32.164644] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:57.866 [2024-04-25 03:23:32.164661] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:57.866 [2024-04-25 03:23:32.164668] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:57.866 [2024-04-25 03:23:32.164674] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x211c2e0) on tqpair=0x20bcd00 00:22:57.866 [2024-04-25 03:23:32.164711] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:57.866 [2024-04-25 03:23:32.164723] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:57.867 [2024-04-25 03:23:32.164730] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20bcd00) 00:22:57.867 [2024-04-25 03:23:32.164741] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.867 [2024-04-25 03:23:32.164763] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x211c2e0, cid 3, qid 0 00:22:57.867 [2024-04-25 03:23:32.164946] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:57.867 [2024-04-25 03:23:32.164961] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:57.867 [2024-04-25 03:23:32.164968] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:57.867 [2024-04-25 03:23:32.164975] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x211c2e0) on tqpair=0x20bcd00 00:22:57.867 [2024-04-25 03:23:32.164991] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown 
complete in 7 milliseconds 00:22:57.867 00:22:57.867 03:23:32 -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:22:57.867 [2024-04-25 03:23:32.200685] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:22:57.867 [2024-04-25 03:23:32.200733] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1566917 ] 00:22:57.867 EAL: No free 2048 kB hugepages reported on node 1 00:22:57.867 [2024-04-25 03:23:32.235385] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:22:57.867 [2024-04-25 03:23:32.235437] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:57.867 [2024-04-25 03:23:32.235447] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:57.867 [2024-04-25 03:23:32.235461] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:57.867 [2024-04-25 03:23:32.235472] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:57.867 [2024-04-25 03:23:32.235800] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:22:57.867 [2024-04-25 03:23:32.235843] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1588d00 0 00:22:57.867 [2024-04-25 03:23:32.242643] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:57.867 [2024-04-25 03:23:32.242661] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:57.867 [2024-04-25 03:23:32.242669] 
nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:57.867 [2024-04-25 03:23:32.242675] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:57.867 [2024-04-25 03:23:32.242727] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:57.867 [2024-04-25 03:23:32.242740] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:57.867 [2024-04-25 03:23:32.242746] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1588d00) 00:22:57.867 [2024-04-25 03:23:32.242760] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:57.867 [2024-04-25 03:23:32.242786] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15e7ec0, cid 0, qid 0 00:22:57.867 [2024-04-25 03:23:32.250641] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:57.867 [2024-04-25 03:23:32.250659] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:57.867 [2024-04-25 03:23:32.250670] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:57.867 [2024-04-25 03:23:32.250678] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15e7ec0) on tqpair=0x1588d00 00:22:57.867 [2024-04-25 03:23:32.250712] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:57.867 [2024-04-25 03:23:32.250724] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:22:57.867 [2024-04-25 03:23:32.250734] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:22:57.867 [2024-04-25 03:23:32.250751] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:57.867 [2024-04-25 03:23:32.250760] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:57.867 [2024-04-25 03:23:32.250766] 
nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1588d00) 00:22:57.867 [2024-04-25 03:23:32.250778] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.867 [2024-04-25 03:23:32.250801] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15e7ec0, cid 0, qid 0 00:22:57.867 [2024-04-25 03:23:32.251003] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:57.867 [2024-04-25 03:23:32.251015] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:57.867 [2024-04-25 03:23:32.251022] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:57.867 [2024-04-25 03:23:32.251029] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15e7ec0) on tqpair=0x1588d00 00:22:57.867 [2024-04-25 03:23:32.251042] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:22:57.867 [2024-04-25 03:23:32.251057] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:22:57.867 [2024-04-25 03:23:32.251070] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:57.867 [2024-04-25 03:23:32.251078] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:57.867 [2024-04-25 03:23:32.251084] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1588d00) 00:22:57.867 [2024-04-25 03:23:32.251095] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.867 [2024-04-25 03:23:32.251116] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15e7ec0, cid 0, qid 0 00:22:57.867 [2024-04-25 03:23:32.251315] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:57.867 [2024-04-25 
03:23:32.251331] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:57.867 [2024-04-25 03:23:32.251338] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:57.867 [2024-04-25 03:23:32.251344] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15e7ec0) on tqpair=0x1588d00 00:22:57.867 [2024-04-25 03:23:32.251354] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:22:57.867 [2024-04-25 03:23:32.251368] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:22:57.867 [2024-04-25 03:23:32.251380] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:57.867 [2024-04-25 03:23:32.251388] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:57.867 [2024-04-25 03:23:32.251394] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1588d00) 00:22:57.867 [2024-04-25 03:23:32.251405] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.867 [2024-04-25 03:23:32.251426] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15e7ec0, cid 0, qid 0 00:22:57.867 [2024-04-25 03:23:32.251619] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:57.867 [2024-04-25 03:23:32.251642] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:57.867 [2024-04-25 03:23:32.251649] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:57.867 [2024-04-25 03:23:32.251660] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15e7ec0) on tqpair=0x1588d00 00:22:57.867 [2024-04-25 03:23:32.251671] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:57.867 
[2024-04-25 03:23:32.251688] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:57.867 [2024-04-25 03:23:32.251698] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:57.867 [2024-04-25 03:23:32.251704] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1588d00) 00:22:57.867 [2024-04-25 03:23:32.251715] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.867 [2024-04-25 03:23:32.251736] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15e7ec0, cid 0, qid 0 00:22:57.867 [2024-04-25 03:23:32.251931] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:57.867 [2024-04-25 03:23:32.251946] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:57.867 [2024-04-25 03:23:32.251953] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:57.867 [2024-04-25 03:23:32.251960] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15e7ec0) on tqpair=0x1588d00 00:22:57.867 [2024-04-25 03:23:32.251969] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:22:57.867 [2024-04-25 03:23:32.251977] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:22:57.867 [2024-04-25 03:23:32.251991] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:57.867 [2024-04-25 03:23:32.252100] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:22:57.867 [2024-04-25 03:23:32.252108] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 
00:22:57.867 [2024-04-25 03:23:32.252120] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:57.867 [2024-04-25 03:23:32.252142] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:57.867 [2024-04-25 03:23:32.252149] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1588d00) 00:22:57.867 [2024-04-25 03:23:32.252159] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.867 [2024-04-25 03:23:32.252180] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15e7ec0, cid 0, qid 0 00:22:57.867 [2024-04-25 03:23:32.252382] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:57.867 [2024-04-25 03:23:32.252394] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:57.867 [2024-04-25 03:23:32.252401] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:57.867 [2024-04-25 03:23:32.252408] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15e7ec0) on tqpair=0x1588d00 00:22:57.867 [2024-04-25 03:23:32.252417] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:57.867 [2024-04-25 03:23:32.252434] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:57.867 [2024-04-25 03:23:32.252443] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:57.867 [2024-04-25 03:23:32.252450] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1588d00) 00:22:57.867 [2024-04-25 03:23:32.252460] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.867 [2024-04-25 03:23:32.252480] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15e7ec0, cid 0, qid 0 00:22:57.868 [2024-04-25 03:23:32.252661] 
nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:57.868 [2024-04-25 03:23:32.252676] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:57.868 [2024-04-25 03:23:32.252687] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:57.868 [2024-04-25 03:23:32.252694] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15e7ec0) on tqpair=0x1588d00 00:22:57.868 [2024-04-25 03:23:32.252703] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:57.868 [2024-04-25 03:23:32.252711] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:22:57.868 [2024-04-25 03:23:32.252725] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:22:57.868 [2024-04-25 03:23:32.252743] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:22:57.868 [2024-04-25 03:23:32.252757] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:57.868 [2024-04-25 03:23:32.252765] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1588d00) 00:22:57.868 [2024-04-25 03:23:32.252776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.868 [2024-04-25 03:23:32.252798] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15e7ec0, cid 0, qid 0 00:22:57.868 [2024-04-25 03:23:32.253033] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:57.868 [2024-04-25 03:23:32.253049] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:57.868 [2024-04-25 03:23:32.253056] 
nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:57.868 [2024-04-25 03:23:32.253062] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1588d00): datao=0, datal=4096, cccid=0 00:22:57.868 [2024-04-25 03:23:32.253070] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15e7ec0) on tqpair(0x1588d00): expected_datao=0, payload_size=4096 00:22:57.868 [2024-04-25 03:23:32.253077] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:57.868 [2024-04-25 03:23:32.253088] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:57.868 [2024-04-25 03:23:32.253095] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:57.868 [2024-04-25 03:23:32.253160] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:57.868 [2024-04-25 03:23:32.253172] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:57.868 [2024-04-25 03:23:32.253179] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:57.868 [2024-04-25 03:23:32.253186] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15e7ec0) on tqpair=0x1588d00 00:22:57.868 [2024-04-25 03:23:32.253198] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:22:57.868 [2024-04-25 03:23:32.253207] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:22:57.868 [2024-04-25 03:23:32.253214] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:22:57.868 [2024-04-25 03:23:32.253226] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:22:57.868 [2024-04-25 03:23:32.253234] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:22:57.868 [2024-04-25 03:23:32.253242] 
nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:22:57.868 [2024-04-25 03:23:32.253256] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:22:57.868 [2024-04-25 03:23:32.253269] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:57.868 [2024-04-25 03:23:32.253276] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:57.868 [2024-04-25 03:23:32.253283] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1588d00) 00:22:57.868 [2024-04-25 03:23:32.253294] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:57.868 [2024-04-25 03:23:32.253319] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15e7ec0, cid 0, qid 0 00:22:57.868 [2024-04-25 03:23:32.253517] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:57.868 [2024-04-25 03:23:32.253532] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:57.868 [2024-04-25 03:23:32.253539] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:57.868 [2024-04-25 03:23:32.253546] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15e7ec0) on tqpair=0x1588d00 00:22:57.868 [2024-04-25 03:23:32.253557] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:57.868 [2024-04-25 03:23:32.253565] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:57.868 [2024-04-25 03:23:32.253571] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1588d00) 00:22:57.868 [2024-04-25 03:23:32.253581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:57.868 
[2024-04-25 03:23:32.253591] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:57.868 [2024-04-25 03:23:32.253598] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:57.868 [2024-04-25 03:23:32.253604] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1588d00) 00:22:57.868 [2024-04-25 03:23:32.253612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:57.868 [2024-04-25 03:23:32.253622] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:57.868 [2024-04-25 03:23:32.253636] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:57.868 [2024-04-25 03:23:32.253643] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1588d00) 00:22:57.868 [2024-04-25 03:23:32.253652] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:57.868 [2024-04-25 03:23:32.253662] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:57.868 [2024-04-25 03:23:32.253669] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:57.868 [2024-04-25 03:23:32.253675] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1588d00) 00:22:57.868 [2024-04-25 03:23:32.253684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:57.868 [2024-04-25 03:23:32.253692] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:57.868 [2024-04-25 03:23:32.253711] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:57.868 [2024-04-25 03:23:32.253724] nvme_tcp.c: 
949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:57.868 [2024-04-25 03:23:32.253731] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1588d00) 00:22:57.868 [2024-04-25 03:23:32.253742] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.868 [2024-04-25 03:23:32.253764] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15e7ec0, cid 0, qid 0 00:22:57.868 [2024-04-25 03:23:32.253775] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15e8020, cid 1, qid 0 00:22:57.868 [2024-04-25 03:23:32.253783] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15e8180, cid 2, qid 0 00:22:57.868 [2024-04-25 03:23:32.253790] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15e82e0, cid 3, qid 0 00:22:57.868 [2024-04-25 03:23:32.253798] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15e8440, cid 4, qid 0 00:22:57.868 [2024-04-25 03:23:32.254018] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:57.868 [2024-04-25 03:23:32.254033] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:57.868 [2024-04-25 03:23:32.254044] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:57.868 [2024-04-25 03:23:32.254051] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15e8440) on tqpair=0x1588d00 00:22:57.868 [2024-04-25 03:23:32.254061] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:22:57.868 [2024-04-25 03:23:32.254070] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:22:57.868 [2024-04-25 03:23:32.254085] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to set number of queues (timeout 30000 ms) 00:22:57.868 [2024-04-25 03:23:32.254097] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:22:57.868 [2024-04-25 03:23:32.254108] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:57.868 [2024-04-25 03:23:32.254116] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:57.868 [2024-04-25 03:23:32.254122] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1588d00) 00:22:57.868 [2024-04-25 03:23:32.254133] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:57.868 [2024-04-25 03:23:32.254154] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15e8440, cid 4, qid 0 00:22:57.868 [2024-04-25 03:23:32.254350] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:57.868 [2024-04-25 03:23:32.254366] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:57.868 [2024-04-25 03:23:32.254372] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:57.868 [2024-04-25 03:23:32.254379] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15e8440) on tqpair=0x1588d00 00:22:57.868 [2024-04-25 03:23:32.254435] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:22:57.868 [2024-04-25 03:23:32.254455] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:22:57.868 [2024-04-25 03:23:32.254471] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:57.868 [2024-04-25 03:23:32.254479] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1588d00) 00:22:57.868 
[2024-04-25 03:23:32.254489] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.868 [2024-04-25 03:23:32.254510] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15e8440, cid 4, qid 0 00:22:57.868 [2024-04-25 03:23:32.258640] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:57.868 [2024-04-25 03:23:32.258655] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:57.868 [2024-04-25 03:23:32.258663] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:57.868 [2024-04-25 03:23:32.258669] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1588d00): datao=0, datal=4096, cccid=4 00:22:57.868 [2024-04-25 03:23:32.258676] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15e8440) on tqpair(0x1588d00): expected_datao=0, payload_size=4096 00:22:57.868 [2024-04-25 03:23:32.258683] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:57.868 [2024-04-25 03:23:32.258693] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:57.868 [2024-04-25 03:23:32.258700] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:57.868 [2024-04-25 03:23:32.258709] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:57.868 [2024-04-25 03:23:32.258717] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:57.869 [2024-04-25 03:23:32.258724] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:57.869 [2024-04-25 03:23:32.258730] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15e8440) on tqpair=0x1588d00 00:22:57.869 [2024-04-25 03:23:32.258749] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:22:57.869 [2024-04-25 03:23:32.258773] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:22:57.869 [2024-04-25 03:23:32.258808] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:22:57.869 [2024-04-25 03:23:32.258823] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:57.869 [2024-04-25 03:23:32.258830] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1588d00) 00:22:57.869 [2024-04-25 03:23:32.258841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.869 [2024-04-25 03:23:32.258864] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15e8440, cid 4, qid 0 00:22:57.869 [2024-04-25 03:23:32.259087] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:57.869 [2024-04-25 03:23:32.259103] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:57.869 [2024-04-25 03:23:32.259110] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:57.869 [2024-04-25 03:23:32.259116] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1588d00): datao=0, datal=4096, cccid=4 00:22:57.869 [2024-04-25 03:23:32.259123] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15e8440) on tqpair(0x1588d00): expected_datao=0, payload_size=4096 00:22:57.869 [2024-04-25 03:23:32.259131] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:57.869 [2024-04-25 03:23:32.259141] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:57.869 [2024-04-25 03:23:32.259148] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:57.869 [2024-04-25 03:23:32.259218] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:57.869 [2024-04-25 03:23:32.259230] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:22:57.869 [2024-04-25 03:23:32.259237] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:57.869 [2024-04-25 03:23:32.259243] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15e8440) on tqpair=0x1588d00 00:22:57.869 [2024-04-25 03:23:32.259269] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:22:57.869 [2024-04-25 03:23:32.259289] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:22:57.869 [2024-04-25 03:23:32.259303] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:57.869 [2024-04-25 03:23:32.259311] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1588d00) 00:22:57.869 [2024-04-25 03:23:32.259321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.869 [2024-04-25 03:23:32.259342] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15e8440, cid 4, qid 0 00:22:57.869 [2024-04-25 03:23:32.259561] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:57.869 [2024-04-25 03:23:32.259576] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:57.869 [2024-04-25 03:23:32.259583] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:57.869 [2024-04-25 03:23:32.259590] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1588d00): datao=0, datal=4096, cccid=4 00:22:57.869 [2024-04-25 03:23:32.259597] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15e8440) on tqpair(0x1588d00): expected_datao=0, payload_size=4096 00:22:57.869 [2024-04-25 03:23:32.259604] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:22:57.869 [2024-04-25 03:23:32.259614] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:57.869 [2024-04-25 03:23:32.259622] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:57.869 [2024-04-25 03:23:32.259688] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:57.869 [2024-04-25 03:23:32.259701] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:57.869 [2024-04-25 03:23:32.259711] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:57.869 [2024-04-25 03:23:32.259718] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15e8440) on tqpair=0x1588d00 00:22:57.869 [2024-04-25 03:23:32.259737] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:22:57.869 [2024-04-25 03:23:32.259752] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:22:57.869 [2024-04-25 03:23:32.259769] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:22:57.869 [2024-04-25 03:23:32.259781] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:22:57.869 [2024-04-25 03:23:32.259790] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:22:57.869 [2024-04-25 03:23:32.259799] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:22:57.869 [2024-04-25 03:23:32.259807] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:22:57.869 [2024-04-25 03:23:32.259815] 
nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:22:57.869 [2024-04-25 03:23:32.259835] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:57.869 [2024-04-25 03:23:32.259844] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1588d00) 00:22:57.869 [2024-04-25 03:23:32.259855] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.869 [2024-04-25 03:23:32.259866] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:57.869 [2024-04-25 03:23:32.259873] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:57.869 [2024-04-25 03:23:32.259879] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1588d00) 00:22:57.869 [2024-04-25 03:23:32.259888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:57.869 [2024-04-25 03:23:32.259928] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15e8440, cid 4, qid 0 00:22:57.869 [2024-04-25 03:23:32.259940] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15e85a0, cid 5, qid 0 00:22:57.869 [2024-04-25 03:23:32.260185] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:57.869 [2024-04-25 03:23:32.260198] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:57.869 [2024-04-25 03:23:32.260205] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:57.869 [2024-04-25 03:23:32.260211] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15e8440) on tqpair=0x1588d00 00:22:57.869 [2024-04-25 03:23:32.260224] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:57.869 [2024-04-25 03:23:32.260233] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =5 00:22:57.869 [2024-04-25 03:23:32.260240] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:57.869 [2024-04-25 03:23:32.260246] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15e85a0) on tqpair=0x1588d00 00:22:57.869 [2024-04-25 03:23:32.260263] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:57.869 [2024-04-25 03:23:32.260272] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1588d00) 00:22:57.869 [2024-04-25 03:23:32.260282] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.869 [2024-04-25 03:23:32.260302] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15e85a0, cid 5, qid 0 00:22:57.869 [2024-04-25 03:23:32.260491] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:57.869 [2024-04-25 03:23:32.260507] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:57.869 [2024-04-25 03:23:32.260514] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:57.869 [2024-04-25 03:23:32.260521] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15e85a0) on tqpair=0x1588d00 00:22:57.869 [2024-04-25 03:23:32.260538] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:57.869 [2024-04-25 03:23:32.260547] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1588d00) 00:22:57.869 [2024-04-25 03:23:32.260557] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.870 [2024-04-25 03:23:32.260577] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15e85a0, cid 5, qid 0 00:22:57.870 [2024-04-25 03:23:32.260748] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:57.870 
[2024-04-25 03:23:32.260763] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:57.870 [2024-04-25 03:23:32.260770] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:57.870 [2024-04-25 03:23:32.260777] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15e85a0) on tqpair=0x1588d00 00:22:57.870 [2024-04-25 03:23:32.260794] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:57.870 [2024-04-25 03:23:32.260804] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1588d00) 00:22:57.870 [2024-04-25 03:23:32.260814] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.870 [2024-04-25 03:23:32.260834] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15e85a0, cid 5, qid 0 00:22:57.870 [2024-04-25 03:23:32.261040] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:57.870 [2024-04-25 03:23:32.261052] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:57.870 [2024-04-25 03:23:32.261058] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:57.870 [2024-04-25 03:23:32.261065] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15e85a0) on tqpair=0x1588d00 00:22:57.870 [2024-04-25 03:23:32.261086] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:57.870 [2024-04-25 03:23:32.261096] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1588d00) 00:22:57.870 [2024-04-25 03:23:32.261106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.870 [2024-04-25 03:23:32.261118] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:57.870 [2024-04-25 03:23:32.261125] 
nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1588d00) 00:22:57.870 [2024-04-25 03:23:32.261135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.870 [2024-04-25 03:23:32.261146] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:57.870 [2024-04-25 03:23:32.261153] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1588d00) 00:22:57.870 [2024-04-25 03:23:32.261162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.870 [2024-04-25 03:23:32.261174] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:57.870 [2024-04-25 03:23:32.261181] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1588d00) 00:22:57.870 [2024-04-25 03:23:32.261190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.870 [2024-04-25 03:23:32.261211] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15e85a0, cid 5, qid 0 00:22:57.870 [2024-04-25 03:23:32.261222] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15e8440, cid 4, qid 0 00:22:57.870 [2024-04-25 03:23:32.261233] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15e8700, cid 6, qid 0 00:22:57.870 [2024-04-25 03:23:32.261241] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15e8860, cid 7, qid 0 00:22:57.870 [2024-04-25 03:23:32.261506] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:57.870 [2024-04-25 03:23:32.261518] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:57.870 
[2024-04-25 03:23:32.261525] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:57.870 [2024-04-25 03:23:32.261531] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1588d00): datao=0, datal=8192, cccid=5 00:22:57.870 [2024-04-25 03:23:32.261539] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15e85a0) on tqpair(0x1588d00): expected_datao=0, payload_size=8192 00:22:57.870 [2024-04-25 03:23:32.261547] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:57.870 [2024-04-25 03:23:32.261692] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:57.870 [2024-04-25 03:23:32.261704] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:57.870 [2024-04-25 03:23:32.261712] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:57.870 [2024-04-25 03:23:32.261721] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:57.870 [2024-04-25 03:23:32.261728] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:57.870 [2024-04-25 03:23:32.261734] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1588d00): datao=0, datal=512, cccid=4 00:22:57.870 [2024-04-25 03:23:32.261742] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15e8440) on tqpair(0x1588d00): expected_datao=0, payload_size=512 00:22:57.870 [2024-04-25 03:23:32.261749] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:57.870 [2024-04-25 03:23:32.261758] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:57.870 [2024-04-25 03:23:32.261765] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:57.870 [2024-04-25 03:23:32.261773] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:57.870 [2024-04-25 03:23:32.261782] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:57.870 [2024-04-25 03:23:32.261788] 
nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:57.870 [2024-04-25 03:23:32.261795] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1588d00): datao=0, datal=512, cccid=6 00:22:57.870 [2024-04-25 03:23:32.261802] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15e8700) on tqpair(0x1588d00): expected_datao=0, payload_size=512 00:22:57.870 [2024-04-25 03:23:32.261809] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:57.870 [2024-04-25 03:23:32.261818] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:57.870 [2024-04-25 03:23:32.261825] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:57.870 [2024-04-25 03:23:32.261834] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:57.870 [2024-04-25 03:23:32.261842] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:57.870 [2024-04-25 03:23:32.261849] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:57.870 [2024-04-25 03:23:32.261855] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1588d00): datao=0, datal=4096, cccid=7 00:22:57.870 [2024-04-25 03:23:32.261862] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15e8860) on tqpair(0x1588d00): expected_datao=0, payload_size=4096 00:22:57.870 [2024-04-25 03:23:32.261870] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:57.870 [2024-04-25 03:23:32.261879] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:57.870 [2024-04-25 03:23:32.261886] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:57.870 [2024-04-25 03:23:32.261898] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:57.870 [2024-04-25 03:23:32.261907] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:57.870 [2024-04-25 03:23:32.261914] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:22:57.870 [2024-04-25 03:23:32.261920] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15e85a0) on tqpair=0x1588d00 00:22:57.870 [2024-04-25 03:23:32.261945] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:57.870 [2024-04-25 03:23:32.261956] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:57.870 [2024-04-25 03:23:32.261963] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:57.870 [2024-04-25 03:23:32.261970] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15e8440) on tqpair=0x1588d00 00:22:57.870 [2024-04-25 03:23:32.261985] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:57.870 [2024-04-25 03:23:32.261996] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:57.870 [2024-04-25 03:23:32.262018] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:57.870 [2024-04-25 03:23:32.262025] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15e8700) on tqpair=0x1588d00 00:22:57.870 [2024-04-25 03:23:32.262037] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:57.870 [2024-04-25 03:23:32.262046] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:57.870 [2024-04-25 03:23:32.262052] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:57.870 [2024-04-25 03:23:32.262058] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15e8860) on tqpair=0x1588d00 00:22:57.870 ===================================================== 00:22:57.870 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:57.870 ===================================================== 00:22:57.870 Controller Capabilities/Features 00:22:57.870 ================================ 00:22:57.870 Vendor ID: 8086 00:22:57.870 Subsystem Vendor ID: 8086 00:22:57.870 Serial Number: SPDK00000000000001 00:22:57.870 Model Number: SPDK 
bdev Controller 00:22:57.870 Firmware Version: 24.05 00:22:57.870 Recommended Arb Burst: 6 00:22:57.870 IEEE OUI Identifier: e4 d2 5c 00:22:57.870 Multi-path I/O 00:22:57.870 May have multiple subsystem ports: Yes 00:22:57.870 May have multiple controllers: Yes 00:22:57.870 Associated with SR-IOV VF: No 00:22:57.870 Max Data Transfer Size: 131072 00:22:57.870 Max Number of Namespaces: 32 00:22:57.870 Max Number of I/O Queues: 127 00:22:57.870 NVMe Specification Version (VS): 1.3 00:22:57.870 NVMe Specification Version (Identify): 1.3 00:22:57.870 Maximum Queue Entries: 128 00:22:57.870 Contiguous Queues Required: Yes 00:22:57.870 Arbitration Mechanisms Supported 00:22:57.870 Weighted Round Robin: Not Supported 00:22:57.870 Vendor Specific: Not Supported 00:22:57.870 Reset Timeout: 15000 ms 00:22:57.870 Doorbell Stride: 4 bytes 00:22:57.870 NVM Subsystem Reset: Not Supported 00:22:57.870 Command Sets Supported 00:22:57.870 NVM Command Set: Supported 00:22:57.870 Boot Partition: Not Supported 00:22:57.870 Memory Page Size Minimum: 4096 bytes 00:22:57.870 Memory Page Size Maximum: 4096 bytes 00:22:57.870 Persistent Memory Region: Not Supported 00:22:57.870 Optional Asynchronous Events Supported 00:22:57.870 Namespace Attribute Notices: Supported 00:22:57.870 Firmware Activation Notices: Not Supported 00:22:57.870 ANA Change Notices: Not Supported 00:22:57.870 PLE Aggregate Log Change Notices: Not Supported 00:22:57.870 LBA Status Info Alert Notices: Not Supported 00:22:57.870 EGE Aggregate Log Change Notices: Not Supported 00:22:57.870 Normal NVM Subsystem Shutdown event: Not Supported 00:22:57.870 Zone Descriptor Change Notices: Not Supported 00:22:57.870 Discovery Log Change Notices: Not Supported 00:22:57.870 Controller Attributes 00:22:57.870 128-bit Host Identifier: Supported 00:22:57.870 Non-Operational Permissive Mode: Not Supported 00:22:57.870 NVM Sets: Not Supported 00:22:57.870 Read Recovery Levels: Not Supported 00:22:57.870 Endurance Groups: Not Supported 
00:22:57.870 Predictable Latency Mode: Not Supported 00:22:57.870 Traffic Based Keep ALive: Not Supported 00:22:57.870 Namespace Granularity: Not Supported 00:22:57.871 SQ Associations: Not Supported 00:22:57.871 UUID List: Not Supported 00:22:57.871 Multi-Domain Subsystem: Not Supported 00:22:57.871 Fixed Capacity Management: Not Supported 00:22:57.871 Variable Capacity Management: Not Supported 00:22:57.871 Delete Endurance Group: Not Supported 00:22:57.871 Delete NVM Set: Not Supported 00:22:57.871 Extended LBA Formats Supported: Not Supported 00:22:57.871 Flexible Data Placement Supported: Not Supported 00:22:57.871 00:22:57.871 Controller Memory Buffer Support 00:22:57.871 ================================ 00:22:57.871 Supported: No 00:22:57.871 00:22:57.871 Persistent Memory Region Support 00:22:57.871 ================================ 00:22:57.871 Supported: No 00:22:57.871 00:22:57.871 Admin Command Set Attributes 00:22:57.871 ============================ 00:22:57.871 Security Send/Receive: Not Supported 00:22:57.871 Format NVM: Not Supported 00:22:57.871 Firmware Activate/Download: Not Supported 00:22:57.871 Namespace Management: Not Supported 00:22:57.871 Device Self-Test: Not Supported 00:22:57.871 Directives: Not Supported 00:22:57.871 NVMe-MI: Not Supported 00:22:57.871 Virtualization Management: Not Supported 00:22:57.871 Doorbell Buffer Config: Not Supported 00:22:57.871 Get LBA Status Capability: Not Supported 00:22:57.871 Command & Feature Lockdown Capability: Not Supported 00:22:57.871 Abort Command Limit: 4 00:22:57.871 Async Event Request Limit: 4 00:22:57.871 Number of Firmware Slots: N/A 00:22:57.871 Firmware Slot 1 Read-Only: N/A 00:22:57.871 Firmware Activation Without Reset: N/A 00:22:57.871 Multiple Update Detection Support: N/A 00:22:57.871 Firmware Update Granularity: No Information Provided 00:22:57.871 Per-Namespace SMART Log: No 00:22:57.871 Asymmetric Namespace Access Log Page: Not Supported 00:22:57.871 Subsystem NQN: 
nqn.2016-06.io.spdk:cnode1 00:22:57.871 Command Effects Log Page: Supported 00:22:57.871 Get Log Page Extended Data: Supported 00:22:57.871 Telemetry Log Pages: Not Supported 00:22:57.871 Persistent Event Log Pages: Not Supported 00:22:57.871 Supported Log Pages Log Page: May Support 00:22:57.871 Commands Supported & Effects Log Page: Not Supported 00:22:57.871 Feature Identifiers & Effects Log Page:May Support 00:22:57.871 NVMe-MI Commands & Effects Log Page: May Support 00:22:57.871 Data Area 4 for Telemetry Log: Not Supported 00:22:57.871 Error Log Page Entries Supported: 128 00:22:57.871 Keep Alive: Supported 00:22:57.871 Keep Alive Granularity: 10000 ms 00:22:57.871 00:22:57.871 NVM Command Set Attributes 00:22:57.871 ========================== 00:22:57.871 Submission Queue Entry Size 00:22:57.871 Max: 64 00:22:57.871 Min: 64 00:22:57.871 Completion Queue Entry Size 00:22:57.871 Max: 16 00:22:57.871 Min: 16 00:22:57.871 Number of Namespaces: 32 00:22:57.871 Compare Command: Supported 00:22:57.871 Write Uncorrectable Command: Not Supported 00:22:57.871 Dataset Management Command: Supported 00:22:57.871 Write Zeroes Command: Supported 00:22:57.871 Set Features Save Field: Not Supported 00:22:57.871 Reservations: Supported 00:22:57.871 Timestamp: Not Supported 00:22:57.871 Copy: Supported 00:22:57.871 Volatile Write Cache: Present 00:22:57.871 Atomic Write Unit (Normal): 1 00:22:57.871 Atomic Write Unit (PFail): 1 00:22:57.871 Atomic Compare & Write Unit: 1 00:22:57.871 Fused Compare & Write: Supported 00:22:57.871 Scatter-Gather List 00:22:57.871 SGL Command Set: Supported 00:22:57.871 SGL Keyed: Supported 00:22:57.871 SGL Bit Bucket Descriptor: Not Supported 00:22:57.871 SGL Metadata Pointer: Not Supported 00:22:57.871 Oversized SGL: Not Supported 00:22:57.871 SGL Metadata Address: Not Supported 00:22:57.871 SGL Offset: Supported 00:22:57.871 Transport SGL Data Block: Not Supported 00:22:57.871 Replay Protected Memory Block: Not Supported 00:22:57.871 
00:22:57.871 Firmware Slot Information 00:22:57.871 ========================= 00:22:57.871 Active slot: 1 00:22:57.871 Slot 1 Firmware Revision: 24.05 00:22:57.871 00:22:57.871 00:22:57.871 Commands Supported and Effects 00:22:57.871 ============================== 00:22:57.871 Admin Commands 00:22:57.871 -------------- 00:22:57.871 Get Log Page (02h): Supported 00:22:57.871 Identify (06h): Supported 00:22:57.871 Abort (08h): Supported 00:22:57.871 Set Features (09h): Supported 00:22:57.871 Get Features (0Ah): Supported 00:22:57.871 Asynchronous Event Request (0Ch): Supported 00:22:57.871 Keep Alive (18h): Supported 00:22:57.871 I/O Commands 00:22:57.871 ------------ 00:22:57.871 Flush (00h): Supported LBA-Change 00:22:57.871 Write (01h): Supported LBA-Change 00:22:57.871 Read (02h): Supported 00:22:57.871 Compare (05h): Supported 00:22:57.871 Write Zeroes (08h): Supported LBA-Change 00:22:57.871 Dataset Management (09h): Supported LBA-Change 00:22:57.871 Copy (19h): Supported LBA-Change 00:22:57.871 Unknown (79h): Supported LBA-Change 00:22:57.871 Unknown (7Ah): Supported 00:22:57.871 00:22:57.871 Error Log 00:22:57.871 ========= 00:22:57.871 00:22:57.871 Arbitration 00:22:57.871 =========== 00:22:57.871 Arbitration Burst: 1 00:22:57.871 00:22:57.871 Power Management 00:22:57.871 ================ 00:22:57.871 Number of Power States: 1 00:22:57.871 Current Power State: Power State #0 00:22:57.871 Power State #0: 00:22:57.871 Max Power: 0.00 W 00:22:57.871 Non-Operational State: Operational 00:22:57.871 Entry Latency: Not Reported 00:22:57.871 Exit Latency: Not Reported 00:22:57.871 Relative Read Throughput: 0 00:22:57.871 Relative Read Latency: 0 00:22:57.871 Relative Write Throughput: 0 00:22:57.871 Relative Write Latency: 0 00:22:57.871 Idle Power: Not Reported 00:22:57.871 Active Power: Not Reported 00:22:57.871 Non-Operational Permissive Mode: Not Supported 00:22:57.871 00:22:57.871 Health Information 00:22:57.871 ================== 00:22:57.871 Critical 
Warnings: 00:22:57.871 Available Spare Space: OK 00:22:57.871 Temperature: OK 00:22:57.871 Device Reliability: OK 00:22:57.871 Read Only: No 00:22:57.871 Volatile Memory Backup: OK 00:22:57.871 Current Temperature: 0 Kelvin (-273 Celsius) 00:22:57.871 Temperature Threshold: [2024-04-25 03:23:32.262181] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:57.871 [2024-04-25 03:23:32.262193] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1588d00) 00:22:57.871 [2024-04-25 03:23:32.262204] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.871 [2024-04-25 03:23:32.262225] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15e8860, cid 7, qid 0 00:22:57.871 [2024-04-25 03:23:32.262437] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:57.871 [2024-04-25 03:23:32.262453] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:57.871 [2024-04-25 03:23:32.262460] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:57.871 [2024-04-25 03:23:32.262467] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15e8860) on tqpair=0x1588d00 00:22:57.871 [2024-04-25 03:23:32.262512] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:22:57.871 [2024-04-25 03:23:32.262534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.871 [2024-04-25 03:23:32.262546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.871 [2024-04-25 03:23:32.262556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.871 [2024-04-25 03:23:32.262566] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.871 [2024-04-25 03:23:32.262578] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:57.871 [2024-04-25 03:23:32.262587] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:57.871 [2024-04-25 03:23:32.262593] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1588d00) 00:22:57.871 [2024-04-25 03:23:32.262604] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.871 [2024-04-25 03:23:32.262626] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15e82e0, cid 3, qid 0 00:22:57.871 [2024-04-25 03:23:32.266667] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:57.871 [2024-04-25 03:23:32.266678] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:57.871 [2024-04-25 03:23:32.266685] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:57.871 [2024-04-25 03:23:32.266691] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15e82e0) on tqpair=0x1588d00 00:22:57.871 [2024-04-25 03:23:32.266704] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:57.871 [2024-04-25 03:23:32.266716] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:57.871 [2024-04-25 03:23:32.266722] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1588d00) 00:22:57.871 [2024-04-25 03:23:32.266733] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.871 [2024-04-25 03:23:32.266775] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15e82e0, cid 3, qid 0 00:22:57.871 [2024-04-25 03:23:32.266990] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu 
type = 5 00:22:57.871 [2024-04-25 03:23:32.267002] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:57.871 [2024-04-25 03:23:32.267009] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:57.872 [2024-04-25 03:23:32.267016] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15e82e0) on tqpair=0x1588d00 00:22:57.872 [2024-04-25 03:23:32.267025] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:22:57.872 [2024-04-25 03:23:32.267033] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:22:57.872 [2024-04-25 03:23:32.267049] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:57.872 [2024-04-25 03:23:32.267058] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:57.872 [2024-04-25 03:23:32.267065] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1588d00) 00:22:57.872 [2024-04-25 03:23:32.267075] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.872 [2024-04-25 03:23:32.267095] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15e82e0, cid 3, qid 0 00:22:57.872 [2024-04-25 03:23:32.267289] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:57.872 [2024-04-25 03:23:32.267305] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:57.872 [2024-04-25 03:23:32.267311] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:57.872 [2024-04-25 03:23:32.267318] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15e82e0) on tqpair=0x1588d00 00:22:57.872 [2024-04-25 03:23:32.267336] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:57.872 [2024-04-25 03:23:32.267345] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
enter 00:22:57.872 [2024-04-25 03:23:32.267352] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1588d00) 00:22:57.872 [2024-04-25 03:23:32.267362] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.872 [2024-04-25 03:23:32.267383] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15e82e0, cid 3, qid 0 00:22:57.872 [2024-04-25 03:23:32.267552] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:57.872 [2024-04-25 03:23:32.267564] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:57.872 [2024-04-25 03:23:32.267571] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:57.872 [2024-04-25 03:23:32.267578] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15e82e0) on tqpair=0x1588d00 00:22:57.872 [2024-04-25 03:23:32.267595] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:57.872 [2024-04-25 03:23:32.267604] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:57.872 [2024-04-25 03:23:32.267611] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1588d00) 00:22:57.872 [2024-04-25 03:23:32.267621] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.872 [2024-04-25 03:23:32.267648] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15e82e0, cid 3, qid 0 00:22:57.872 [2024-04-25 03:23:32.271638] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:57.872 [2024-04-25 03:23:32.271654] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:57.872 [2024-04-25 03:23:32.271661] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:57.872 [2024-04-25 03:23:32.271668] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete 
tcp_req(0x15e82e0) on tqpair=0x1588d00 00:22:57.872 [2024-04-25 03:23:32.271705] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:57.872 [2024-04-25 03:23:32.271716] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:57.872 [2024-04-25 03:23:32.271723] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1588d00) 00:22:57.872 [2024-04-25 03:23:32.271734] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.872 [2024-04-25 03:23:32.271755] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15e82e0, cid 3, qid 0 00:22:57.872 [2024-04-25 03:23:32.271956] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:57.872 [2024-04-25 03:23:32.271972] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:57.872 [2024-04-25 03:23:32.271979] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:57.872 [2024-04-25 03:23:32.271985] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15e82e0) on tqpair=0x1588d00 00:22:57.872 [2024-04-25 03:23:32.272000] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds 00:22:57.872 0 Kelvin (-273 Celsius) 00:22:57.872 Available Spare: 0% 00:22:57.872 Available Spare Threshold: 0% 00:22:57.872 Life Percentage Used: 0% 00:22:57.872 Data Units Read: 0 00:22:57.872 Data Units Written: 0 00:22:57.872 Host Read Commands: 0 00:22:57.872 Host Write Commands: 0 00:22:57.872 Controller Busy Time: 0 minutes 00:22:57.872 Power Cycles: 0 00:22:57.872 Power On Hours: 0 hours 00:22:57.872 Unsafe Shutdowns: 0 00:22:57.872 Unrecoverable Media Errors: 0 00:22:57.872 Lifetime Error Log Entries: 0 00:22:57.872 Warning Temperature Time: 0 minutes 00:22:57.872 Critical Temperature Time: 0 minutes 00:22:57.872 00:22:57.872 Number of Queues 00:22:57.872 
================ 00:22:57.872 Number of I/O Submission Queues: 127 00:22:57.872 Number of I/O Completion Queues: 127 00:22:57.872 00:22:57.872 Active Namespaces 00:22:57.872 ================= 00:22:57.872 Namespace ID:1 00:22:57.872 Error Recovery Timeout: Unlimited 00:22:57.872 Command Set Identifier: NVM (00h) 00:22:57.872 Deallocate: Supported 00:22:57.872 Deallocated/Unwritten Error: Not Supported 00:22:57.872 Deallocated Read Value: Unknown 00:22:57.872 Deallocate in Write Zeroes: Not Supported 00:22:57.872 Deallocated Guard Field: 0xFFFF 00:22:57.872 Flush: Supported 00:22:57.872 Reservation: Supported 00:22:57.872 Namespace Sharing Capabilities: Multiple Controllers 00:22:57.872 Size (in LBAs): 131072 (0GiB) 00:22:57.872 Capacity (in LBAs): 131072 (0GiB) 00:22:57.872 Utilization (in LBAs): 131072 (0GiB) 00:22:57.872 NGUID: ABCDEF0123456789ABCDEF0123456789 00:22:57.872 EUI64: ABCDEF0123456789 00:22:57.872 UUID: 8eaf30a4-160d-4cce-8e7c-6b5f31a0a4a4 00:22:57.872 Thin Provisioning: Not Supported 00:22:57.872 Per-NS Atomic Units: Yes 00:22:57.872 Atomic Boundary Size (Normal): 0 00:22:57.872 Atomic Boundary Size (PFail): 0 00:22:57.872 Atomic Boundary Offset: 0 00:22:57.872 Maximum Single Source Range Length: 65535 00:22:57.872 Maximum Copy Length: 65535 00:22:57.872 Maximum Source Range Count: 1 00:22:57.872 NGUID/EUI64 Never Reused: No 00:22:57.872 Namespace Write Protected: No 00:22:57.872 Number of LBA Formats: 1 00:22:57.872 Current LBA Format: LBA Format #00 00:22:57.872 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:57.872 00:22:57.872 03:23:32 -- host/identify.sh@51 -- # sync 00:22:57.872 03:23:32 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:57.872 03:23:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:57.872 03:23:32 -- common/autotest_common.sh@10 -- # set +x 00:22:57.872 03:23:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:57.872 03:23:32 -- host/identify.sh@54 -- # trap - 
SIGINT SIGTERM EXIT 00:22:57.872 03:23:32 -- host/identify.sh@56 -- # nvmftestfini 00:22:57.872 03:23:32 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:57.872 03:23:32 -- nvmf/common.sh@117 -- # sync 00:22:57.872 03:23:32 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:57.872 03:23:32 -- nvmf/common.sh@120 -- # set +e 00:22:57.872 03:23:32 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:57.872 03:23:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:57.872 rmmod nvme_tcp 00:22:57.872 rmmod nvme_fabrics 00:22:57.872 rmmod nvme_keyring 00:22:57.872 03:23:32 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:57.872 03:23:32 -- nvmf/common.sh@124 -- # set -e 00:22:57.872 03:23:32 -- nvmf/common.sh@125 -- # return 0 00:22:57.872 03:23:32 -- nvmf/common.sh@478 -- # '[' -n 1566770 ']' 00:22:57.872 03:23:32 -- nvmf/common.sh@479 -- # killprocess 1566770 00:22:57.872 03:23:32 -- common/autotest_common.sh@936 -- # '[' -z 1566770 ']' 00:22:57.872 03:23:32 -- common/autotest_common.sh@940 -- # kill -0 1566770 00:22:57.872 03:23:32 -- common/autotest_common.sh@941 -- # uname 00:22:57.872 03:23:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:57.872 03:23:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1566770 00:22:58.129 03:23:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:58.129 03:23:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:58.129 03:23:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1566770' 00:22:58.129 killing process with pid 1566770 00:22:58.129 03:23:32 -- common/autotest_common.sh@955 -- # kill 1566770 00:22:58.129 [2024-04-25 03:23:32.376121] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:22:58.129 03:23:32 -- common/autotest_common.sh@960 -- # wait 1566770 00:22:58.387 03:23:32 -- 
nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:58.387 03:23:32 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:58.387 03:23:32 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:58.387 03:23:32 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:58.387 03:23:32 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:58.387 03:23:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:58.387 03:23:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:58.387 03:23:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:00.285 03:23:34 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:00.285 00:23:00.285 real 0m5.406s 00:23:00.285 user 0m4.531s 00:23:00.285 sys 0m1.838s 00:23:00.285 03:23:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:00.285 03:23:34 -- common/autotest_common.sh@10 -- # set +x 00:23:00.285 ************************************ 00:23:00.285 END TEST nvmf_identify 00:23:00.285 ************************************ 00:23:00.285 03:23:34 -- nvmf/nvmf.sh@96 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:00.285 03:23:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:00.285 03:23:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:00.285 03:23:34 -- common/autotest_common.sh@10 -- # set +x 00:23:00.543 ************************************ 00:23:00.543 START TEST nvmf_perf 00:23:00.543 ************************************ 00:23:00.543 03:23:34 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:00.543 * Looking for test storage... 
00:23:00.543 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:00.543 03:23:34 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:00.543 03:23:34 -- nvmf/common.sh@7 -- # uname -s 00:23:00.543 03:23:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:00.543 03:23:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:00.543 03:23:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:00.543 03:23:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:00.543 03:23:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:00.543 03:23:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:00.543 03:23:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:00.543 03:23:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:00.543 03:23:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:00.543 03:23:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:00.543 03:23:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:00.543 03:23:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:00.543 03:23:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:00.543 03:23:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:00.543 03:23:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:00.543 03:23:34 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:00.543 03:23:34 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:00.543 03:23:34 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:00.543 03:23:34 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:00.543 03:23:34 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:00.543 03:23:34 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.543 03:23:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.543 03:23:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.543 03:23:34 -- paths/export.sh@5 -- # export PATH 00:23:00.543 03:23:34 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.543 03:23:34 -- nvmf/common.sh@47 -- # : 0 00:23:00.543 03:23:34 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:00.543 03:23:34 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:00.543 03:23:34 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:00.543 03:23:34 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:00.543 03:23:34 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:00.543 03:23:34 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:00.543 03:23:34 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:00.543 03:23:34 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:00.543 03:23:34 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:00.543 03:23:34 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:00.543 03:23:34 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:00.543 03:23:34 -- host/perf.sh@17 -- # nvmftestinit 00:23:00.543 03:23:34 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:00.543 03:23:34 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:00.543 03:23:34 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:00.543 03:23:34 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:00.543 03:23:34 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:00.543 03:23:34 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:00.543 03:23:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:23:00.543 03:23:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:00.543 03:23:34 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:23:00.543 03:23:34 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:23:00.543 03:23:34 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:00.543 03:23:34 -- common/autotest_common.sh@10 -- # set +x 00:23:02.445 03:23:36 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:02.445 03:23:36 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:02.445 03:23:36 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:02.445 03:23:36 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:02.445 03:23:36 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:02.445 03:23:36 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:02.445 03:23:36 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:02.445 03:23:36 -- nvmf/common.sh@295 -- # net_devs=() 00:23:02.445 03:23:36 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:02.445 03:23:36 -- nvmf/common.sh@296 -- # e810=() 00:23:02.445 03:23:36 -- nvmf/common.sh@296 -- # local -ga e810 00:23:02.445 03:23:36 -- nvmf/common.sh@297 -- # x722=() 00:23:02.445 03:23:36 -- nvmf/common.sh@297 -- # local -ga x722 00:23:02.445 03:23:36 -- nvmf/common.sh@298 -- # mlx=() 00:23:02.445 03:23:36 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:02.445 03:23:36 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:02.445 03:23:36 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:02.445 03:23:36 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:02.445 03:23:36 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:02.445 03:23:36 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:02.445 03:23:36 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:02.445 03:23:36 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:02.445 03:23:36 -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:02.445 03:23:36 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:02.445 03:23:36 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:02.445 03:23:36 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:02.445 03:23:36 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:02.445 03:23:36 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:02.445 03:23:36 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:02.445 03:23:36 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:02.445 03:23:36 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:02.445 03:23:36 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:02.445 03:23:36 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:02.445 03:23:36 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:02.445 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:02.445 03:23:36 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:02.445 03:23:36 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:02.445 03:23:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:02.445 03:23:36 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:02.445 03:23:36 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:02.445 03:23:36 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:02.445 03:23:36 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:02.445 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:02.445 03:23:36 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:02.445 03:23:36 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:02.445 03:23:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:02.445 03:23:36 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:02.445 03:23:36 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:02.445 03:23:36 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:02.445 
03:23:36 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:02.445 03:23:36 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:02.445 03:23:36 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:02.446 03:23:36 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:02.446 03:23:36 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:02.446 03:23:36 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:02.446 03:23:36 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:02.446 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:02.446 03:23:36 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:02.446 03:23:36 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:02.446 03:23:36 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:02.446 03:23:36 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:02.446 03:23:36 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:02.446 03:23:36 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:02.446 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:02.446 03:23:36 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:02.446 03:23:36 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:23:02.446 03:23:36 -- nvmf/common.sh@403 -- # is_hw=yes 00:23:02.446 03:23:36 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:23:02.446 03:23:36 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:23:02.446 03:23:36 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:23:02.446 03:23:36 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:02.446 03:23:36 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:02.446 03:23:36 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:02.446 03:23:36 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:02.446 03:23:36 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:02.446 03:23:36 -- 
nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:02.446 03:23:36 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:02.446 03:23:36 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:02.446 03:23:36 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:02.446 03:23:36 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:02.446 03:23:36 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:02.446 03:23:36 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:02.446 03:23:36 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:02.446 03:23:36 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:02.446 03:23:36 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:02.446 03:23:36 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:02.446 03:23:36 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:02.705 03:23:36 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:02.705 03:23:36 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:02.705 03:23:37 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:02.705 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:02.705 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.273 ms 00:23:02.705 00:23:02.705 --- 10.0.0.2 ping statistics --- 00:23:02.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:02.705 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:23:02.705 03:23:37 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:02.705 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:02.705 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:23:02.705 00:23:02.705 --- 10.0.0.1 ping statistics --- 00:23:02.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:02.705 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:23:02.705 03:23:37 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:02.705 03:23:37 -- nvmf/common.sh@411 -- # return 0 00:23:02.705 03:23:37 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:02.705 03:23:37 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:02.705 03:23:37 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:02.705 03:23:37 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:02.705 03:23:37 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:02.705 03:23:37 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:02.705 03:23:37 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:02.705 03:23:37 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:23:02.705 03:23:37 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:02.705 03:23:37 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:02.705 03:23:37 -- common/autotest_common.sh@10 -- # set +x 00:23:02.705 03:23:37 -- nvmf/common.sh@470 -- # nvmfpid=1568854 00:23:02.705 03:23:37 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:02.705 03:23:37 -- nvmf/common.sh@471 -- # waitforlisten 1568854 00:23:02.705 03:23:37 -- common/autotest_common.sh@817 -- # '[' -z 1568854 ']' 00:23:02.705 03:23:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:02.705 03:23:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:02.705 03:23:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:23:02.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:02.705 03:23:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:02.705 03:23:37 -- common/autotest_common.sh@10 -- # set +x 00:23:02.705 [2024-04-25 03:23:37.082843] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:23:02.705 [2024-04-25 03:23:37.082930] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:02.705 EAL: No free 2048 kB hugepages reported on node 1 00:23:02.705 [2024-04-25 03:23:37.147781] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:02.964 [2024-04-25 03:23:37.253703] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:02.964 [2024-04-25 03:23:37.253753] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:02.964 [2024-04-25 03:23:37.253767] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:02.964 [2024-04-25 03:23:37.253778] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:02.964 [2024-04-25 03:23:37.253788] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:02.964 [2024-04-25 03:23:37.253853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:02.964 [2024-04-25 03:23:37.253881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:02.964 [2024-04-25 03:23:37.253941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:02.964 [2024-04-25 03:23:37.253943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:03.529 03:23:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:03.529 03:23:38 -- common/autotest_common.sh@850 -- # return 0 00:23:03.529 03:23:38 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:03.529 03:23:38 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:03.529 03:23:38 -- common/autotest_common.sh@10 -- # set +x 00:23:03.799 03:23:38 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:03.799 03:23:38 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:23:03.799 03:23:38 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:23:07.118 03:23:41 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:23:07.118 03:23:41 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:23:07.118 03:23:41 -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:23:07.118 03:23:41 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:07.376 03:23:41 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:23:07.376 03:23:41 -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:23:07.376 03:23:41 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:23:07.376 03:23:41 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:23:07.376 03:23:41 -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_transport -t tcp -o 00:23:07.634 [2024-04-25 03:23:41.954110] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:07.634 03:23:41 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:07.891 03:23:42 -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:07.891 03:23:42 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:08.148 03:23:42 -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:08.148 03:23:42 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:23:08.406 03:23:42 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:08.662 [2024-04-25 03:23:42.945733] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:08.662 03:23:42 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:08.920 03:23:43 -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:23:08.920 03:23:43 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:23:08.920 03:23:43 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:23:08.920 03:23:43 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:23:10.290 Initializing NVMe Controllers 00:23:10.290 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:23:10.290 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:23:10.290 Initialization complete. Launching workers. 
00:23:10.290 ======================================================== 00:23:10.290 Latency(us) 00:23:10.290 Device Information : IOPS MiB/s Average min max 00:23:10.290 PCIE (0000:88:00.0) NSID 1 from core 0: 84795.18 331.23 376.78 42.66 5249.78 00:23:10.290 ======================================================== 00:23:10.290 Total : 84795.18 331.23 376.78 42.66 5249.78 00:23:10.290 00:23:10.290 03:23:44 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:10.290 EAL: No free 2048 kB hugepages reported on node 1 00:23:11.223 Initializing NVMe Controllers 00:23:11.223 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:11.223 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:11.223 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:11.223 Initialization complete. Launching workers. 
00:23:11.223 ======================================================== 00:23:11.223 Latency(us) 00:23:11.223 Device Information : IOPS MiB/s Average min max 00:23:11.223 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 79.00 0.31 13133.80 231.03 48694.67 00:23:11.223 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 53.00 0.21 19705.38 5932.36 59856.67 00:23:11.223 ======================================================== 00:23:11.223 Total : 132.00 0.52 15772.39 231.03 59856.67 00:23:11.223 00:23:11.223 03:23:45 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:11.223 EAL: No free 2048 kB hugepages reported on node 1 00:23:12.598 Initializing NVMe Controllers 00:23:12.598 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:12.598 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:12.598 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:12.598 Initialization complete. Launching workers. 
00:23:12.598 ======================================================== 00:23:12.598 Latency(us) 00:23:12.598 Device Information : IOPS MiB/s Average min max 00:23:12.598 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8112.75 31.69 3944.97 618.85 7988.88 00:23:12.598 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3905.29 15.26 8218.61 5270.36 15856.03 00:23:12.598 ======================================================== 00:23:12.598 Total : 12018.03 46.95 5333.70 618.85 15856.03 00:23:12.598 00:23:12.598 03:23:47 -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:23:12.598 03:23:47 -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:23:12.598 03:23:47 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:12.857 EAL: No free 2048 kB hugepages reported on node 1 00:23:15.387 Initializing NVMe Controllers 00:23:15.387 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:15.387 Controller IO queue size 128, less than required. 00:23:15.387 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:15.387 Controller IO queue size 128, less than required. 00:23:15.387 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:15.387 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:15.387 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:15.387 Initialization complete. Launching workers. 
00:23:15.387 ======================================================== 00:23:15.387 Latency(us) 00:23:15.388 Device Information : IOPS MiB/s Average min max 00:23:15.388 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 753.60 188.40 174101.42 88470.14 248228.04 00:23:15.388 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 613.86 153.46 219391.86 85720.96 316552.37 00:23:15.388 ======================================================== 00:23:15.388 Total : 1367.45 341.86 194432.53 85720.96 316552.37 00:23:15.388 00:23:15.388 03:23:49 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:23:15.388 EAL: No free 2048 kB hugepages reported on node 1 00:23:15.388 No valid NVMe controllers or AIO or URING devices found 00:23:15.388 Initializing NVMe Controllers 00:23:15.388 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:15.388 Controller IO queue size 128, less than required. 00:23:15.388 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:15.388 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:23:15.388 Controller IO queue size 128, less than required. 00:23:15.388 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:15.388 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:23:15.388 WARNING: Some requested NVMe devices were skipped 00:23:15.388 03:23:49 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:23:15.388 EAL: No free 2048 kB hugepages reported on node 1 00:23:17.919 Initializing NVMe Controllers 00:23:17.919 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:17.919 Controller IO queue size 128, less than required. 00:23:17.919 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:17.919 Controller IO queue size 128, less than required. 00:23:17.919 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:17.919 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:17.919 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:17.919 Initialization complete. Launching workers. 
00:23:17.919 00:23:17.919 ==================== 00:23:17.919 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:23:17.919 TCP transport: 00:23:17.919 polls: 34580 00:23:17.919 idle_polls: 11280 00:23:17.919 sock_completions: 23300 00:23:17.919 nvme_completions: 3189 00:23:17.919 submitted_requests: 4760 00:23:17.919 queued_requests: 1 00:23:17.919 00:23:17.919 ==================== 00:23:17.919 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:23:17.919 TCP transport: 00:23:17.919 polls: 38015 00:23:17.919 idle_polls: 13385 00:23:17.919 sock_completions: 24630 00:23:17.919 nvme_completions: 3375 00:23:17.919 submitted_requests: 5076 00:23:17.919 queued_requests: 1 00:23:17.919 ======================================================== 00:23:17.919 Latency(us) 00:23:17.919 Device Information : IOPS MiB/s Average min max 00:23:17.919 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 796.85 199.21 166317.95 104398.39 277460.47 00:23:17.919 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 843.34 210.83 156732.48 73823.92 224432.59 00:23:17.919 ======================================================== 00:23:17.919 Total : 1640.18 410.05 161389.36 73823.92 277460.47 00:23:17.919 00:23:17.919 03:23:52 -- host/perf.sh@66 -- # sync 00:23:17.919 03:23:52 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:18.179 03:23:52 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:23:18.179 03:23:52 -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:23:18.179 03:23:52 -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:23:21.473 03:23:55 -- host/perf.sh@72 -- # ls_guid=d50b7a3c-8d06-4b00-aa04-15b061575fd7 00:23:21.473 03:23:55 -- host/perf.sh@73 -- # get_lvs_free_mb d50b7a3c-8d06-4b00-aa04-15b061575fd7 
00:23:21.473 03:23:55 -- common/autotest_common.sh@1350 -- # local lvs_uuid=d50b7a3c-8d06-4b00-aa04-15b061575fd7 00:23:21.473 03:23:55 -- common/autotest_common.sh@1351 -- # local lvs_info 00:23:21.473 03:23:55 -- common/autotest_common.sh@1352 -- # local fc 00:23:21.473 03:23:55 -- common/autotest_common.sh@1353 -- # local cs 00:23:21.473 03:23:55 -- common/autotest_common.sh@1354 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:21.756 03:23:56 -- common/autotest_common.sh@1354 -- # lvs_info='[ 00:23:21.756 { 00:23:21.756 "uuid": "d50b7a3c-8d06-4b00-aa04-15b061575fd7", 00:23:21.756 "name": "lvs_0", 00:23:21.756 "base_bdev": "Nvme0n1", 00:23:21.756 "total_data_clusters": 238234, 00:23:21.756 "free_clusters": 238234, 00:23:21.756 "block_size": 512, 00:23:21.756 "cluster_size": 4194304 00:23:21.757 } 00:23:21.757 ]' 00:23:21.757 03:23:56 -- common/autotest_common.sh@1355 -- # jq '.[] | select(.uuid=="d50b7a3c-8d06-4b00-aa04-15b061575fd7") .free_clusters' 00:23:21.757 03:23:56 -- common/autotest_common.sh@1355 -- # fc=238234 00:23:21.757 03:23:56 -- common/autotest_common.sh@1356 -- # jq '.[] | select(.uuid=="d50b7a3c-8d06-4b00-aa04-15b061575fd7") .cluster_size' 00:23:21.757 03:23:56 -- common/autotest_common.sh@1356 -- # cs=4194304 00:23:21.757 03:23:56 -- common/autotest_common.sh@1359 -- # free_mb=952936 00:23:21.757 03:23:56 -- common/autotest_common.sh@1360 -- # echo 952936 00:23:21.757 952936 00:23:21.757 03:23:56 -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:23:21.757 03:23:56 -- host/perf.sh@78 -- # free_mb=20480 00:23:21.757 03:23:56 -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d50b7a3c-8d06-4b00-aa04-15b061575fd7 lbd_0 20480 00:23:22.323 03:23:56 -- host/perf.sh@80 -- # lb_guid=64e8646f-dee6-4c3a-92c8-e15dc6d25af1 00:23:22.323 03:23:56 -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore 64e8646f-dee6-4c3a-92c8-e15dc6d25af1 lvs_n_0 00:23:23.258 03:23:57 -- host/perf.sh@83 -- # ls_nested_guid=673156bc-8eda-438a-b11d-5ee5a2cb9168 00:23:23.258 03:23:57 -- host/perf.sh@84 -- # get_lvs_free_mb 673156bc-8eda-438a-b11d-5ee5a2cb9168 00:23:23.258 03:23:57 -- common/autotest_common.sh@1350 -- # local lvs_uuid=673156bc-8eda-438a-b11d-5ee5a2cb9168 00:23:23.258 03:23:57 -- common/autotest_common.sh@1351 -- # local lvs_info 00:23:23.258 03:23:57 -- common/autotest_common.sh@1352 -- # local fc 00:23:23.258 03:23:57 -- common/autotest_common.sh@1353 -- # local cs 00:23:23.258 03:23:57 -- common/autotest_common.sh@1354 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:23.515 03:23:57 -- common/autotest_common.sh@1354 -- # lvs_info='[ 00:23:23.515 { 00:23:23.515 "uuid": "d50b7a3c-8d06-4b00-aa04-15b061575fd7", 00:23:23.515 "name": "lvs_0", 00:23:23.515 "base_bdev": "Nvme0n1", 00:23:23.515 "total_data_clusters": 238234, 00:23:23.515 "free_clusters": 233114, 00:23:23.515 "block_size": 512, 00:23:23.515 "cluster_size": 4194304 00:23:23.515 }, 00:23:23.515 { 00:23:23.515 "uuid": "673156bc-8eda-438a-b11d-5ee5a2cb9168", 00:23:23.515 "name": "lvs_n_0", 00:23:23.515 "base_bdev": "64e8646f-dee6-4c3a-92c8-e15dc6d25af1", 00:23:23.515 "total_data_clusters": 5114, 00:23:23.515 "free_clusters": 5114, 00:23:23.515 "block_size": 512, 00:23:23.515 "cluster_size": 4194304 00:23:23.515 } 00:23:23.515 ]' 00:23:23.515 03:23:57 -- common/autotest_common.sh@1355 -- # jq '.[] | select(.uuid=="673156bc-8eda-438a-b11d-5ee5a2cb9168") .free_clusters' 00:23:23.515 03:23:57 -- common/autotest_common.sh@1355 -- # fc=5114 00:23:23.515 03:23:57 -- common/autotest_common.sh@1356 -- # jq '.[] | select(.uuid=="673156bc-8eda-438a-b11d-5ee5a2cb9168") .cluster_size' 00:23:23.515 03:23:57 -- common/autotest_common.sh@1356 -- # cs=4194304 00:23:23.515 03:23:57 -- common/autotest_common.sh@1359 -- # free_mb=20456 00:23:23.515 03:23:57 
-- common/autotest_common.sh@1360 -- # echo 20456 00:23:23.515 20456 00:23:23.515 03:23:57 -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:23:23.515 03:23:57 -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 673156bc-8eda-438a-b11d-5ee5a2cb9168 lbd_nest_0 20456 00:23:23.773 03:23:58 -- host/perf.sh@88 -- # lb_nested_guid=19cf6b7c-0230-4fd3-97ca-7b3ed63abd07 00:23:23.773 03:23:58 -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:24.031 03:23:58 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:23:24.031 03:23:58 -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 19cf6b7c-0230-4fd3-97ca-7b3ed63abd07 00:23:24.289 03:23:58 -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:24.548 03:23:58 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:23:24.548 03:23:58 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:23:24.548 03:23:58 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:23:24.548 03:23:58 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:23:24.548 03:23:58 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:24.548 EAL: No free 2048 kB hugepages reported on node 1 00:23:36.749 Initializing NVMe Controllers 00:23:36.749 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:36.749 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:36.749 Initialization complete. Launching workers. 
00:23:36.750 ======================================================== 00:23:36.750 Latency(us) 00:23:36.750 Device Information : IOPS MiB/s Average min max 00:23:36.750 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 45.00 0.02 22272.11 260.99 47700.90 00:23:36.750 ======================================================== 00:23:36.750 Total : 45.00 0.02 22272.11 260.99 47700.90 00:23:36.750 00:23:36.750 03:24:09 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:23:36.750 03:24:09 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:36.750 EAL: No free 2048 kB hugepages reported on node 1 00:23:46.737 Initializing NVMe Controllers 00:23:46.737 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:46.737 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:46.737 Initialization complete. Launching workers. 
00:23:46.737 ======================================================== 00:23:46.737 Latency(us) 00:23:46.737 Device Information : IOPS MiB/s Average min max 00:23:46.737 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 83.10 10.39 12051.75 5024.92 47899.79 00:23:46.737 ======================================================== 00:23:46.737 Total : 83.10 10.39 12051.75 5024.92 47899.79 00:23:46.737 00:23:46.737 03:24:19 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:23:46.737 03:24:19 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:23:46.737 03:24:19 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:46.737 EAL: No free 2048 kB hugepages reported on node 1 00:23:56.766 Initializing NVMe Controllers 00:23:56.766 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:56.766 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:56.766 Initialization complete. Launching workers. 
00:23:56.766 ======================================================== 00:23:56.766 Latency(us) 00:23:56.766 Device Information : IOPS MiB/s Average min max 00:23:56.766 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6727.48 3.28 4755.92 319.86 12166.00 00:23:56.766 ======================================================== 00:23:56.766 Total : 6727.48 3.28 4755.92 319.86 12166.00 00:23:56.766 00:23:56.766 03:24:29 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:23:56.766 03:24:29 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:56.766 EAL: No free 2048 kB hugepages reported on node 1 00:24:06.763 Initializing NVMe Controllers 00:24:06.763 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:06.763 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:06.763 Initialization complete. Launching workers. 
00:24:06.763 ======================================================== 00:24:06.763 Latency(us) 00:24:06.763 Device Information : IOPS MiB/s Average min max 00:24:06.763 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1522.20 190.27 21065.68 1712.25 45876.98 00:24:06.763 ======================================================== 00:24:06.764 Total : 1522.20 190.27 21065.68 1712.25 45876.98 00:24:06.764 00:24:06.764 03:24:40 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:24:06.764 03:24:40 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:24:06.764 03:24:40 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:06.764 EAL: No free 2048 kB hugepages reported on node 1 00:24:16.741 Initializing NVMe Controllers 00:24:16.742 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:16.742 Controller IO queue size 128, less than required. 00:24:16.742 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:16.742 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:16.742 Initialization complete. Launching workers. 
00:24:16.742 ======================================================== 00:24:16.742 Latency(us) 00:24:16.742 Device Information : IOPS MiB/s Average min max 00:24:16.742 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11859.81 5.79 10792.43 1703.32 25312.69 00:24:16.742 ======================================================== 00:24:16.742 Total : 11859.81 5.79 10792.43 1703.32 25312.69 00:24:16.742 00:24:16.742 03:24:50 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:24:16.742 03:24:50 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:16.742 EAL: No free 2048 kB hugepages reported on node 1 00:24:26.774 Initializing NVMe Controllers 00:24:26.774 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:26.774 Controller IO queue size 128, less than required. 00:24:26.774 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:26.774 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:26.774 Initialization complete. Launching workers. 
00:24:26.774 ======================================================== 00:24:26.774 Latency(us) 00:24:26.774 Device Information : IOPS MiB/s Average min max 00:24:26.774 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1222.59 152.82 105435.39 22095.24 215376.19 00:24:26.774 ======================================================== 00:24:26.774 Total : 1222.59 152.82 105435.39 22095.24 215376.19 00:24:26.774 00:24:26.774 03:25:01 -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:27.032 03:25:01 -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 19cf6b7c-0230-4fd3-97ca-7b3ed63abd07 00:24:27.601 03:25:02 -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:24:28.168 03:25:02 -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 64e8646f-dee6-4c3a-92c8-e15dc6d25af1 00:24:28.433 03:25:02 -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:24:28.721 03:25:02 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:24:28.721 03:25:02 -- host/perf.sh@114 -- # nvmftestfini 00:24:28.721 03:25:02 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:28.721 03:25:02 -- nvmf/common.sh@117 -- # sync 00:24:28.721 03:25:02 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:28.721 03:25:02 -- nvmf/common.sh@120 -- # set +e 00:24:28.721 03:25:02 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:28.721 03:25:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:28.721 rmmod nvme_tcp 00:24:28.721 rmmod nvme_fabrics 00:24:28.721 rmmod nvme_keyring 00:24:28.721 03:25:02 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:28.721 03:25:03 -- nvmf/common.sh@124 -- # set -e 00:24:28.721 03:25:03 -- 
nvmf/common.sh@125 -- # return 0 00:24:28.721 03:25:03 -- nvmf/common.sh@478 -- # '[' -n 1568854 ']' 00:24:28.721 03:25:03 -- nvmf/common.sh@479 -- # killprocess 1568854 00:24:28.721 03:25:03 -- common/autotest_common.sh@936 -- # '[' -z 1568854 ']' 00:24:28.721 03:25:03 -- common/autotest_common.sh@940 -- # kill -0 1568854 00:24:28.721 03:25:03 -- common/autotest_common.sh@941 -- # uname 00:24:28.721 03:25:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:28.721 03:25:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1568854 00:24:28.721 03:25:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:28.721 03:25:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:28.721 03:25:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1568854' 00:24:28.721 killing process with pid 1568854 00:24:28.721 03:25:03 -- common/autotest_common.sh@955 -- # kill 1568854 00:24:28.721 03:25:03 -- common/autotest_common.sh@960 -- # wait 1568854 00:24:30.621 03:25:04 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:24:30.621 03:25:04 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:24:30.621 03:25:04 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:24:30.621 03:25:04 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:30.621 03:25:04 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:30.621 03:25:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:30.621 03:25:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:30.621 03:25:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:32.527 03:25:06 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:32.527 00:24:32.527 real 1m31.840s 00:24:32.527 user 5m40.646s 00:24:32.527 sys 0m14.723s 00:24:32.527 03:25:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:32.527 03:25:06 -- common/autotest_common.sh@10 -- # set +x 00:24:32.527 
************************************ 00:24:32.527 END TEST nvmf_perf 00:24:32.527 ************************************ 00:24:32.527 03:25:06 -- nvmf/nvmf.sh@97 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:32.527 03:25:06 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:32.527 03:25:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:32.527 03:25:06 -- common/autotest_common.sh@10 -- # set +x 00:24:32.527 ************************************ 00:24:32.527 START TEST nvmf_fio_host 00:24:32.527 ************************************ 00:24:32.527 03:25:06 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:32.527 * Looking for test storage... 00:24:32.527 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:32.527 03:25:06 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:32.527 03:25:06 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:32.528 03:25:06 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:32.528 03:25:06 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:32.528 03:25:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.528 03:25:06 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.528 03:25:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.528 03:25:06 -- paths/export.sh@5 -- # export PATH 00:24:32.528 03:25:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.528 03:25:06 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:32.528 03:25:06 -- nvmf/common.sh@7 -- # uname -s 00:24:32.528 03:25:06 -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:24:32.528 03:25:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:32.528 03:25:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:32.528 03:25:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:32.528 03:25:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:32.528 03:25:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:32.528 03:25:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:32.528 03:25:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:32.528 03:25:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:32.528 03:25:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:32.528 03:25:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:32.528 03:25:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:32.528 03:25:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:32.528 03:25:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:32.528 03:25:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:32.528 03:25:06 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:32.528 03:25:06 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:32.528 03:25:06 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:32.528 03:25:06 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:32.528 03:25:06 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:32.528 03:25:06 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.528 03:25:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.528 03:25:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.528 03:25:06 -- paths/export.sh@5 -- # export PATH 00:24:32.528 03:25:06 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.528 03:25:06 -- nvmf/common.sh@47 -- # : 0 00:24:32.528 03:25:06 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:32.528 03:25:06 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:32.528 03:25:06 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:32.528 03:25:06 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:32.528 03:25:06 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:32.528 03:25:06 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:32.528 03:25:06 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:32.528 03:25:06 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:32.528 03:25:06 -- host/fio.sh@12 -- # nvmftestinit 00:24:32.528 03:25:06 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:24:32.528 03:25:06 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:32.528 03:25:06 -- nvmf/common.sh@437 -- # prepare_net_devs 00:24:32.528 03:25:06 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:24:32.528 03:25:06 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:24:32.528 03:25:06 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:32.528 03:25:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:32.528 03:25:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:32.528 03:25:06 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:24:32.528 03:25:06 -- nvmf/common.sh@403 -- # 
gather_supported_nvmf_pci_devs 00:24:32.528 03:25:06 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:32.528 03:25:06 -- common/autotest_common.sh@10 -- # set +x 00:24:34.431 03:25:08 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:34.431 03:25:08 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:34.431 03:25:08 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:34.431 03:25:08 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:34.431 03:25:08 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:34.431 03:25:08 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:34.431 03:25:08 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:34.431 03:25:08 -- nvmf/common.sh@295 -- # net_devs=() 00:24:34.431 03:25:08 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:34.431 03:25:08 -- nvmf/common.sh@296 -- # e810=() 00:24:34.431 03:25:08 -- nvmf/common.sh@296 -- # local -ga e810 00:24:34.431 03:25:08 -- nvmf/common.sh@297 -- # x722=() 00:24:34.431 03:25:08 -- nvmf/common.sh@297 -- # local -ga x722 00:24:34.431 03:25:08 -- nvmf/common.sh@298 -- # mlx=() 00:24:34.431 03:25:08 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:34.431 03:25:08 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:34.431 03:25:08 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:34.431 03:25:08 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:34.431 03:25:08 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:34.431 03:25:08 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:34.431 03:25:08 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:34.431 03:25:08 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:34.431 03:25:08 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:34.431 03:25:08 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:34.431 03:25:08 -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:34.431 03:25:08 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:34.431 03:25:08 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:34.431 03:25:08 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:34.431 03:25:08 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:34.431 03:25:08 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:34.431 03:25:08 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:34.431 03:25:08 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:34.431 03:25:08 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:34.431 03:25:08 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:34.431 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:34.431 03:25:08 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:34.431 03:25:08 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:34.431 03:25:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:34.431 03:25:08 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:34.431 03:25:08 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:34.431 03:25:08 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:34.431 03:25:08 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:34.431 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:34.431 03:25:08 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:34.431 03:25:08 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:34.431 03:25:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:34.431 03:25:08 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:34.431 03:25:08 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:34.431 03:25:08 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:34.431 03:25:08 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:34.431 03:25:08 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:34.431 03:25:08 -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:24:34.431 03:25:08 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:34.431 03:25:08 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:34.431 03:25:08 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:34.431 03:25:08 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:34.431 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:34.431 03:25:08 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:34.431 03:25:08 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:34.431 03:25:08 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:34.431 03:25:08 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:34.431 03:25:08 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:34.431 03:25:08 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:34.431 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:34.431 03:25:08 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:34.431 03:25:08 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:24:34.431 03:25:08 -- nvmf/common.sh@403 -- # is_hw=yes 00:24:34.431 03:25:08 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:24:34.431 03:25:08 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:24:34.431 03:25:08 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:24:34.431 03:25:08 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:34.431 03:25:08 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:34.431 03:25:08 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:34.431 03:25:08 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:34.431 03:25:08 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:34.431 03:25:08 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:34.431 03:25:08 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:34.431 03:25:08 -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:34.431 03:25:08 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:34.431 03:25:08 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:34.431 03:25:08 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:34.431 03:25:08 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:34.431 03:25:08 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:34.431 03:25:08 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:34.431 03:25:08 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:34.431 03:25:08 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:34.431 03:25:08 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:34.431 03:25:08 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:34.431 03:25:08 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:34.690 03:25:08 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:34.690 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:34.690 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.123 ms 00:24:34.690 00:24:34.690 --- 10.0.0.2 ping statistics --- 00:24:34.690 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:34.690 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:24:34.690 03:25:08 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:34.690 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:34.690 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:24:34.690 00:24:34.690 --- 10.0.0.1 ping statistics --- 00:24:34.690 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:34.690 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:24:34.690 03:25:08 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:34.690 03:25:08 -- nvmf/common.sh@411 -- # return 0 00:24:34.690 03:25:08 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:24:34.690 03:25:08 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:34.690 03:25:08 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:24:34.690 03:25:08 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:24:34.690 03:25:08 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:34.690 03:25:08 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:24:34.690 03:25:08 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:24:34.690 03:25:08 -- host/fio.sh@14 -- # [[ y != y ]] 00:24:34.690 03:25:08 -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:24:34.690 03:25:08 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:34.690 03:25:08 -- common/autotest_common.sh@10 -- # set +x 00:24:34.690 03:25:08 -- host/fio.sh@22 -- # nvmfpid=1581591 00:24:34.690 03:25:08 -- host/fio.sh@21 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:34.690 03:25:08 -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:34.690 03:25:08 -- host/fio.sh@26 -- # waitforlisten 1581591 00:24:34.690 03:25:08 -- common/autotest_common.sh@817 -- # '[' -z 1581591 ']' 00:24:34.690 03:25:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:34.690 03:25:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:34.690 03:25:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:24:34.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:34.690 03:25:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:34.691 03:25:08 -- common/autotest_common.sh@10 -- # set +x 00:24:34.691 [2024-04-25 03:25:09.010293] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:24:34.691 [2024-04-25 03:25:09.010378] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:34.691 EAL: No free 2048 kB hugepages reported on node 1 00:24:34.691 [2024-04-25 03:25:09.086183] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:34.950 [2024-04-25 03:25:09.204752] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:34.950 [2024-04-25 03:25:09.204802] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:34.950 [2024-04-25 03:25:09.204830] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:34.950 [2024-04-25 03:25:09.204842] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:34.950 [2024-04-25 03:25:09.204852] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:34.950 [2024-04-25 03:25:09.204921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:34.950 [2024-04-25 03:25:09.204950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:34.950 [2024-04-25 03:25:09.205014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:34.950 [2024-04-25 03:25:09.205018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:34.950 03:25:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:34.950 03:25:09 -- common/autotest_common.sh@850 -- # return 0 00:24:34.950 03:25:09 -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:34.950 03:25:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:34.950 03:25:09 -- common/autotest_common.sh@10 -- # set +x 00:24:34.950 [2024-04-25 03:25:09.333400] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:34.950 03:25:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:34.950 03:25:09 -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:24:34.950 03:25:09 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:34.950 03:25:09 -- common/autotest_common.sh@10 -- # set +x 00:24:34.950 03:25:09 -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:34.950 03:25:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:34.950 03:25:09 -- common/autotest_common.sh@10 -- # set +x 00:24:34.950 Malloc1 00:24:34.950 03:25:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:34.950 03:25:09 -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:34.950 03:25:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:34.950 03:25:09 -- common/autotest_common.sh@10 -- # set +x 00:24:34.950 03:25:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:34.950 03:25:09 -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:34.950 03:25:09 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:24:34.950 03:25:09 -- common/autotest_common.sh@10 -- # set +x 00:24:34.950 03:25:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:34.950 03:25:09 -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:34.950 03:25:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:34.950 03:25:09 -- common/autotest_common.sh@10 -- # set +x 00:24:34.950 [2024-04-25 03:25:09.410941] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:34.950 03:25:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:34.950 03:25:09 -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:34.950 03:25:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:34.950 03:25:09 -- common/autotest_common.sh@10 -- # set +x 00:24:34.950 03:25:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:34.950 03:25:09 -- host/fio.sh@36 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:34.950 03:25:09 -- host/fio.sh@39 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:34.950 03:25:09 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:34.950 03:25:09 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:24:34.950 03:25:09 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:34.950 03:25:09 -- common/autotest_common.sh@1325 -- # local sanitizers 00:24:34.950 03:25:09 -- common/autotest_common.sh@1326 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:34.950 03:25:09 -- common/autotest_common.sh@1327 -- # shift 00:24:34.950 03:25:09 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:24:34.950 03:25:09 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:24:34.950 03:25:09 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:34.950 03:25:09 -- common/autotest_common.sh@1331 -- # grep libasan 00:24:34.950 03:25:09 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:24:34.950 03:25:09 -- common/autotest_common.sh@1331 -- # asan_lib= 00:24:34.950 03:25:09 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:24:34.950 03:25:09 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:24:34.950 03:25:09 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:34.950 03:25:09 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:24:34.950 03:25:09 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:24:35.208 03:25:09 -- common/autotest_common.sh@1331 -- # asan_lib= 00:24:35.208 03:25:09 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:24:35.208 03:25:09 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:35.208 03:25:09 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:35.208 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:35.208 fio-3.35 00:24:35.208 Starting 1 thread 00:24:35.208 EAL: No free 2048 kB hugepages reported on node 1 00:24:37.743 00:24:37.743 test: (groupid=0, jobs=1): err= 0: pid=1581806: Thu Apr 25 03:25:12 
2024 00:24:37.743 read: IOPS=8795, BW=34.4MiB/s (36.0MB/s)(68.9MiB/2006msec) 00:24:37.743 slat (nsec): min=1911, max=160893, avg=2440.34, stdev=1860.12 00:24:37.743 clat (usec): min=3504, max=13700, avg=8030.75, stdev=612.33 00:24:37.743 lat (usec): min=3534, max=13702, avg=8033.19, stdev=612.20 00:24:37.743 clat percentiles (usec): 00:24:37.743 | 1.00th=[ 6652], 5.00th=[ 7111], 10.00th=[ 7308], 20.00th=[ 7570], 00:24:37.743 | 30.00th=[ 7767], 40.00th=[ 7898], 50.00th=[ 8029], 60.00th=[ 8160], 00:24:37.743 | 70.00th=[ 8291], 80.00th=[ 8455], 90.00th=[ 8717], 95.00th=[ 8979], 00:24:37.743 | 99.00th=[ 9372], 99.50th=[ 9634], 99.90th=[11600], 99.95th=[12780], 00:24:37.743 | 99.99th=[13698] 00:24:37.743 bw ( KiB/s): min=33984, max=36008, per=99.91%, avg=35150.00, stdev=848.84, samples=4 00:24:37.743 iops : min= 8496, max= 9002, avg=8787.50, stdev=212.21, samples=4 00:24:37.743 write: IOPS=8804, BW=34.4MiB/s (36.1MB/s)(69.0MiB/2006msec); 0 zone resets 00:24:37.743 slat (nsec): min=1997, max=133750, avg=2579.20, stdev=1463.94 00:24:37.743 clat (usec): min=1428, max=12647, avg=6423.81, stdev=543.82 00:24:37.743 lat (usec): min=1437, max=12649, avg=6426.39, stdev=543.75 00:24:37.743 clat percentiles (usec): 00:24:37.743 | 1.00th=[ 5211], 5.00th=[ 5604], 10.00th=[ 5800], 20.00th=[ 5997], 00:24:37.743 | 30.00th=[ 6194], 40.00th=[ 6325], 50.00th=[ 6456], 60.00th=[ 6521], 00:24:37.743 | 70.00th=[ 6652], 80.00th=[ 6849], 90.00th=[ 7046], 95.00th=[ 7242], 00:24:37.743 | 99.00th=[ 7570], 99.50th=[ 7767], 99.90th=[10290], 99.95th=[11731], 00:24:37.743 | 99.99th=[11863] 00:24:37.743 bw ( KiB/s): min=34904, max=35456, per=99.96%, avg=35206.00, stdev=240.48, samples=4 00:24:37.743 iops : min= 8726, max= 8864, avg=8801.50, stdev=60.12, samples=4 00:24:37.743 lat (msec) : 2=0.01%, 4=0.09%, 10=99.72%, 20=0.18% 00:24:37.743 cpu : usr=51.42%, sys=39.85%, ctx=58, majf=0, minf=6 00:24:37.743 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:24:37.743 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:37.743 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:37.743 issued rwts: total=17643,17662,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:37.743 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:37.743 00:24:37.743 Run status group 0 (all jobs): 00:24:37.743 READ: bw=34.4MiB/s (36.0MB/s), 34.4MiB/s-34.4MiB/s (36.0MB/s-36.0MB/s), io=68.9MiB (72.3MB), run=2006-2006msec 00:24:37.743 WRITE: bw=34.4MiB/s (36.1MB/s), 34.4MiB/s-34.4MiB/s (36.1MB/s-36.1MB/s), io=69.0MiB (72.3MB), run=2006-2006msec 00:24:37.743 03:25:12 -- host/fio.sh@43 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:37.743 03:25:12 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:37.743 03:25:12 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:24:37.743 03:25:12 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:37.743 03:25:12 -- common/autotest_common.sh@1325 -- # local sanitizers 00:24:37.743 03:25:12 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:37.743 03:25:12 -- common/autotest_common.sh@1327 -- # shift 00:24:37.743 03:25:12 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:24:37.743 03:25:12 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:24:37.743 03:25:12 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:37.743 03:25:12 -- common/autotest_common.sh@1331 -- # grep libasan 00:24:37.743 03:25:12 -- 
common/autotest_common.sh@1331 -- # awk '{print $3}' 00:24:37.743 03:25:12 -- common/autotest_common.sh@1331 -- # asan_lib= 00:24:37.743 03:25:12 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:24:37.743 03:25:12 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:24:37.743 03:25:12 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:37.743 03:25:12 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:24:37.743 03:25:12 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:24:37.743 03:25:12 -- common/autotest_common.sh@1331 -- # asan_lib= 00:24:37.743 03:25:12 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:24:37.743 03:25:12 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:37.743 03:25:12 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:38.002 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:38.002 fio-3.35 00:24:38.002 Starting 1 thread 00:24:38.002 EAL: No free 2048 kB hugepages reported on node 1 00:24:40.534 00:24:40.534 test: (groupid=0, jobs=1): err= 0: pid=1582262: Thu Apr 25 03:25:14 2024 00:24:40.534 read: IOPS=7075, BW=111MiB/s (116MB/s)(223MiB/2016msec) 00:24:40.534 slat (nsec): min=2895, max=94853, avg=3492.22, stdev=1625.29 00:24:40.534 clat (usec): min=3850, max=26927, avg=10882.05, stdev=3104.92 00:24:40.534 lat (usec): min=3853, max=26931, avg=10885.54, stdev=3105.04 00:24:40.534 clat percentiles (usec): 00:24:40.534 | 1.00th=[ 5342], 5.00th=[ 6456], 10.00th=[ 7242], 20.00th=[ 8291], 00:24:40.534 | 30.00th=[ 8979], 40.00th=[ 9765], 50.00th=[10552], 60.00th=[11338], 00:24:40.534 | 70.00th=[12256], 80.00th=[13173], 
90.00th=[14746], 95.00th=[16909], 00:24:40.534 | 99.00th=[20579], 99.50th=[21890], 99.90th=[23462], 99.95th=[24249], 00:24:40.534 | 99.99th=[25822] 00:24:40.534 bw ( KiB/s): min=40704, max=69524, per=50.35%, avg=57005.00, stdev=13052.59, samples=4 00:24:40.534 iops : min= 2544, max= 4345, avg=3562.75, stdev=815.71, samples=4 00:24:40.534 write: IOPS=4073, BW=63.6MiB/s (66.7MB/s)(116MiB/1830msec); 0 zone resets 00:24:40.534 slat (usec): min=30, max=140, avg=33.10, stdev= 4.47 00:24:40.534 clat (usec): min=3804, max=27752, avg=12824.34, stdev=3605.14 00:24:40.534 lat (usec): min=3836, max=27787, avg=12857.45, stdev=3605.51 00:24:40.534 clat percentiles (usec): 00:24:40.534 | 1.00th=[ 7963], 5.00th=[ 8717], 10.00th=[ 9241], 20.00th=[ 9896], 00:24:40.534 | 30.00th=[10683], 40.00th=[11207], 50.00th=[11863], 60.00th=[12518], 00:24:40.534 | 70.00th=[13566], 80.00th=[15270], 90.00th=[18482], 95.00th=[20317], 00:24:40.534 | 99.00th=[23725], 99.50th=[24511], 99.90th=[25297], 99.95th=[25560], 00:24:40.534 | 99.99th=[27657] 00:24:40.534 bw ( KiB/s): min=41984, max=72686, per=91.05%, avg=59339.50, stdev=13876.78, samples=4 00:24:40.534 iops : min= 2624, max= 4542, avg=3708.50, stdev=867.02, samples=4 00:24:40.534 lat (msec) : 4=0.03%, 10=35.40%, 20=61.81%, 50=2.76% 00:24:40.534 cpu : usr=68.98%, sys=25.71%, ctx=35, majf=0, minf=2 00:24:40.534 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:24:40.534 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:40.534 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:40.534 issued rwts: total=14264,7454,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:40.534 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:40.534 00:24:40.534 Run status group 0 (all jobs): 00:24:40.534 READ: bw=111MiB/s (116MB/s), 111MiB/s-111MiB/s (116MB/s-116MB/s), io=223MiB (234MB), run=2016-2016msec 00:24:40.534 WRITE: bw=63.6MiB/s (66.7MB/s), 63.6MiB/s-63.6MiB/s 
(66.7MB/s-66.7MB/s), io=116MiB (122MB), run=1830-1830msec 00:24:40.534 03:25:14 -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:40.534 03:25:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:40.534 03:25:14 -- common/autotest_common.sh@10 -- # set +x 00:24:40.534 03:25:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:40.534 03:25:14 -- host/fio.sh@47 -- # '[' 1 -eq 1 ']' 00:24:40.534 03:25:14 -- host/fio.sh@49 -- # bdfs=($(get_nvme_bdfs)) 00:24:40.534 03:25:14 -- host/fio.sh@49 -- # get_nvme_bdfs 00:24:40.534 03:25:14 -- common/autotest_common.sh@1499 -- # bdfs=() 00:24:40.534 03:25:14 -- common/autotest_common.sh@1499 -- # local bdfs 00:24:40.534 03:25:14 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:24:40.534 03:25:14 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:40.534 03:25:14 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:24:40.534 03:25:14 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:24:40.534 03:25:14 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:88:00.0 00:24:40.534 03:25:14 -- host/fio.sh@50 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:24:40.534 03:25:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:40.534 03:25:14 -- common/autotest_common.sh@10 -- # set +x 00:24:43.822 Nvme0n1 00:24:43.822 03:25:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:43.822 03:25:17 -- host/fio.sh@51 -- # rpc_cmd bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:24:43.822 03:25:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:43.822 03:25:17 -- common/autotest_common.sh@10 -- # set +x 00:24:46.411 03:25:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:46.411 03:25:20 -- host/fio.sh@51 -- # ls_guid=7a8d3737-8e62-42f8-9aaf-b74a04c4bd8f 
00:24:46.411 03:25:20 -- host/fio.sh@52 -- # get_lvs_free_mb 7a8d3737-8e62-42f8-9aaf-b74a04c4bd8f 00:24:46.411 03:25:20 -- common/autotest_common.sh@1350 -- # local lvs_uuid=7a8d3737-8e62-42f8-9aaf-b74a04c4bd8f 00:24:46.411 03:25:20 -- common/autotest_common.sh@1351 -- # local lvs_info 00:24:46.411 03:25:20 -- common/autotest_common.sh@1352 -- # local fc 00:24:46.411 03:25:20 -- common/autotest_common.sh@1353 -- # local cs 00:24:46.411 03:25:20 -- common/autotest_common.sh@1354 -- # rpc_cmd bdev_lvol_get_lvstores 00:24:46.411 03:25:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:46.411 03:25:20 -- common/autotest_common.sh@10 -- # set +x 00:24:46.411 03:25:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:46.411 03:25:20 -- common/autotest_common.sh@1354 -- # lvs_info='[ 00:24:46.411 { 00:24:46.411 "uuid": "7a8d3737-8e62-42f8-9aaf-b74a04c4bd8f", 00:24:46.411 "name": "lvs_0", 00:24:46.411 "base_bdev": "Nvme0n1", 00:24:46.411 "total_data_clusters": 930, 00:24:46.411 "free_clusters": 930, 00:24:46.411 "block_size": 512, 00:24:46.411 "cluster_size": 1073741824 00:24:46.411 } 00:24:46.411 ]' 00:24:46.411 03:25:20 -- common/autotest_common.sh@1355 -- # jq '.[] | select(.uuid=="7a8d3737-8e62-42f8-9aaf-b74a04c4bd8f") .free_clusters' 00:24:46.411 03:25:20 -- common/autotest_common.sh@1355 -- # fc=930 00:24:46.411 03:25:20 -- common/autotest_common.sh@1356 -- # jq '.[] | select(.uuid=="7a8d3737-8e62-42f8-9aaf-b74a04c4bd8f") .cluster_size' 00:24:46.411 03:25:20 -- common/autotest_common.sh@1356 -- # cs=1073741824 00:24:46.411 03:25:20 -- common/autotest_common.sh@1359 -- # free_mb=952320 00:24:46.411 03:25:20 -- common/autotest_common.sh@1360 -- # echo 952320 00:24:46.411 952320 00:24:46.411 03:25:20 -- host/fio.sh@53 -- # rpc_cmd bdev_lvol_create -l lvs_0 lbd_0 952320 00:24:46.411 03:25:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:46.411 03:25:20 -- common/autotest_common.sh@10 -- # set +x 00:24:46.411 
1aa177a0-a102-4c24-a6d2-e48e6b20a026 00:24:46.411 03:25:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:46.411 03:25:20 -- host/fio.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:24:46.411 03:25:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:46.411 03:25:20 -- common/autotest_common.sh@10 -- # set +x 00:24:46.411 03:25:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:46.411 03:25:20 -- host/fio.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:24:46.411 03:25:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:46.411 03:25:20 -- common/autotest_common.sh@10 -- # set +x 00:24:46.411 03:25:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:46.411 03:25:20 -- host/fio.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:46.411 03:25:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:46.411 03:25:20 -- common/autotest_common.sh@10 -- # set +x 00:24:46.411 03:25:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:46.411 03:25:20 -- host/fio.sh@57 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:46.411 03:25:20 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:46.411 03:25:20 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:24:46.411 03:25:20 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:46.411 03:25:20 -- common/autotest_common.sh@1325 -- # local sanitizers 00:24:46.411 03:25:20 -- common/autotest_common.sh@1326 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:46.411 03:25:20 -- common/autotest_common.sh@1327 -- # shift 00:24:46.411 03:25:20 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:24:46.411 03:25:20 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:24:46.411 03:25:20 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:46.411 03:25:20 -- common/autotest_common.sh@1331 -- # grep libasan 00:24:46.411 03:25:20 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:24:46.411 03:25:20 -- common/autotest_common.sh@1331 -- # asan_lib= 00:24:46.411 03:25:20 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:24:46.411 03:25:20 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:24:46.411 03:25:20 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:46.411 03:25:20 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:24:46.411 03:25:20 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:24:46.411 03:25:20 -- common/autotest_common.sh@1331 -- # asan_lib= 00:24:46.411 03:25:20 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:24:46.411 03:25:20 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:46.411 03:25:20 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:46.411 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:46.411 fio-3.35 00:24:46.411 Starting 1 thread 00:24:46.411 EAL: No free 2048 kB hugepages reported on node 1 00:24:48.943 00:24:48.943 test: (groupid=0, jobs=1): err= 0: pid=1583274: Thu Apr 25 03:25:23 
2024 00:24:48.943 read: IOPS=6092, BW=23.8MiB/s (25.0MB/s)(47.8MiB/2007msec) 00:24:48.943 slat (nsec): min=1834, max=165990, avg=2302.94, stdev=2183.31 00:24:48.943 clat (usec): min=988, max=172418, avg=11590.22, stdev=11646.49 00:24:48.943 lat (usec): min=991, max=172458, avg=11592.52, stdev=11646.86 00:24:48.943 clat percentiles (msec): 00:24:48.943 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 10], 20.00th=[ 11], 00:24:48.943 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 11], 00:24:48.943 | 70.00th=[ 12], 80.00th=[ 12], 90.00th=[ 12], 95.00th=[ 13], 00:24:48.943 | 99.00th=[ 13], 99.50th=[ 159], 99.90th=[ 174], 99.95th=[ 174], 00:24:48.943 | 99.99th=[ 174] 00:24:48.943 bw ( KiB/s): min=16918, max=26952, per=99.75%, avg=24309.50, stdev=4930.07, samples=4 00:24:48.943 iops : min= 4229, max= 6738, avg=6077.25, stdev=1232.77, samples=4 00:24:48.943 write: IOPS=6071, BW=23.7MiB/s (24.9MB/s)(47.6MiB/2007msec); 0 zone resets 00:24:48.943 slat (nsec): min=1979, max=142577, avg=2435.41, stdev=1613.30 00:24:48.943 clat (usec): min=418, max=170479, avg=9278.40, stdev=10932.88 00:24:48.943 lat (usec): min=422, max=170487, avg=9280.84, stdev=10933.28 00:24:48.943 clat percentiles (msec): 00:24:48.943 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 8], 00:24:48.943 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 9], 00:24:48.943 | 70.00th=[ 9], 80.00th=[ 10], 90.00th=[ 10], 95.00th=[ 10], 00:24:48.943 | 99.00th=[ 11], 99.50th=[ 16], 99.90th=[ 169], 99.95th=[ 171], 00:24:48.943 | 99.99th=[ 171] 00:24:48.943 bw ( KiB/s): min=17924, max=26432, per=99.84%, avg=24247.00, stdev=4215.90, samples=4 00:24:48.943 iops : min= 4481, max= 6608, avg=6061.75, stdev=1053.98, samples=4 00:24:48.943 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:24:48.943 lat (msec) : 2=0.02%, 4=0.17%, 10=57.38%, 20=41.88%, 250=0.52% 00:24:48.943 cpu : usr=51.99%, sys=41.08%, ctx=56, majf=0, minf=6 00:24:48.943 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:24:48.943 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:48.943 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:48.943 issued rwts: total=12228,12186,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:48.943 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:48.943 00:24:48.943 Run status group 0 (all jobs): 00:24:48.943 READ: bw=23.8MiB/s (25.0MB/s), 23.8MiB/s-23.8MiB/s (25.0MB/s-25.0MB/s), io=47.8MiB (50.1MB), run=2007-2007msec 00:24:48.943 WRITE: bw=23.7MiB/s (24.9MB/s), 23.7MiB/s-23.7MiB/s (24.9MB/s-24.9MB/s), io=47.6MiB (49.9MB), run=2007-2007msec 00:24:48.943 03:25:23 -- host/fio.sh@59 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:48.943 03:25:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:48.943 03:25:23 -- common/autotest_common.sh@10 -- # set +x 00:24:48.943 03:25:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:48.943 03:25:23 -- host/fio.sh@62 -- # rpc_cmd bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:24:48.943 03:25:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:48.943 03:25:23 -- common/autotest_common.sh@10 -- # set +x 00:24:49.507 03:25:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:49.507 03:25:23 -- host/fio.sh@62 -- # ls_nested_guid=629ed7df-4b2f-4232-bd8e-8df37b3180a2 00:24:49.507 03:25:23 -- host/fio.sh@63 -- # get_lvs_free_mb 629ed7df-4b2f-4232-bd8e-8df37b3180a2 00:24:49.507 03:25:23 -- common/autotest_common.sh@1350 -- # local lvs_uuid=629ed7df-4b2f-4232-bd8e-8df37b3180a2 00:24:49.507 03:25:23 -- common/autotest_common.sh@1351 -- # local lvs_info 00:24:49.507 03:25:23 -- common/autotest_common.sh@1352 -- # local fc 00:24:49.507 03:25:23 -- common/autotest_common.sh@1353 -- # local cs 00:24:49.508 03:25:23 -- common/autotest_common.sh@1354 -- # rpc_cmd bdev_lvol_get_lvstores 00:24:49.508 03:25:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:49.508 03:25:23 -- 
common/autotest_common.sh@10 -- # set +x 00:24:49.508 03:25:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:49.508 03:25:23 -- common/autotest_common.sh@1354 -- # lvs_info='[ 00:24:49.508 { 00:24:49.508 "uuid": "7a8d3737-8e62-42f8-9aaf-b74a04c4bd8f", 00:24:49.508 "name": "lvs_0", 00:24:49.508 "base_bdev": "Nvme0n1", 00:24:49.508 "total_data_clusters": 930, 00:24:49.508 "free_clusters": 0, 00:24:49.508 "block_size": 512, 00:24:49.508 "cluster_size": 1073741824 00:24:49.508 }, 00:24:49.508 { 00:24:49.508 "uuid": "629ed7df-4b2f-4232-bd8e-8df37b3180a2", 00:24:49.508 "name": "lvs_n_0", 00:24:49.508 "base_bdev": "1aa177a0-a102-4c24-a6d2-e48e6b20a026", 00:24:49.508 "total_data_clusters": 237847, 00:24:49.508 "free_clusters": 237847, 00:24:49.508 "block_size": 512, 00:24:49.508 "cluster_size": 4194304 00:24:49.508 } 00:24:49.508 ]' 00:24:49.508 03:25:23 -- common/autotest_common.sh@1355 -- # jq '.[] | select(.uuid=="629ed7df-4b2f-4232-bd8e-8df37b3180a2") .free_clusters' 00:24:49.767 03:25:24 -- common/autotest_common.sh@1355 -- # fc=237847 00:24:49.767 03:25:24 -- common/autotest_common.sh@1356 -- # jq '.[] | select(.uuid=="629ed7df-4b2f-4232-bd8e-8df37b3180a2") .cluster_size' 00:24:49.767 03:25:24 -- common/autotest_common.sh@1356 -- # cs=4194304 00:24:49.767 03:25:24 -- common/autotest_common.sh@1359 -- # free_mb=951388 00:24:49.767 03:25:24 -- common/autotest_common.sh@1360 -- # echo 951388 00:24:49.767 951388 00:24:49.767 03:25:24 -- host/fio.sh@64 -- # rpc_cmd bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:24:49.767 03:25:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:49.767 03:25:24 -- common/autotest_common.sh@10 -- # set +x 00:24:50.026 9c6da916-40ed-41af-9b81-4e282b6b0d22 00:24:50.026 03:25:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:50.026 03:25:24 -- host/fio.sh@65 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:24:50.026 03:25:24 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:24:50.026 03:25:24 -- common/autotest_common.sh@10 -- # set +x 00:24:50.026 03:25:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:50.026 03:25:24 -- host/fio.sh@66 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:24:50.026 03:25:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:50.026 03:25:24 -- common/autotest_common.sh@10 -- # set +x 00:24:50.026 03:25:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:50.026 03:25:24 -- host/fio.sh@67 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:24:50.026 03:25:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:50.026 03:25:24 -- common/autotest_common.sh@10 -- # set +x 00:24:50.285 03:25:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:50.285 03:25:24 -- host/fio.sh@68 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:50.285 03:25:24 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:50.285 03:25:24 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:24:50.285 03:25:24 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:50.285 03:25:24 -- common/autotest_common.sh@1325 -- # local sanitizers 00:24:50.285 03:25:24 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:50.285 03:25:24 -- common/autotest_common.sh@1327 -- # shift 00:24:50.285 03:25:24 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:24:50.285 03:25:24 -- common/autotest_common.sh@1330 -- # for sanitizer in 
"${sanitizers[@]}" 00:24:50.285 03:25:24 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:50.285 03:25:24 -- common/autotest_common.sh@1331 -- # grep libasan 00:24:50.285 03:25:24 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:24:50.285 03:25:24 -- common/autotest_common.sh@1331 -- # asan_lib= 00:24:50.285 03:25:24 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:24:50.285 03:25:24 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:24:50.285 03:25:24 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:50.285 03:25:24 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:24:50.285 03:25:24 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:24:50.285 03:25:24 -- common/autotest_common.sh@1331 -- # asan_lib= 00:24:50.285 03:25:24 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:24:50.285 03:25:24 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:50.285 03:25:24 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:50.285 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:50.285 fio-3.35 00:24:50.285 Starting 1 thread 00:24:50.285 EAL: No free 2048 kB hugepages reported on node 1 00:24:52.814 00:24:52.814 test: (groupid=0, jobs=1): err= 0: pid=1583869: Thu Apr 25 03:25:27 2024 00:24:52.814 read: IOPS=5854, BW=22.9MiB/s (24.0MB/s)(45.9MiB/2009msec) 00:24:52.814 slat (nsec): min=1944, max=125118, avg=2439.88, stdev=1888.29 00:24:52.814 clat (usec): min=4422, max=21375, avg=12135.89, stdev=998.02 00:24:52.814 lat (usec): min=4427, max=21378, avg=12138.33, stdev=997.88 
00:24:52.814 clat percentiles (usec): 00:24:52.814 | 1.00th=[10028], 5.00th=[10683], 10.00th=[10945], 20.00th=[11338], 00:24:52.814 | 30.00th=[11600], 40.00th=[11863], 50.00th=[12125], 60.00th=[12387], 00:24:52.814 | 70.00th=[12649], 80.00th=[12911], 90.00th=[13304], 95.00th=[13698], 00:24:52.814 | 99.00th=[14353], 99.50th=[14615], 99.90th=[18482], 99.95th=[20317], 00:24:52.814 | 99.99th=[21365] 00:24:52.814 bw ( KiB/s): min=22472, max=23776, per=99.93%, avg=23402.00, stdev=622.20, samples=4 00:24:52.814 iops : min= 5618, max= 5944, avg=5850.50, stdev=155.55, samples=4 00:24:52.814 write: IOPS=5848, BW=22.8MiB/s (24.0MB/s)(45.9MiB/2009msec); 0 zone resets 00:24:52.814 slat (nsec): min=2062, max=93496, avg=2615.92, stdev=1623.49 00:24:52.814 clat (usec): min=2548, max=18666, avg=9622.97, stdev=903.37 00:24:52.814 lat (usec): min=2554, max=18669, avg=9625.58, stdev=903.28 00:24:52.814 clat percentiles (usec): 00:24:52.814 | 1.00th=[ 7635], 5.00th=[ 8291], 10.00th=[ 8586], 20.00th=[ 8979], 00:24:52.814 | 30.00th=[ 9241], 40.00th=[ 9372], 50.00th=[ 9634], 60.00th=[ 9765], 00:24:52.814 | 70.00th=[10028], 80.00th=[10290], 90.00th=[10683], 95.00th=[10945], 00:24:52.814 | 99.00th=[11600], 99.50th=[11994], 99.90th=[16450], 99.95th=[17695], 00:24:52.814 | 99.99th=[18482] 00:24:52.814 bw ( KiB/s): min=23232, max=23488, per=99.89%, avg=23366.00, stdev=105.20, samples=4 00:24:52.814 iops : min= 5808, max= 5872, avg=5841.50, stdev=26.30, samples=4 00:24:52.814 lat (msec) : 4=0.05%, 10=34.89%, 20=65.02%, 50=0.04% 00:24:52.814 cpu : usr=55.28%, sys=38.60%, ctx=85, majf=0, minf=6 00:24:52.814 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:24:52.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:52.814 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:52.814 issued rwts: total=11762,11749,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:52.814 latency : target=0, window=0, percentile=100.00%, depth=128 
00:24:52.814 00:24:52.814 Run status group 0 (all jobs): 00:24:52.814 READ: bw=22.9MiB/s (24.0MB/s), 22.9MiB/s-22.9MiB/s (24.0MB/s-24.0MB/s), io=45.9MiB (48.2MB), run=2009-2009msec 00:24:52.814 WRITE: bw=22.8MiB/s (24.0MB/s), 22.8MiB/s-22.8MiB/s (24.0MB/s-24.0MB/s), io=45.9MiB (48.1MB), run=2009-2009msec 00:24:52.814 03:25:27 -- host/fio.sh@70 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:24:52.814 03:25:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:52.814 03:25:27 -- common/autotest_common.sh@10 -- # set +x 00:24:52.814 03:25:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:52.814 03:25:27 -- host/fio.sh@72 -- # sync 00:24:52.814 03:25:27 -- host/fio.sh@74 -- # rpc_cmd bdev_lvol_delete lvs_n_0/lbd_nest_0 00:24:52.814 03:25:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:52.814 03:25:27 -- common/autotest_common.sh@10 -- # set +x 00:24:57.007 03:25:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:57.007 03:25:30 -- host/fio.sh@75 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_n_0 00:24:57.007 03:25:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:57.007 03:25:30 -- common/autotest_common.sh@10 -- # set +x 00:24:57.007 03:25:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:57.007 03:25:30 -- host/fio.sh@76 -- # rpc_cmd bdev_lvol_delete lvs_0/lbd_0 00:24:57.007 03:25:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:57.007 03:25:30 -- common/autotest_common.sh@10 -- # set +x 00:24:58.910 03:25:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:58.910 03:25:33 -- host/fio.sh@77 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_0 00:24:58.910 03:25:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:58.910 03:25:33 -- common/autotest_common.sh@10 -- # set +x 00:24:58.910 03:25:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:58.910 03:25:33 -- host/fio.sh@78 -- # rpc_cmd bdev_nvme_detach_controller Nvme0 00:24:58.910 03:25:33 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:24:58.910 03:25:33 -- common/autotest_common.sh@10 -- # set +x 00:25:00.816 03:25:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:00.816 03:25:35 -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:25:00.816 03:25:35 -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:25:00.816 03:25:35 -- host/fio.sh@84 -- # nvmftestfini 00:25:00.816 03:25:35 -- nvmf/common.sh@477 -- # nvmfcleanup 00:25:00.816 03:25:35 -- nvmf/common.sh@117 -- # sync 00:25:00.816 03:25:35 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:00.816 03:25:35 -- nvmf/common.sh@120 -- # set +e 00:25:00.816 03:25:35 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:00.816 03:25:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:00.816 rmmod nvme_tcp 00:25:00.816 rmmod nvme_fabrics 00:25:00.816 rmmod nvme_keyring 00:25:00.816 03:25:35 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:00.816 03:25:35 -- nvmf/common.sh@124 -- # set -e 00:25:00.816 03:25:35 -- nvmf/common.sh@125 -- # return 0 00:25:00.816 03:25:35 -- nvmf/common.sh@478 -- # '[' -n 1581591 ']' 00:25:00.816 03:25:35 -- nvmf/common.sh@479 -- # killprocess 1581591 00:25:00.816 03:25:35 -- common/autotest_common.sh@936 -- # '[' -z 1581591 ']' 00:25:00.816 03:25:35 -- common/autotest_common.sh@940 -- # kill -0 1581591 00:25:00.816 03:25:35 -- common/autotest_common.sh@941 -- # uname 00:25:00.816 03:25:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:00.816 03:25:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1581591 00:25:00.816 03:25:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:00.816 03:25:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:00.816 03:25:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1581591' 00:25:00.816 killing process with pid 1581591 00:25:00.816 03:25:35 -- common/autotest_common.sh@955 -- # kill 1581591 00:25:00.816 
03:25:35 -- common/autotest_common.sh@960 -- # wait 1581591 00:25:01.078 03:25:35 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:25:01.078 03:25:35 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:25:01.078 03:25:35 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:25:01.078 03:25:35 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:01.078 03:25:35 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:01.078 03:25:35 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:01.078 03:25:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:01.078 03:25:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:02.988 03:25:37 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:02.988 00:25:02.988 real 0m30.626s 00:25:02.988 user 1m48.485s 00:25:02.988 sys 0m6.715s 00:25:02.988 03:25:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:02.988 03:25:37 -- common/autotest_common.sh@10 -- # set +x 00:25:02.988 ************************************ 00:25:02.988 END TEST nvmf_fio_host 00:25:02.988 ************************************ 00:25:02.988 03:25:37 -- nvmf/nvmf.sh@98 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:02.988 03:25:37 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:02.988 03:25:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:02.988 03:25:37 -- common/autotest_common.sh@10 -- # set +x 00:25:03.247 ************************************ 00:25:03.247 START TEST nvmf_failover 00:25:03.247 ************************************ 00:25:03.247 03:25:37 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:03.247 * Looking for test storage... 
00:25:03.247 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:03.247 03:25:37 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:03.247 03:25:37 -- nvmf/common.sh@7 -- # uname -s 00:25:03.247 03:25:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:03.247 03:25:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:03.247 03:25:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:03.247 03:25:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:03.247 03:25:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:03.247 03:25:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:03.247 03:25:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:03.247 03:25:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:03.247 03:25:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:03.247 03:25:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:03.247 03:25:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:03.247 03:25:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:03.247 03:25:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:03.247 03:25:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:03.247 03:25:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:03.247 03:25:37 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:03.247 03:25:37 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:03.247 03:25:37 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:03.247 03:25:37 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:03.247 03:25:37 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:03.247 03:25:37 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.247 03:25:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.247 03:25:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.247 03:25:37 -- paths/export.sh@5 -- # export PATH 00:25:03.247 03:25:37 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.247 03:25:37 -- nvmf/common.sh@47 -- # : 0 00:25:03.247 03:25:37 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:03.247 03:25:37 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:03.247 03:25:37 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:03.247 03:25:37 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:03.247 03:25:37 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:03.247 03:25:37 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:03.247 03:25:37 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:03.247 03:25:37 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:03.247 03:25:37 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:03.247 03:25:37 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:03.247 03:25:37 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:03.247 03:25:37 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:03.247 03:25:37 -- host/failover.sh@18 -- # nvmftestinit 00:25:03.247 03:25:37 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:25:03.247 03:25:37 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:03.247 03:25:37 -- nvmf/common.sh@437 -- # prepare_net_devs 00:25:03.247 03:25:37 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:25:03.247 03:25:37 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:25:03.247 03:25:37 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:25:03.247 03:25:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:03.247 03:25:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:03.247 03:25:37 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:25:03.247 03:25:37 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:25:03.247 03:25:37 -- nvmf/common.sh@285 -- # xtrace_disable 00:25:03.247 03:25:37 -- common/autotest_common.sh@10 -- # set +x 00:25:05.196 03:25:39 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:05.196 03:25:39 -- nvmf/common.sh@291 -- # pci_devs=() 00:25:05.196 03:25:39 -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:05.196 03:25:39 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:05.196 03:25:39 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:05.196 03:25:39 -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:05.196 03:25:39 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:05.196 03:25:39 -- nvmf/common.sh@295 -- # net_devs=() 00:25:05.196 03:25:39 -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:05.196 03:25:39 -- nvmf/common.sh@296 -- # e810=() 00:25:05.196 03:25:39 -- nvmf/common.sh@296 -- # local -ga e810 00:25:05.196 03:25:39 -- nvmf/common.sh@297 -- # x722=() 00:25:05.196 03:25:39 -- nvmf/common.sh@297 -- # local -ga x722 00:25:05.196 03:25:39 -- nvmf/common.sh@298 -- # mlx=() 00:25:05.196 03:25:39 -- nvmf/common.sh@298 -- # local -ga mlx 00:25:05.196 03:25:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:05.196 03:25:39 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:05.196 03:25:39 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:05.196 03:25:39 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:05.196 03:25:39 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:05.196 03:25:39 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:25:05.196 03:25:39 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:05.196 03:25:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:05.196 03:25:39 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:05.196 03:25:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:05.197 03:25:39 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:05.197 03:25:39 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:05.197 03:25:39 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:05.197 03:25:39 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:05.197 03:25:39 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:05.197 03:25:39 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:05.197 03:25:39 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:05.197 03:25:39 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:05.197 03:25:39 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:05.197 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:05.197 03:25:39 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:05.197 03:25:39 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:05.197 03:25:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:05.197 03:25:39 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:05.197 03:25:39 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:05.197 03:25:39 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:05.197 03:25:39 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:05.197 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:05.197 03:25:39 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:05.197 03:25:39 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:05.197 03:25:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:05.197 03:25:39 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:05.197 03:25:39 
-- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:05.197 03:25:39 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:05.197 03:25:39 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:05.197 03:25:39 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:05.197 03:25:39 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:05.197 03:25:39 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:05.197 03:25:39 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:05.197 03:25:39 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:05.197 03:25:39 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:05.197 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:05.197 03:25:39 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:05.197 03:25:39 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:05.197 03:25:39 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:05.197 03:25:39 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:05.197 03:25:39 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:05.197 03:25:39 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:05.197 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:05.197 03:25:39 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:05.197 03:25:39 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:25:05.197 03:25:39 -- nvmf/common.sh@403 -- # is_hw=yes 00:25:05.197 03:25:39 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:25:05.197 03:25:39 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:25:05.197 03:25:39 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:25:05.197 03:25:39 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:05.197 03:25:39 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:05.197 03:25:39 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:05.197 03:25:39 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 
00:25:05.197 03:25:39 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:05.197 03:25:39 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:05.197 03:25:39 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:05.197 03:25:39 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:05.197 03:25:39 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:05.197 03:25:39 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:05.197 03:25:39 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:05.197 03:25:39 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:05.197 03:25:39 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:05.197 03:25:39 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:05.197 03:25:39 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:05.197 03:25:39 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:05.197 03:25:39 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:05.197 03:25:39 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:05.197 03:25:39 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:05.197 03:25:39 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:05.197 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:05.197 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.125 ms 00:25:05.197 00:25:05.197 --- 10.0.0.2 ping statistics --- 00:25:05.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:05.197 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:25:05.197 03:25:39 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:05.197 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:05.197 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:25:05.197 00:25:05.197 --- 10.0.0.1 ping statistics --- 00:25:05.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:05.197 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:25:05.197 03:25:39 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:05.197 03:25:39 -- nvmf/common.sh@411 -- # return 0 00:25:05.197 03:25:39 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:25:05.197 03:25:39 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:05.197 03:25:39 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:05.197 03:25:39 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:05.197 03:25:39 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:05.197 03:25:39 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:05.197 03:25:39 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:05.197 03:25:39 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:25:05.197 03:25:39 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:05.197 03:25:39 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:05.197 03:25:39 -- common/autotest_common.sh@10 -- # set +x 00:25:05.197 03:25:39 -- nvmf/common.sh@470 -- # nvmfpid=1586981 00:25:05.197 03:25:39 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:05.197 03:25:39 -- nvmf/common.sh@471 -- # waitforlisten 1586981 00:25:05.197 03:25:39 -- common/autotest_common.sh@817 -- # '[' -z 1586981 ']' 00:25:05.197 03:25:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:05.197 03:25:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:05.197 03:25:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
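The `nvmf_tcp_init` sequence above moves the target port into a dedicated network namespace, addresses both sides of the link, opens the NVMe/TCP port in iptables, and verifies reachability with ping. A dry-run sketch of those steps, assuming the `cvl_0_0`/`cvl_0_1` names from this log; it echoes the commands instead of executing them, since the real sequence needs root and the physical ports:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace plumbing performed by nvmf_tcp_init above.
# RUN=echo (the default here) prints each command instead of running it;
# set RUN= (empty) to execute for real, as root, with the ports present.
RUN="${RUN-echo}"
TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk

netns_setup() {
    $RUN ip netns add "$NS"                        # isolate the target side
    $RUN ip link set "$TGT_IF" netns "$NS"         # move the target port in
    $RUN ip addr add 10.0.0.1/24 dev "$INI_IF"     # initiator address
    $RUN ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"   # target address
    $RUN ip link set "$INI_IF" up
    $RUN ip netns exec "$NS" ip link set "$TGT_IF" up
    $RUN ip netns exec "$NS" ip link set lo up
    $RUN iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP
    $RUN ping -c 1 10.0.0.2                        # reachability check, as in the log
}

netns_setup
```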
00:25:05.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:05.197 03:25:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:05.197 03:25:39 -- common/autotest_common.sh@10 -- # set +x 00:25:05.456 [2024-04-25 03:25:39.701358] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:25:05.456 [2024-04-25 03:25:39.701431] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:05.456 EAL: No free 2048 kB hugepages reported on node 1 00:25:05.456 [2024-04-25 03:25:39.768705] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:05.456 [2024-04-25 03:25:39.873672] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:05.456 [2024-04-25 03:25:39.873730] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:05.456 [2024-04-25 03:25:39.873759] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:05.456 [2024-04-25 03:25:39.873770] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:05.456 [2024-04-25 03:25:39.873780] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:05.456 [2024-04-25 03:25:39.873835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:05.456 [2024-04-25 03:25:39.873908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:05.456 [2024-04-25 03:25:39.873912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:05.715 03:25:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:05.715 03:25:39 -- common/autotest_common.sh@850 -- # return 0 00:25:05.715 03:25:39 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:05.715 03:25:39 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:05.715 03:25:39 -- common/autotest_common.sh@10 -- # set +x 00:25:05.715 03:25:39 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:05.715 03:25:39 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:05.975 [2024-04-25 03:25:40.223392] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:05.975 03:25:40 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:06.234 Malloc0 00:25:06.234 03:25:40 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:06.493 03:25:40 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:06.752 03:25:41 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:06.752 [2024-04-25 03:25:41.227654] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:06.752 03:25:41 -- host/failover.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:07.010 [2024-04-25 03:25:41.464384] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:07.010 03:25:41 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:07.269 [2024-04-25 03:25:41.709258] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:07.269 03:25:41 -- host/failover.sh@31 -- # bdevperf_pid=1587169 00:25:07.269 03:25:41 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:07.269 03:25:41 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:25:07.269 03:25:41 -- host/failover.sh@34 -- # waitforlisten 1587169 /var/tmp/bdevperf.sock 00:25:07.269 03:25:41 -- common/autotest_common.sh@817 -- # '[' -z 1587169 ']' 00:25:07.269 03:25:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:07.269 03:25:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:07.269 03:25:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:07.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
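The RPC sequence above (transport, Malloc bdev, subsystem, namespace, three listeners) is the entire failover fixture. It can be condensed into one sketch; here `rpc` is a print-only stub so the sketch runs anywhere, and the stub would be replaced with the real `scripts/rpc.py` against a running `nvmf_tgt` to provision for real:

```shell
#!/usr/bin/env bash
# Condensed sketch of the provisioning RPCs issued by host/failover.sh above.
# rpc is a print-only stub; swap the echo for the real scripts/rpc.py to execute.
rpc() { echo rpc.py "$@"; }
NQN=nqn.2016-06.io.spdk:cnode1

rpc nvmf_create_transport -t tcp -o -u 8192   # TCP transport, flags as in the log
rpc bdev_malloc_create 64 512 -b Malloc0      # 64 MiB RAM disk, 512-byte blocks
rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns "$NQN" Malloc0
for port in 4420 4421 4422; do                # three listeners: failover moves between them
    rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s "$port"
done
```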
00:25:07.269 03:25:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:07.269 03:25:41 -- common/autotest_common.sh@10 -- # set +x 00:25:07.838 03:25:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:07.838 03:25:42 -- common/autotest_common.sh@850 -- # return 0 00:25:07.838 03:25:42 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:08.097 NVMe0n1 00:25:08.097 03:25:42 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:08.666 00:25:08.666 03:25:42 -- host/failover.sh@39 -- # run_test_pid=1587334 00:25:08.666 03:25:42 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:08.666 03:25:42 -- host/failover.sh@41 -- # sleep 1 00:25:09.598 03:25:43 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:09.858 [2024-04-25 03:25:44.133792] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a29ad0 is same with the state(5) to be set 00:25:09.858 [2024-04-25 03:25:44.133894] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a29ad0 is same with the state(5) to be set 00:25:09.858 [2024-04-25 03:25:44.133928] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a29ad0 is same with the state(5) to be set 00:25:09.858 [2024-04-25 03:25:44.133940] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a29ad0 is same with the state(5) to be set 00:25:09.858 [2024-04-25 
03:25:44.133952] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a29ad0 is same with the state(5) to be set 00:25:09.858 [2024-04-25 03:25:44.133963] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a29ad0 is same with the state(5) to be set 00:25:09.858 [2024-04-25 03:25:44.133975] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a29ad0 is same with the state(5) to be set 00:25:09.858 03:25:44 -- host/failover.sh@45 -- # sleep 3 00:25:13.146 03:25:47 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:13.146 00:25:13.146 03:25:47 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:13.407 [2024-04-25 03:25:47.697146] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2a9e0 is same with the state(5) to be set 00:25:13.407 [2024-04-25 03:25:47.697202] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2a9e0 is same with the state(5) to be set 00:25:13.407 [2024-04-25 03:25:47.697231] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2a9e0 is same with the state(5) to be set 00:25:13.407 [2024-04-25 03:25:47.697243] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2a9e0 is same with the state(5) to be set 00:25:13.407 [2024-04-25 03:25:47.697256] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2a9e0 is same with the state(5) to be set 00:25:13.407 [2024-04-25 03:25:47.697268] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2a9e0 is same with the state(5) to be set 00:25:13.407 [2024-04-25 
03:25:47.697280] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2a9e0 is same with the state(5) to be set 00:25:13.407 [tcp.c:1587 recv-state message repeated for tqpair=0x1a2a9e0; repeats elided] 00:25:13.407 [2024-04-25 03:25:47.697906] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2a9e0 is same with the state(5) to be set 00:25:13.407 03:25:47 -- host/failover.sh@50 -- # sleep 3 00:25:16.698 03:25:50 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:16.698 [2024-04-25 03:25:50.960300] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:16.698 03:25:50 -- host/failover.sh@55 -- # sleep 1 00:25:17.632 03:25:51 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:17.894 [2024-04-25 03:25:52.258537] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2b610 is same with the state(5) to be set 00:25:17.894 [2024-04-25 03:25:52.258597] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2b610 is same with the state(5) to be set 00:25:17.894 [2024-04-25 03:25:52.258640] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2b610 is same with the state(5) to be set 00:25:17.894 [2024-04-25 03:25:52.258655] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2b610 is same with the state(5) to be set 00:25:17.894 [2024-04-25 03:25:52.258667] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2b610 is same with the state(5) to be set 00:25:17.894 [2024-04-25 03:25:52.258679] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2b610 is same with the state(5) to be set 00:25:17.894 [2024-04-25 03:25:52.258691] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2b610 is same with the state(5) to be set 00:25:17.894 [2024-04-25 03:25:52.258703] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2b610 is same with the state(5) to be set 00:25:17.894 [tcp.c:1587 recv-state message repeated for tqpair=0x1a2b610; repeats elided] 00:25:17.894 [2024-04-25 03:25:52.259449] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2b610 is same with the state(5) to be set 00:25:17.894 [2024-04-25 03:25:52.259460] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2b610 is same with the state(5) to be set 00:25:17.894 [2024-04-25 03:25:52.259472] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2b610 is same with the state(5) to be set 00:25:17.895 [2024-04-25 03:25:52.259484] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2b610 is same with the state(5) to be set 00:25:17.895 [2024-04-25 03:25:52.259495] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2b610 is same with the state(5) to be set 00:25:17.895 [2024-04-25 03:25:52.259507] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2b610 is same with the state(5) to be set 00:25:17.895 [2024-04-25 03:25:52.259518] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2b610 is same with the state(5) to be set 00:25:17.895 [2024-04-25 03:25:52.259530] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2b610 is same with the state(5) to be set 00:25:17.895 [2024-04-25 03:25:52.259542] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2b610 is same with the state(5) to be set 00:25:17.895 [2024-04-25 03:25:52.259553] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2b610 is same with the state(5) to be set 00:25:17.895 [2024-04-25 03:25:52.259565] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2b610 is same with the state(5) to be set 00:25:17.895 [2024-04-25 03:25:52.259577] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2b610 is same with the state(5) to be set 00:25:17.895 [2024-04-25 03:25:52.259588] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2b610 is same with the state(5) to be set 00:25:17.895 [2024-04-25 03:25:52.259600] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2b610 is same with the state(5) to be set 00:25:17.895 [2024-04-25 03:25:52.259636] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2b610 is same with the state(5) to be set 00:25:17.895 [2024-04-25 03:25:52.259651] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2b610 is same with the state(5) to be set 00:25:17.895 [2024-04-25 03:25:52.259667] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2b610 is same with the state(5) to be set 00:25:17.895 [2024-04-25 03:25:52.259680] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2b610 is same with the state(5) to be set 00:25:17.895 [2024-04-25 03:25:52.259692] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2b610 is same with the state(5) to be set 00:25:17.895 [2024-04-25 03:25:52.259704] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2b610 is same with the state(5) to be set 00:25:17.895 [2024-04-25 03:25:52.259715] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2b610 is same with the state(5) to be set 00:25:17.895 [2024-04-25 03:25:52.259727] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2b610 is same with the state(5) to be set 00:25:17.895 [2024-04-25 03:25:52.259739] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2b610 is same with the state(5) to be set 00:25:17.895 [2024-04-25 03:25:52.259751] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2b610 is same with the state(5) to be set 00:25:17.895 [2024-04-25 03:25:52.259762] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2b610 is same with the state(5) to be set 00:25:17.895 [2024-04-25 03:25:52.259774] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2b610 is same with the state(5) to be set 00:25:17.895 [2024-04-25 03:25:52.259786] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2b610 is same with the state(5) to be set 00:25:17.895 [2024-04-25 03:25:52.259798] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2b610 is same with the state(5) to be set 00:25:17.895 [2024-04-25 03:25:52.259810] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2b610 is same with the state(5) to be set 00:25:17.895 [2024-04-25 03:25:52.259822] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2b610 is same with the state(5) to be set 00:25:17.895 [2024-04-25 03:25:52.259834] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2b610 is same with the state(5) to be set 00:25:17.895 [2024-04-25 03:25:52.259846] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2b610 is same with the state(5) to be set 00:25:17.895 [2024-04-25 03:25:52.259858] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2b610 is same with the state(5) to be set 00:25:17.895 [2024-04-25 03:25:52.259870] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2b610 is same with the state(5) to be set 00:25:17.895 [2024-04-25 03:25:52.259882] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2b610 is same with the state(5) to be set 00:25:17.895 [2024-04-25 03:25:52.259894] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2b610 is same with the state(5) to be set 00:25:17.895 [2024-04-25 03:25:52.259906] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2b610 is same with the state(5) to be set 00:25:17.895 [2024-04-25 03:25:52.259933] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2b610 is same with the state(5) to be set 00:25:17.895 [2024-04-25 03:25:52.259944] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2b610 is same with the state(5) to be set 00:25:17.895 [2024-04-25 03:25:52.259956] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2b610 is same with the state(5) to be set 00:25:17.895 [2024-04-25 03:25:52.259968] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2b610 is same with the state(5) to be set 00:25:17.895 [2024-04-25 03:25:52.259980] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2b610 is same with the state(5) to be set 00:25:17.895 [2024-04-25 03:25:52.259992] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2b610 is same with the state(5) to be set 00:25:17.895 [2024-04-25 03:25:52.260004] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2b610 is same with the state(5) to be set 00:25:17.895 [2024-04-25 03:25:52.260018] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2b610 is same with the state(5) to be set 00:25:17.895 [2024-04-25 03:25:52.260031] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2b610 is same with the state(5) to be set 00:25:17.895 [2024-04-25 03:25:52.260042] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2b610 is same with the state(5) to be set 00:25:17.895 [2024-04-25 03:25:52.260054] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2b610 is same with the state(5) to be set 00:25:17.895 [2024-04-25 03:25:52.260066] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2b610 is same with the state(5) to be set 00:25:17.895 [2024-04-25 03:25:52.260077] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2b610 is same with the state(5) to be set 00:25:17.895 [2024-04-25 03:25:52.260089] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2b610 is same with the state(5) to be set 00:25:17.895 [2024-04-25 03:25:52.260101] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2b610 is same with the state(5) to be set 00:25:17.895 [2024-04-25 03:25:52.260113] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2b610 is same with the state(5) to be set 00:25:17.895 [2024-04-25 03:25:52.260125] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2b610 is same with the state(5) to be set 00:25:17.895 [2024-04-25 03:25:52.260136] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2b610 is same with the state(5) to be set 00:25:17.895 [2024-04-25 03:25:52.260148] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2b610 is same with the state(5) to be set 00:25:17.895 [2024-04-25 03:25:52.260159] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2b610 is same with the state(5) to be set 00:25:17.895 [2024-04-25 03:25:52.260171] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2b610 is same with the state(5) to be set 00:25:17.895 [2024-04-25 03:25:52.260183] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2b610 is same with the state(5) to be set 00:25:17.895 03:25:52 -- host/failover.sh@59 -- # wait 1587334 00:25:24.515 0 00:25:24.515 03:25:58 -- host/failover.sh@61 -- # killprocess 1587169 00:25:24.515 03:25:58 -- common/autotest_common.sh@936 -- # '[' -z 1587169 ']' 
00:25:24.515 03:25:58 -- common/autotest_common.sh@940 -- # kill -0 1587169 00:25:24.515 03:25:58 -- common/autotest_common.sh@941 -- # uname 00:25:24.515 03:25:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:24.515 03:25:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1587169 00:25:24.515 03:25:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:24.515 03:25:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:24.515 03:25:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1587169' 00:25:24.515 killing process with pid 1587169 00:25:24.515 03:25:58 -- common/autotest_common.sh@955 -- # kill 1587169 00:25:24.515 03:25:58 -- common/autotest_common.sh@960 -- # wait 1587169 00:25:24.515 03:25:58 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:24.515 [2024-04-25 03:25:41.775487] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:25:24.515 [2024-04-25 03:25:41.775578] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1587169 ] 00:25:24.515 EAL: No free 2048 kB hugepages reported on node 1 00:25:24.515 [2024-04-25 03:25:41.838846] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:24.515 [2024-04-25 03:25:41.945648] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:24.515 Running I/O for 15 seconds... 
00:25:24.515 [2024-04-25 03:25:44.134367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:76744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.515 [2024-04-25 03:25:44.134409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:24.515 [2024-04-25 03:25:44.134437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:77088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.515 [2024-04-25 03:25:44.134452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
[identical WRITE command/completion pairs repeated for lba:77096 through lba:77568, len:8 each, every command aborted with SQ DELETION (00/08)]
00:25:24.517 [2024-04-25 03:25:44.136264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:77576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.517 [2024-04-25 03:25:44.136278] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.517 [2024-04-25 03:25:44.136292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:77584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.517 [2024-04-25 03:25:44.136304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.517 [2024-04-25 03:25:44.136331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:77592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.517 [2024-04-25 03:25:44.136344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.517 [2024-04-25 03:25:44.136358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:77600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.517 [2024-04-25 03:25:44.136371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.517 [2024-04-25 03:25:44.136385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:77608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.517 [2024-04-25 03:25:44.136398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.517 [2024-04-25 03:25:44.136412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:77616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.517 [2024-04-25 03:25:44.136424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.517 [2024-04-25 03:25:44.136439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:77624 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.517 [2024-04-25 03:25:44.136451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.517 [2024-04-25 03:25:44.136466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:77632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.517 [2024-04-25 03:25:44.136479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.517 [2024-04-25 03:25:44.136492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:77640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.517 [2024-04-25 03:25:44.136505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.517 [2024-04-25 03:25:44.136519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:77648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.517 [2024-04-25 03:25:44.136532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.517 [2024-04-25 03:25:44.136546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:77656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.517 [2024-04-25 03:25:44.136559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.517 [2024-04-25 03:25:44.136573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:77664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.517 [2024-04-25 03:25:44.136586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.517 [2024-04-25 
03:25:44.136604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:77672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.517 [2024-04-25 03:25:44.136642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.517 [2024-04-25 03:25:44.136659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:77680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.517 [2024-04-25 03:25:44.136673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.517 [2024-04-25 03:25:44.136687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:77688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.517 [2024-04-25 03:25:44.136701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.517 [2024-04-25 03:25:44.136715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:77696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.517 [2024-04-25 03:25:44.136728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.517 [2024-04-25 03:25:44.136743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:77704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.517 [2024-04-25 03:25:44.136756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.517 [2024-04-25 03:25:44.136771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:76752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.517 [2024-04-25 03:25:44.136785] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.517 [2024-04-25 03:25:44.136801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:76760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.517 [2024-04-25 03:25:44.136814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.517 [2024-04-25 03:25:44.136829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:76768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.517 [2024-04-25 03:25:44.136843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.517 [2024-04-25 03:25:44.136858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:76776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.517 [2024-04-25 03:25:44.136871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.517 [2024-04-25 03:25:44.136886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:76784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.517 [2024-04-25 03:25:44.136899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.517 [2024-04-25 03:25:44.136930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:76792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.517 [2024-04-25 03:25:44.136943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.517 [2024-04-25 03:25:44.136957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:76800 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:24.517 [2024-04-25 03:25:44.136970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.517 [2024-04-25 03:25:44.137000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:77712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.517 [2024-04-25 03:25:44.137017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.517 [2024-04-25 03:25:44.137032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:77720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.517 [2024-04-25 03:25:44.137046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.517 [2024-04-25 03:25:44.137061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:77728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.517 [2024-04-25 03:25:44.137074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.517 [2024-04-25 03:25:44.137089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:77736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.517 [2024-04-25 03:25:44.137102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.518 [2024-04-25 03:25:44.137117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:77744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.518 [2024-04-25 03:25:44.137130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.518 [2024-04-25 03:25:44.137144] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:77752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.518 [2024-04-25 03:25:44.137157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.518 [2024-04-25 03:25:44.137172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:77760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.518 [2024-04-25 03:25:44.137185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.518 [2024-04-25 03:25:44.137200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:76808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.518 [2024-04-25 03:25:44.137213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.518 [2024-04-25 03:25:44.137228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:76816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.518 [2024-04-25 03:25:44.137242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.518 [2024-04-25 03:25:44.137257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:76824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.518 [2024-04-25 03:25:44.137270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.518 [2024-04-25 03:25:44.137285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:76832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.518 [2024-04-25 03:25:44.137298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.518 [2024-04-25 03:25:44.137313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:76840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.518 [2024-04-25 03:25:44.137327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.518 [2024-04-25 03:25:44.137342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:76848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.518 [2024-04-25 03:25:44.137355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.518 [2024-04-25 03:25:44.137370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:76856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.518 [2024-04-25 03:25:44.137387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.518 [2024-04-25 03:25:44.137402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:76864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.518 [2024-04-25 03:25:44.137416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.518 [2024-04-25 03:25:44.137431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:76872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.518 [2024-04-25 03:25:44.137444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.518 [2024-04-25 03:25:44.137459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:76880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.518 
[2024-04-25 03:25:44.137473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.518 [2024-04-25 03:25:44.137487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:76888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.518 [2024-04-25 03:25:44.137500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.518 [2024-04-25 03:25:44.137515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:76896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.518 [2024-04-25 03:25:44.137528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.518 [2024-04-25 03:25:44.137543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:76904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.518 [2024-04-25 03:25:44.137556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.518 [2024-04-25 03:25:44.137571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:76912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.518 [2024-04-25 03:25:44.137584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.518 [2024-04-25 03:25:44.137599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:76920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.518 [2024-04-25 03:25:44.137612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.518 [2024-04-25 03:25:44.137633] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:76928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.518 [2024-04-25 03:25:44.137649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.518 [2024-04-25 03:25:44.137664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:76936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.518 [2024-04-25 03:25:44.137677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.518 [2024-04-25 03:25:44.137692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:76944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.518 [2024-04-25 03:25:44.137705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.518 [2024-04-25 03:25:44.137720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:76952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.518 [2024-04-25 03:25:44.137733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.518 [2024-04-25 03:25:44.137753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:76960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.518 [2024-04-25 03:25:44.137767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.518 [2024-04-25 03:25:44.137782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:76968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.518 [2024-04-25 03:25:44.137795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.518 [2024-04-25 03:25:44.137810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:76976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.518 [2024-04-25 03:25:44.137823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.518 [2024-04-25 03:25:44.137838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:76984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.518 [2024-04-25 03:25:44.137852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.518 [2024-04-25 03:25:44.137867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:76992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.518 [2024-04-25 03:25:44.137880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.518 [2024-04-25 03:25:44.137895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:77000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.518 [2024-04-25 03:25:44.137908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.518 [2024-04-25 03:25:44.137923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:77008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.518 [2024-04-25 03:25:44.137936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.518 [2024-04-25 03:25:44.137951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:77016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.518 
[2024-04-25 03:25:44.137964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.518 [2024-04-25 03:25:44.137978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:77024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.518 [2024-04-25 03:25:44.137992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.518 [2024-04-25 03:25:44.138007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:77032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.518 [2024-04-25 03:25:44.138020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.518 [2024-04-25 03:25:44.138034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:77040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.518 [2024-04-25 03:25:44.138048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.518 [2024-04-25 03:25:44.138063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:77048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.518 [2024-04-25 03:25:44.138076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.518 [2024-04-25 03:25:44.138090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:77056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.518 [2024-04-25 03:25:44.138110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.518 [2024-04-25 03:25:44.138126] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:77064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.518 [2024-04-25 03:25:44.138139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.518 [2024-04-25 03:25:44.138154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:77072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.518 [2024-04-25 03:25:44.138168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.518 [2024-04-25 03:25:44.138182] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd50840 is same with the state(5) to be set 00:25:24.518 [2024-04-25 03:25:44.138198] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:24.518 [2024-04-25 03:25:44.138210] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:24.518 [2024-04-25 03:25:44.138221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77080 len:8 PRP1 0x0 PRP2 0x0 00:25:24.518 [2024-04-25 03:25:44.138234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.518 [2024-04-25 03:25:44.138295] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xd50840 was disconnected and freed. reset controller. 
00:25:24.518 [2024-04-25 03:25:44.138313] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:24.519 [2024-04-25 03:25:44.138345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:24.519 [2024-04-25 03:25:44.138362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.519 [2024-04-25 03:25:44.138377] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:24.519 [2024-04-25 03:25:44.138390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.519 [2024-04-25 03:25:44.138403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:24.519 [2024-04-25 03:25:44.138416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.519 [2024-04-25 03:25:44.138429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:24.519 [2024-04-25 03:25:44.138441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.519 [2024-04-25 03:25:44.138454] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:24.519 [2024-04-25 03:25:44.141714] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:24.519 [2024-04-25 03:25:44.141752] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd31f30 (9): Bad file descriptor 00:25:24.519 [2024-04-25 03:25:44.171502] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:24.519 [2024-04-25 03:25:47.698165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:82016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.519 [2024-04-25 03:25:47.698208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [... similar ABORTED - SQ DELETION notice pairs elided (READ lba:82024-82128, qid:1) ...] 00:25:24.519 [2024-04-25 03:25:47.698704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:82136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.519 [2024-04-25 03:25:47.698718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.519 [2024-04-25 03:25:47.698732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:82144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.519 [2024-04-25 03:25:47.698746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.519 [2024-04-25 03:25:47.698761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:82152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.519 [2024-04-25 03:25:47.698774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.519 [2024-04-25 03:25:47.698789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:82160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.519 [2024-04-25 03:25:47.698802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.519 [2024-04-25 03:25:47.698817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:82168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.519 [2024-04-25 03:25:47.698830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.519 [2024-04-25 03:25:47.698844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:82176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.519 [2024-04-25 03:25:47.698858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.519 [2024-04-25 03:25:47.698872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:82184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:24.519 [2024-04-25 03:25:47.698885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.519 [2024-04-25 03:25:47.698900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:82192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.519 [2024-04-25 03:25:47.698920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.519 [2024-04-25 03:25:47.698950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:82200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.519 [2024-04-25 03:25:47.698963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.519 [2024-04-25 03:25:47.698978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:82784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.519 [2024-04-25 03:25:47.698990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.519 [2024-04-25 03:25:47.699005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:82792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.519 [2024-04-25 03:25:47.699017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.519 [2024-04-25 03:25:47.699031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:82800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.519 [2024-04-25 03:25:47.699044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.519 [2024-04-25 03:25:47.699062] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:82808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.519 [2024-04-25 03:25:47.699075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.519 [2024-04-25 03:25:47.699089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:82816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.519 [2024-04-25 03:25:47.699102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.519 [2024-04-25 03:25:47.699117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:82824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.519 [2024-04-25 03:25:47.699130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.519 [2024-04-25 03:25:47.699144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:82832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.519 [2024-04-25 03:25:47.699157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.519 [2024-04-25 03:25:47.699172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:82840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.519 [2024-04-25 03:25:47.699196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.520 [2024-04-25 03:25:47.699211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:82208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.520 [2024-04-25 03:25:47.699224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.520 [2024-04-25 03:25:47.699239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:82216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.520 [2024-04-25 03:25:47.699251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.520 [2024-04-25 03:25:47.699266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:82224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.520 [2024-04-25 03:25:47.699278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.520 [2024-04-25 03:25:47.699293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:82232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.520 [2024-04-25 03:25:47.699306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.520 [2024-04-25 03:25:47.699320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:82240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.520 [2024-04-25 03:25:47.699334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.520 [2024-04-25 03:25:47.699348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:82248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.520 [2024-04-25 03:25:47.699360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.520 [2024-04-25 03:25:47.699375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:82256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:24.520 [2024-04-25 03:25:47.699387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.520 [2024-04-25 03:25:47.699402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:82264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.520 [2024-04-25 03:25:47.699418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.520 [2024-04-25 03:25:47.699433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:82272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.520 [2024-04-25 03:25:47.699447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.520 [2024-04-25 03:25:47.699461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:82280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.520 [2024-04-25 03:25:47.699474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.520 [2024-04-25 03:25:47.699489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:82288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.520 [2024-04-25 03:25:47.699502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.520 [2024-04-25 03:25:47.699516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:82296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.520 [2024-04-25 03:25:47.699529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.520 [2024-04-25 03:25:47.699544] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:82304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.520 [2024-04-25 03:25:47.699557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.520 [2024-04-25 03:25:47.699571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:82312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.520 [2024-04-25 03:25:47.699584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.520 [2024-04-25 03:25:47.699599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:82320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.520 [2024-04-25 03:25:47.699633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.520 [2024-04-25 03:25:47.699656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:82328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.520 [2024-04-25 03:25:47.699678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.520 [2024-04-25 03:25:47.699695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:82336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.520 [2024-04-25 03:25:47.699709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.520 [2024-04-25 03:25:47.699725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:82344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.520 [2024-04-25 03:25:47.699739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.520 [2024-04-25 03:25:47.699754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:82352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.520 [2024-04-25 03:25:47.699767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.520 [2024-04-25 03:25:47.699782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:82360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.520 [2024-04-25 03:25:47.699795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.520 [2024-04-25 03:25:47.699814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:82368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.520 [2024-04-25 03:25:47.699828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.520 [2024-04-25 03:25:47.699843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:82376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.520 [2024-04-25 03:25:47.699856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.520 [2024-04-25 03:25:47.699870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:82384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.520 [2024-04-25 03:25:47.699884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.520 [2024-04-25 03:25:47.699899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:82392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:24.520 [2024-04-25 03:25:47.699921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.520 [2024-04-25 03:25:47.699951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:82400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.520 [2024-04-25 03:25:47.699964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.520 [2024-04-25 03:25:47.699988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:82408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.520 [2024-04-25 03:25:47.700000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.520 [2024-04-25 03:25:47.700015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:82416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.520 [2024-04-25 03:25:47.700027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.520 [2024-04-25 03:25:47.700041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:82424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.520 [2024-04-25 03:25:47.700054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.520 [2024-04-25 03:25:47.700069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:82432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.520 [2024-04-25 03:25:47.700081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.520 [2024-04-25 03:25:47.700095] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:82440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.520 [2024-04-25 03:25:47.700108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.520 [2024-04-25 03:25:47.700122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:82448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.520 [2024-04-25 03:25:47.700135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.520 [2024-04-25 03:25:47.700149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:82456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.520 [2024-04-25 03:25:47.700162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.520 [2024-04-25 03:25:47.700176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:82848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.520 [2024-04-25 03:25:47.700193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.520 [2024-04-25 03:25:47.700209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:82856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.521 [2024-04-25 03:25:47.700222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.521 [2024-04-25 03:25:47.700236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:82864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.521 [2024-04-25 03:25:47.700249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.521 [2024-04-25 03:25:47.700263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:82872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.521 [2024-04-25 03:25:47.700276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.521 [2024-04-25 03:25:47.700290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:82880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.521 [2024-04-25 03:25:47.700303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.521 [2024-04-25 03:25:47.700317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.521 [2024-04-25 03:25:47.700329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.521 [2024-04-25 03:25:47.700344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:82896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.521 [2024-04-25 03:25:47.700357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.521 [2024-04-25 03:25:47.700371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:82904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.521 [2024-04-25 03:25:47.700384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.521 [2024-04-25 03:25:47.700398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:82464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.521 
[2024-04-25 03:25:47.700410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.521 [2024-04-25 03:25:47.700425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:82472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.521 [2024-04-25 03:25:47.700438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.521 [2024-04-25 03:25:47.700452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:82480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.521 [2024-04-25 03:25:47.700465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.521 [2024-04-25 03:25:47.700479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:82488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.521 [2024-04-25 03:25:47.700492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.521 [2024-04-25 03:25:47.700506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:82496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.521 [2024-04-25 03:25:47.700519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.521 [2024-04-25 03:25:47.700540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:82504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.521 [2024-04-25 03:25:47.700554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.521 [2024-04-25 03:25:47.700568] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:82512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.521 [2024-04-25 03:25:47.700581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.521 [2024-04-25 03:25:47.700595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:82520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.521 [2024-04-25 03:25:47.700643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.521 [2024-04-25 03:25:47.700662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:82912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.521 [2024-04-25 03:25:47.700677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.521 [2024-04-25 03:25:47.700692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:82920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.521 [2024-04-25 03:25:47.700706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.521 [2024-04-25 03:25:47.700721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:82928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.521 [2024-04-25 03:25:47.700736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.521 [2024-04-25 03:25:47.700750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.521 [2024-04-25 03:25:47.700764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.521 [2024-04-25 03:25:47.700780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:82944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.521 [2024-04-25 03:25:47.700793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.521 [2024-04-25 03:25:47.700808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:82952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.521 [2024-04-25 03:25:47.700822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.521 [2024-04-25 03:25:47.700836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:82960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.521 [2024-04-25 03:25:47.700850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.521 [2024-04-25 03:25:47.700865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.521 [2024-04-25 03:25:47.700879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.521 [2024-04-25 03:25:47.700894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:82528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.521 [2024-04-25 03:25:47.700907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.521 [2024-04-25 03:25:47.700921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:82536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.521 
[2024-04-25 03:25:47.700936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.521 [2024-04-25 03:25:47.700955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:82544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.521 [2024-04-25 03:25:47.700969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.521 [2024-04-25 03:25:47.700984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:82552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.521 [2024-04-25 03:25:47.700997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.521 [2024-04-25 03:25:47.701013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:82560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.521 [2024-04-25 03:25:47.701026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.521 [2024-04-25 03:25:47.701041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:82568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.521 [2024-04-25 03:25:47.701054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.521 [2024-04-25 03:25:47.701069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:82576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.521 [2024-04-25 03:25:47.701082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.521 [2024-04-25 03:25:47.701097] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:82584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.521 [2024-04-25 03:25:47.701111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.521 [2024-04-25 03:25:47.701126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.521 [2024-04-25 03:25:47.701139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.521 [2024-04-25 03:25:47.701154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:82984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.521 [2024-04-25 03:25:47.701168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.521 [2024-04-25 03:25:47.701182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:82992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.521 [2024-04-25 03:25:47.701196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.521 [2024-04-25 03:25:47.701210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:83000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.521 [2024-04-25 03:25:47.701224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.521 [2024-04-25 03:25:47.701239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:83008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.521 [2024-04-25 03:25:47.701253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.521 [2024-04-25 03:25:47.701268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:83016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.521 [2024-04-25 03:25:47.701281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.521 [2024-04-25 03:25:47.701296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:83024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.521 [2024-04-25 03:25:47.701313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.521 [2024-04-25 03:25:47.701329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:83032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.521 [2024-04-25 03:25:47.701342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.521 [2024-04-25 03:25:47.701357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:82592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.521 [2024-04-25 03:25:47.701371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.521 [2024-04-25 03:25:47.701386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:82600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.522 [2024-04-25 03:25:47.701399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.522 [2024-04-25 03:25:47.701414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:82608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.522 
[2024-04-25 03:25:47.701427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.522 [2024-04-25 03:25:47.701443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:82616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.522 [2024-04-25 03:25:47.701456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.522 [2024-04-25 03:25:47.701471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:82624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.522 [2024-04-25 03:25:47.701484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.522 [2024-04-25 03:25:47.701499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:82632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.522 [2024-04-25 03:25:47.701513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.522 [2024-04-25 03:25:47.701528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:82640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.522 [2024-04-25 03:25:47.701542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.522 [2024-04-25 03:25:47.701557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:82648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.522 [2024-04-25 03:25:47.701571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.522 [2024-04-25 03:25:47.701586] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:82656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.522 [2024-04-25 03:25:47.701600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.522 [2024-04-25 03:25:47.701626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:82664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.522 [2024-04-25 03:25:47.701648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.522 [2024-04-25 03:25:47.701664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.522 [2024-04-25 03:25:47.701677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.522 [2024-04-25 03:25:47.701697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:82680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.522 [2024-04-25 03:25:47.701712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.522 [2024-04-25 03:25:47.701727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:82688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.522 [2024-04-25 03:25:47.701740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.522 [2024-04-25 03:25:47.701756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:82696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.522 [2024-04-25 03:25:47.701769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.522 [2024-04-25 03:25:47.701785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:82704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.522 [2024-04-25 03:25:47.701798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.522 [2024-04-25 03:25:47.701813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:82712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.522 [2024-04-25 03:25:47.701827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.522 [2024-04-25 03:25:47.701841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:82720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.522 [2024-04-25 03:25:47.701855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.522 [2024-04-25 03:25:47.701870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:82728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.522 [2024-04-25 03:25:47.701883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.522 [2024-04-25 03:25:47.701898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:82736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.522 [2024-04-25 03:25:47.701911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.522 [2024-04-25 03:25:47.701930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:82744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:24.522 [2024-04-25 03:25:47.701943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.522 [2024-04-25 03:25:47.701959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:82752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.522 [2024-04-25 03:25:47.701972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.522 [2024-04-25 03:25:47.701987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:82760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.522 [2024-04-25 03:25:47.702001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.522 [2024-04-25 03:25:47.702016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:82768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.522 [2024-04-25 03:25:47.702037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.522 [2024-04-25 03:25:47.702052] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3e420 is same with the state(5) to be set 00:25:24.522 [2024-04-25 03:25:47.702073] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:24.522 [2024-04-25 03:25:47.702084] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:24.522 [2024-04-25 03:25:47.702096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82776 len:8 PRP1 0x0 PRP2 0x0 00:25:24.522 [2024-04-25 03:25:47.702109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.522 [2024-04-25 
03:25:47.702173] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xd3e420 was disconnected and freed. reset controller. 00:25:24.522 [2024-04-25 03:25:47.702191] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:25:24.522 [2024-04-25 03:25:47.702224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:24.522 [2024-04-25 03:25:47.702242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.522 [2024-04-25 03:25:47.702257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:24.522 [2024-04-25 03:25:47.702270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.522 [2024-04-25 03:25:47.702283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:24.522 [2024-04-25 03:25:47.702297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.522 [2024-04-25 03:25:47.702310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:24.522 [2024-04-25 03:25:47.702323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.522 [2024-04-25 03:25:47.702336] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:24.522 [2024-04-25 03:25:47.705596] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:24.522 [2024-04-25 03:25:47.705652] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd31f30 (9): Bad file descriptor 00:25:24.522 [2024-04-25 03:25:47.830880] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:24.522 [2024-04-25 03:25:52.261439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:37128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.522 [2024-04-25 03:25:52.261481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.522 [2024-04-25 03:25:52.261507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:37136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.522 [2024-04-25 03:25:52.261523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.522 [2024-04-25 03:25:52.261538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:37144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.522 [2024-04-25 03:25:52.261551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.522 [2024-04-25 03:25:52.261566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:37152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.522 [2024-04-25 03:25:52.261579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.522 [2024-04-25 03:25:52.261595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:37160 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:25:24.522 [2024-04-25 03:25:52.261623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.522 [2024-04-25 03:25:52.261653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:37168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.522 [2024-04-25 03:25:52.261671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.522 [2024-04-25 03:25:52.261687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:37176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.522 [2024-04-25 03:25:52.261700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.522 [2024-04-25 03:25:52.261715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:37184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.522 [2024-04-25 03:25:52.261728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.522 [2024-04-25 03:25:52.261742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:37192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.522 [2024-04-25 03:25:52.261755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.522 [2024-04-25 03:25:52.261770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:37200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.522 [2024-04-25 03:25:52.261783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.523 [2024-04-25 03:25:52.261798] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:37208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.523 [2024-04-25 03:25:52.261810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.523 [2024-04-25 03:25:52.261825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:37216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.523 [2024-04-25 03:25:52.261838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.523 [2024-04-25 03:25:52.261852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:37224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.523 [2024-04-25 03:25:52.261866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.523 [2024-04-25 03:25:52.261880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:37232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.523 [2024-04-25 03:25:52.261893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.523 [2024-04-25 03:25:52.261908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:37240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.523 [2024-04-25 03:25:52.261936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.523 [2024-04-25 03:25:52.261950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:37248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.523 [2024-04-25 03:25:52.261963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.523 [2024-04-25 03:25:52.261977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:37256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.523 [2024-04-25 03:25:52.261990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.523 [2024-04-25 03:25:52.262005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:37264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.523 [2024-04-25 03:25:52.262022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.523 [2024-04-25 03:25:52.262036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:37272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.523 [2024-04-25 03:25:52.262049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.523 [2024-04-25 03:25:52.262063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:37280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.523 [2024-04-25 03:25:52.262076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.523 [2024-04-25 03:25:52.262090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:37288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.523 [2024-04-25 03:25:52.262103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.523 [2024-04-25 03:25:52.262131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:37296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:24.523 [2024-04-25 03:25:52.262145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.523 [2024-04-25 03:25:52.262160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:37304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.523 [2024-04-25 03:25:52.262173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.523 [2024-04-25 03:25:52.262188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:37312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.523 [2024-04-25 03:25:52.262201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.523 [2024-04-25 03:25:52.262215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:37320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.523 [2024-04-25 03:25:52.262228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.523 [2024-04-25 03:25:52.262243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:37328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.523 [2024-04-25 03:25:52.262256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.523 [2024-04-25 03:25:52.262270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:37336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.523 [2024-04-25 03:25:52.262283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.523 [2024-04-25 03:25:52.262298] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:37344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.523 [2024-04-25 03:25:52.262311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.523 [2024-04-25 03:25:52.262325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:37352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.523 [2024-04-25 03:25:52.262338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.523 [2024-04-25 03:25:52.262352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:37360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.523 [2024-04-25 03:25:52.262365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.523 [2024-04-25 03:25:52.262384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:37368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.523 [2024-04-25 03:25:52.262398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.523 [2024-04-25 03:25:52.262413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:37376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.523 [2024-04-25 03:25:52.262426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.523 [2024-04-25 03:25:52.262440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:37384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.523 [2024-04-25 03:25:52.262454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.523 [2024-04-25 03:25:52.262468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:37392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.523 [2024-04-25 03:25:52.262482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.523 [2024-04-25 03:25:52.262496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:37400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.523 [2024-04-25 03:25:52.262509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.523 [2024-04-25 03:25:52.262524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:37408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.523 [2024-04-25 03:25:52.262537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.523 [2024-04-25 03:25:52.262552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:37416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.523 [2024-04-25 03:25:52.262565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.523 [2024-04-25 03:25:52.262579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:37424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.523 [2024-04-25 03:25:52.262592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.523 [2024-04-25 03:25:52.262607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:37432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:24.523 [2024-04-25 03:25:52.262620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.523 [2024-04-25 03:25:52.262650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:37440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.523 [2024-04-25 03:25:52.262670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.523 [2024-04-25 03:25:52.262686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:37448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.523 [2024-04-25 03:25:52.262699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.523 [2024-04-25 03:25:52.262715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:37456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.523 [2024-04-25 03:25:52.262729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.523 [2024-04-25 03:25:52.262743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:37464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.523 [2024-04-25 03:25:52.262761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.523 [2024-04-25 03:25:52.262776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:37472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.523 [2024-04-25 03:25:52.262789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.523 [2024-04-25 03:25:52.262804] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:37480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.523 [2024-04-25 03:25:52.262817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.523 [2024-04-25 03:25:52.262832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.523 [2024-04-25 03:25:52.262845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.523 [2024-04-25 03:25:52.262859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:37496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.523 [2024-04-25 03:25:52.262880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.523 [2024-04-25 03:25:52.262895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:37504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.523 [2024-04-25 03:25:52.262909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.523 [2024-04-25 03:25:52.262923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:37512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.523 [2024-04-25 03:25:52.262937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.523 [2024-04-25 03:25:52.262951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:37520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.523 [2024-04-25 03:25:52.262964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.524 [2024-04-25 03:25:52.262979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:37528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.524 [2024-04-25 03:25:52.262992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.524 [2024-04-25 03:25:52.263007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:37536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.524 [2024-04-25 03:25:52.263020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.524 [2024-04-25 03:25:52.263034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:37544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.524 [2024-04-25 03:25:52.263047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.524 [2024-04-25 03:25:52.263062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:37552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.524 [2024-04-25 03:25:52.263075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.524 [2024-04-25 03:25:52.263089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:37560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.524 [2024-04-25 03:25:52.263102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.524 [2024-04-25 03:25:52.263120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:37568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.524 
[2024-04-25 03:25:52.263134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:24.524 [2024-04-25 03:25:52.263149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:37576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[... identical WRITE command / ABORTED - SQ DELETION completion notice pairs repeat for each queued I/O, lba:37584 through lba:37832 ...]
00:25:24.525 [2024-04-25 03:25:52.264113] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:24.525 [2024-04-25 03:25:52.264129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37840 len:8 PRP1 0x0 PRP2 0x0
00:25:24.525 [2024-04-25 03:25:52.264143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical aborting queued i/o / Command completed manually / WRITE / ABORTED - SQ DELETION sequences repeat for lba:37848 through lba:38144 ...]
00:25:24.526 [2024-04-25 03:25:52.266074] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xd5e800 was disconnected and freed. reset controller.
00:25:24.526 [2024-04-25 03:25:52.266093] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:25:24.526 [2024-04-25 03:25:52.266126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:24.526 [2024-04-25 03:25:52.266144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:24.526 [2024-04-25 03:25:52.266159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:25:24.526 [2024-04-25 03:25:52.266171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:24.526 [2024-04-25 03:25:52.266185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:25:24.526 [2024-04-25 03:25:52.266197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:24.526 [2024-04-25 03:25:52.266215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
[2024-04-25 03:25:52.266228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:24.526 [2024-04-25 03:25:52.266241] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:24.526 [2024-04-25 03:25:52.269502] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:24.526 [2024-04-25 03:25:52.269542] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd31f30 (9): Bad file descriptor
00:25:24.526 [2024-04-25 03:25:52.315962] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:25:24.526
00:25:24.526 Latency(us)
00:25:24.526 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:24.526 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:25:24.526 Verification LBA range: start 0x0 length 0x4000
00:25:24.526 NVMe0n1 : 15.01 8704.36 34.00 512.52 0.00 13861.08 801.00 16505.36
00:25:24.526 ===================================================================================================================
00:25:24.526 Total : 8704.36 34.00 512.52 0.00 13861.08 801.00 16505.36
00:25:24.526 Received shutdown signal, test time was about 15.000000 seconds
00:25:24.526
00:25:24.527 Latency(us)
00:25:24.527 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:24.527 ===================================================================================================================
00:25:24.527 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:24.527 03:25:58 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:25:24.527 03:25:58 -- host/failover.sh@65 -- # count=3
00:25:24.527 03:25:58 -- host/failover.sh@67 -- # (( count != 3 ))
00:25:24.527 03:25:58 -- host/failover.sh@73 -- # bdevperf_pid=1589125
00:25:24.527 03:25:58 -- host/failover.sh@72 --
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:25:24.527 03:25:58 -- host/failover.sh@75 -- # waitforlisten 1589125 /var/tmp/bdevperf.sock 00:25:24.527 03:25:58 -- common/autotest_common.sh@817 -- # '[' -z 1589125 ']' 00:25:24.527 03:25:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:24.527 03:25:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:24.527 03:25:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:24.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:24.527 03:25:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:24.527 03:25:58 -- common/autotest_common.sh@10 -- # set +x 00:25:24.527 03:25:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:24.527 03:25:58 -- common/autotest_common.sh@850 -- # return 0 00:25:24.527 03:25:58 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:24.527 [2024-04-25 03:25:58.916103] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:24.527 03:25:58 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:24.785 [2024-04-25 03:25:59.152792] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:24.785 03:25:59 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:25.043 NVMe0n1 00:25:25.043 03:25:59 -- 
host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:25.610 00:25:25.610 03:25:59 -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:26.177 00:25:26.177 03:26:00 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:26.177 03:26:00 -- host/failover.sh@82 -- # grep -q NVMe0 00:25:26.177 03:26:00 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:26.435 03:26:00 -- host/failover.sh@87 -- # sleep 3 00:25:29.723 03:26:03 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:29.723 03:26:03 -- host/failover.sh@88 -- # grep -q NVMe0 00:25:29.723 03:26:04 -- host/failover.sh@90 -- # run_test_pid=1589794 00:25:29.723 03:26:04 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:29.723 03:26:04 -- host/failover.sh@92 -- # wait 1589794 00:25:31.100 0 00:25:31.100 03:26:05 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:31.100 [2024-04-25 03:25:58.407717] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 
00:25:31.100 [2024-04-25 03:25:58.407804] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1589125 ] 00:25:31.100 EAL: No free 2048 kB hugepages reported on node 1 00:25:31.100 [2024-04-25 03:25:58.467529] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:31.100 [2024-04-25 03:25:58.570774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:31.100 [2024-04-25 03:26:00.858667] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:31.100 [2024-04-25 03:26:00.858751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:31.100 [2024-04-25 03:26:00.858775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.100 [2024-04-25 03:26:00.858793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:31.100 [2024-04-25 03:26:00.858807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.100 [2024-04-25 03:26:00.858821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:31.100 [2024-04-25 03:26:00.858834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.100 [2024-04-25 03:26:00.858849] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:31.100 [2024-04-25 03:26:00.858862] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.100 [2024-04-25 03:26:00.858876] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:31.100 [2024-04-25 03:26:00.858920] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:31.100 [2024-04-25 03:26:00.858951] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe2ef30 (9): Bad file descriptor 00:25:31.100 [2024-04-25 03:26:00.951206] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:31.100 Running I/O for 1 seconds... 00:25:31.100 00:25:31.100 Latency(us) 00:25:31.100 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:31.100 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:31.100 Verification LBA range: start 0x0 length 0x4000 00:25:31.100 NVMe0n1 : 1.01 8588.20 33.55 0.00 0.00 14839.76 2888.44 12233.39 00:25:31.100 =================================================================================================================== 00:25:31.100 Total : 8588.20 33.55 0.00 0.00 14839.76 2888.44 12233.39 00:25:31.100 03:26:05 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:31.100 03:26:05 -- host/failover.sh@95 -- # grep -q NVMe0 00:25:31.100 03:26:05 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:31.357 03:26:05 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:31.357 03:26:05 -- host/failover.sh@99 -- # grep -q NVMe0 00:25:31.615 03:26:06 -- host/failover.sh@100 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:31.873 03:26:06 -- host/failover.sh@101 -- # sleep 3 00:25:35.160 03:26:09 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:35.160 03:26:09 -- host/failover.sh@103 -- # grep -q NVMe0 00:25:35.160 03:26:09 -- host/failover.sh@108 -- # killprocess 1589125 00:25:35.160 03:26:09 -- common/autotest_common.sh@936 -- # '[' -z 1589125 ']' 00:25:35.160 03:26:09 -- common/autotest_common.sh@940 -- # kill -0 1589125 00:25:35.160 03:26:09 -- common/autotest_common.sh@941 -- # uname 00:25:35.160 03:26:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:35.160 03:26:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1589125 00:25:35.160 03:26:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:35.160 03:26:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:35.160 03:26:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1589125' 00:25:35.160 killing process with pid 1589125 00:25:35.160 03:26:09 -- common/autotest_common.sh@955 -- # kill 1589125 00:25:35.160 03:26:09 -- common/autotest_common.sh@960 -- # wait 1589125 00:25:35.418 03:26:09 -- host/failover.sh@110 -- # sync 00:25:35.418 03:26:09 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:35.994 03:26:10 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:25:35.994 03:26:10 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:35.994 03:26:10 -- host/failover.sh@116 -- # nvmftestfini 00:25:35.994 03:26:10 -- nvmf/common.sh@477 -- # nvmfcleanup 00:25:35.994 03:26:10 -- 
nvmf/common.sh@117 -- # sync 00:25:35.994 03:26:10 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:35.994 03:26:10 -- nvmf/common.sh@120 -- # set +e 00:25:35.994 03:26:10 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:35.994 03:26:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:35.994 rmmod nvme_tcp 00:25:35.994 rmmod nvme_fabrics 00:25:35.994 rmmod nvme_keyring 00:25:35.994 03:26:10 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:35.994 03:26:10 -- nvmf/common.sh@124 -- # set -e 00:25:35.994 03:26:10 -- nvmf/common.sh@125 -- # return 0 00:25:35.994 03:26:10 -- nvmf/common.sh@478 -- # '[' -n 1586981 ']' 00:25:35.994 03:26:10 -- nvmf/common.sh@479 -- # killprocess 1586981 00:25:35.994 03:26:10 -- common/autotest_common.sh@936 -- # '[' -z 1586981 ']' 00:25:35.994 03:26:10 -- common/autotest_common.sh@940 -- # kill -0 1586981 00:25:35.994 03:26:10 -- common/autotest_common.sh@941 -- # uname 00:25:35.994 03:26:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:35.994 03:26:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1586981 00:25:35.994 03:26:10 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:35.994 03:26:10 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:35.994 03:26:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1586981' 00:25:35.994 killing process with pid 1586981 00:25:35.994 03:26:10 -- common/autotest_common.sh@955 -- # kill 1586981 00:25:35.994 03:26:10 -- common/autotest_common.sh@960 -- # wait 1586981 00:25:36.253 03:26:10 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:25:36.253 03:26:10 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:25:36.253 03:26:10 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:25:36.253 03:26:10 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:36.253 03:26:10 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:36.253 03:26:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:25:36.253 03:26:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:36.253 03:26:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:38.160 03:26:12 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:38.160 00:25:38.160 real 0m35.068s 00:25:38.160 user 2m4.074s 00:25:38.160 sys 0m5.667s 00:25:38.160 03:26:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:38.160 03:26:12 -- common/autotest_common.sh@10 -- # set +x 00:25:38.160 ************************************ 00:25:38.160 END TEST nvmf_failover 00:25:38.160 ************************************ 00:25:38.419 03:26:12 -- nvmf/nvmf.sh@99 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:38.419 03:26:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:38.419 03:26:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:38.419 03:26:12 -- common/autotest_common.sh@10 -- # set +x 00:25:38.419 ************************************ 00:25:38.419 START TEST nvmf_discovery 00:25:38.419 ************************************ 00:25:38.419 03:26:12 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:38.419 * Looking for test storage... 
00:25:38.419 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:38.419 03:26:12 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:38.419 03:26:12 -- nvmf/common.sh@7 -- # uname -s 00:25:38.419 03:26:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:38.419 03:26:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:38.419 03:26:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:38.419 03:26:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:38.419 03:26:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:38.419 03:26:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:38.419 03:26:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:38.419 03:26:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:38.419 03:26:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:38.419 03:26:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:38.419 03:26:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:38.419 03:26:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:38.419 03:26:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:38.419 03:26:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:38.419 03:26:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:38.419 03:26:12 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:38.419 03:26:12 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:38.419 03:26:12 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:38.419 03:26:12 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:38.419 03:26:12 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:38.419 03:26:12 -- paths/export.sh@2 -- 
# PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.419 03:26:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.419 03:26:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.419 03:26:12 -- paths/export.sh@5 -- # export PATH 00:25:38.419 03:26:12 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.419 03:26:12 -- nvmf/common.sh@47 -- # : 0 00:25:38.419 03:26:12 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:38.419 03:26:12 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:38.419 03:26:12 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:38.419 03:26:12 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:38.419 03:26:12 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:38.419 03:26:12 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:38.419 03:26:12 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:38.419 03:26:12 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:38.419 03:26:12 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:25:38.419 03:26:12 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:25:38.419 03:26:12 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:38.419 03:26:12 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:38.419 03:26:12 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:38.419 03:26:12 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:38.419 03:26:12 -- host/discovery.sh@25 -- # nvmftestinit 00:25:38.419 03:26:12 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:25:38.419 03:26:12 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:38.419 03:26:12 -- nvmf/common.sh@437 -- # prepare_net_devs 00:25:38.419 03:26:12 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:25:38.419 
03:26:12 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:25:38.419 03:26:12 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:38.419 03:26:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:38.419 03:26:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:38.419 03:26:12 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:25:38.419 03:26:12 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:25:38.419 03:26:12 -- nvmf/common.sh@285 -- # xtrace_disable 00:25:38.419 03:26:12 -- common/autotest_common.sh@10 -- # set +x 00:25:40.327 03:26:14 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:40.327 03:26:14 -- nvmf/common.sh@291 -- # pci_devs=() 00:25:40.327 03:26:14 -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:40.327 03:26:14 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:40.327 03:26:14 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:40.327 03:26:14 -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:40.327 03:26:14 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:40.327 03:26:14 -- nvmf/common.sh@295 -- # net_devs=() 00:25:40.327 03:26:14 -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:40.327 03:26:14 -- nvmf/common.sh@296 -- # e810=() 00:25:40.327 03:26:14 -- nvmf/common.sh@296 -- # local -ga e810 00:25:40.327 03:26:14 -- nvmf/common.sh@297 -- # x722=() 00:25:40.327 03:26:14 -- nvmf/common.sh@297 -- # local -ga x722 00:25:40.327 03:26:14 -- nvmf/common.sh@298 -- # mlx=() 00:25:40.327 03:26:14 -- nvmf/common.sh@298 -- # local -ga mlx 00:25:40.327 03:26:14 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:40.327 03:26:14 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:40.327 03:26:14 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:40.327 03:26:14 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:40.327 03:26:14 -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:40.327 03:26:14 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:40.327 03:26:14 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:40.327 03:26:14 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:40.327 03:26:14 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:40.327 03:26:14 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:40.327 03:26:14 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:40.327 03:26:14 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:40.327 03:26:14 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:40.327 03:26:14 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:40.327 03:26:14 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:40.327 03:26:14 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:40.327 03:26:14 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:40.327 03:26:14 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:40.327 03:26:14 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:40.327 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:40.327 03:26:14 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:40.327 03:26:14 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:40.327 03:26:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:40.327 03:26:14 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:40.327 03:26:14 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:40.327 03:26:14 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:40.327 03:26:14 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:40.327 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:40.327 03:26:14 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:40.327 03:26:14 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:40.327 03:26:14 -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:40.327 03:26:14 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:40.327 03:26:14 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:40.327 03:26:14 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:40.327 03:26:14 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:40.327 03:26:14 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:40.327 03:26:14 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:40.327 03:26:14 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:40.327 03:26:14 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:40.327 03:26:14 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:40.327 03:26:14 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:40.327 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:40.327 03:26:14 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:40.327 03:26:14 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:40.327 03:26:14 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:40.327 03:26:14 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:40.327 03:26:14 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:40.327 03:26:14 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:40.327 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:40.327 03:26:14 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:40.327 03:26:14 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:25:40.327 03:26:14 -- nvmf/common.sh@403 -- # is_hw=yes 00:25:40.327 03:26:14 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:25:40.327 03:26:14 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:25:40.327 03:26:14 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:25:40.327 03:26:14 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:40.328 03:26:14 -- nvmf/common.sh@230 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:40.328 03:26:14 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:40.328 03:26:14 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:40.328 03:26:14 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:40.328 03:26:14 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:40.328 03:26:14 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:40.328 03:26:14 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:40.328 03:26:14 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:40.328 03:26:14 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:40.328 03:26:14 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:40.328 03:26:14 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:40.328 03:26:14 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:40.592 03:26:14 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:40.593 03:26:14 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:40.593 03:26:14 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:40.593 03:26:14 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:40.593 03:26:14 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:40.593 03:26:14 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:40.593 03:26:14 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:40.593 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:40.593 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:25:40.593 00:25:40.593 --- 10.0.0.2 ping statistics --- 00:25:40.593 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:40.593 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:25:40.593 03:26:14 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:40.593 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:40.593 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:25:40.593 00:25:40.593 --- 10.0.0.1 ping statistics --- 00:25:40.593 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:40.593 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:25:40.593 03:26:14 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:40.593 03:26:14 -- nvmf/common.sh@411 -- # return 0 00:25:40.593 03:26:14 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:25:40.593 03:26:14 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:40.593 03:26:14 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:40.593 03:26:14 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:40.593 03:26:14 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:40.593 03:26:14 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:40.593 03:26:14 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:40.593 03:26:14 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:25:40.593 03:26:14 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:40.593 03:26:14 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:40.593 03:26:14 -- common/autotest_common.sh@10 -- # set +x 00:25:40.593 03:26:14 -- nvmf/common.sh@470 -- # nvmfpid=1592513 00:25:40.593 03:26:14 -- nvmf/common.sh@471 -- # waitforlisten 1592513 00:25:40.593 03:26:14 -- common/autotest_common.sh@817 -- # '[' -z 1592513 ']' 00:25:40.593 03:26:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:40.593 03:26:14 -- nvmf/common.sh@469 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:40.593 03:26:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:40.593 03:26:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:40.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:40.593 03:26:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:40.593 03:26:14 -- common/autotest_common.sh@10 -- # set +x 00:25:40.593 [2024-04-25 03:26:14.980593] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:25:40.593 [2024-04-25 03:26:14.980711] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:40.593 EAL: No free 2048 kB hugepages reported on node 1 00:25:40.593 [2024-04-25 03:26:15.050788] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:40.853 [2024-04-25 03:26:15.164512] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:40.853 [2024-04-25 03:26:15.164583] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:40.853 [2024-04-25 03:26:15.164608] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:40.853 [2024-04-25 03:26:15.164622] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:40.853 [2024-04-25 03:26:15.164645] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:40.853 [2024-04-25 03:26:15.164680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:41.787 03:26:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:41.787 03:26:15 -- common/autotest_common.sh@850 -- # return 0 00:25:41.788 03:26:15 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:41.788 03:26:15 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:41.788 03:26:15 -- common/autotest_common.sh@10 -- # set +x 00:25:41.788 03:26:15 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:41.788 03:26:15 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:41.788 03:26:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:41.788 03:26:15 -- common/autotest_common.sh@10 -- # set +x 00:25:41.788 [2024-04-25 03:26:15.997125] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:41.788 03:26:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:41.788 03:26:16 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:25:41.788 03:26:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:41.788 03:26:16 -- common/autotest_common.sh@10 -- # set +x 00:25:41.788 [2024-04-25 03:26:16.005268] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:41.788 03:26:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:41.788 03:26:16 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:41.788 03:26:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:41.788 03:26:16 -- common/autotest_common.sh@10 -- # set +x 00:25:41.788 null0 00:25:41.788 03:26:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:41.788 03:26:16 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:41.788 03:26:16 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:25:41.788 03:26:16 -- common/autotest_common.sh@10 -- # set +x 00:25:41.788 null1 00:25:41.788 03:26:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:41.788 03:26:16 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:41.788 03:26:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:41.788 03:26:16 -- common/autotest_common.sh@10 -- # set +x 00:25:41.788 03:26:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:41.788 03:26:16 -- host/discovery.sh@45 -- # hostpid=1592670 00:25:41.788 03:26:16 -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:41.788 03:26:16 -- host/discovery.sh@46 -- # waitforlisten 1592670 /tmp/host.sock 00:25:41.788 03:26:16 -- common/autotest_common.sh@817 -- # '[' -z 1592670 ']' 00:25:41.788 03:26:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:25:41.788 03:26:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:41.788 03:26:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:41.788 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:41.788 03:26:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:41.788 03:26:16 -- common/autotest_common.sh@10 -- # set +x 00:25:41.788 [2024-04-25 03:26:16.074562] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 
00:25:41.788 [2024-04-25 03:26:16.074664] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1592670 ] 00:25:41.788 EAL: No free 2048 kB hugepages reported on node 1 00:25:41.788 [2024-04-25 03:26:16.133323] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:41.788 [2024-04-25 03:26:16.238177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:42.047 03:26:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:42.047 03:26:16 -- common/autotest_common.sh@850 -- # return 0 00:25:42.047 03:26:16 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:42.047 03:26:16 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:42.047 03:26:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:42.047 03:26:16 -- common/autotest_common.sh@10 -- # set +x 00:25:42.047 03:26:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:42.047 03:26:16 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:42.047 03:26:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:42.047 03:26:16 -- common/autotest_common.sh@10 -- # set +x 00:25:42.047 03:26:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:42.047 03:26:16 -- host/discovery.sh@72 -- # notify_id=0 00:25:42.047 03:26:16 -- host/discovery.sh@83 -- # get_subsystem_names 00:25:42.047 03:26:16 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:42.047 03:26:16 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:42.047 03:26:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:42.047 03:26:16 -- common/autotest_common.sh@10 -- # set +x 00:25:42.047 
03:26:16 -- host/discovery.sh@59 -- # sort 00:25:42.047 03:26:16 -- host/discovery.sh@59 -- # xargs 00:25:42.047 03:26:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:42.047 03:26:16 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:42.047 03:26:16 -- host/discovery.sh@84 -- # get_bdev_list 00:25:42.047 03:26:16 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:42.048 03:26:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:42.048 03:26:16 -- common/autotest_common.sh@10 -- # set +x 00:25:42.048 03:26:16 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:42.048 03:26:16 -- host/discovery.sh@55 -- # sort 00:25:42.048 03:26:16 -- host/discovery.sh@55 -- # xargs 00:25:42.048 03:26:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:42.048 03:26:16 -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:25:42.048 03:26:16 -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:42.048 03:26:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:42.048 03:26:16 -- common/autotest_common.sh@10 -- # set +x 00:25:42.048 03:26:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:42.048 03:26:16 -- host/discovery.sh@87 -- # get_subsystem_names 00:25:42.048 03:26:16 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:42.048 03:26:16 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:42.048 03:26:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:42.048 03:26:16 -- common/autotest_common.sh@10 -- # set +x 00:25:42.048 03:26:16 -- host/discovery.sh@59 -- # sort 00:25:42.048 03:26:16 -- host/discovery.sh@59 -- # xargs 00:25:42.048 03:26:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:42.048 03:26:16 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:25:42.048 03:26:16 -- host/discovery.sh@88 -- # get_bdev_list 00:25:42.048 03:26:16 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:42.048 03:26:16 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:25:42.048 03:26:16 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:42.048 03:26:16 -- common/autotest_common.sh@10 -- # set +x 00:25:42.048 03:26:16 -- host/discovery.sh@55 -- # sort 00:25:42.048 03:26:16 -- host/discovery.sh@55 -- # xargs 00:25:42.048 03:26:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:42.048 03:26:16 -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:25:42.048 03:26:16 -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:25:42.048 03:26:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:42.048 03:26:16 -- common/autotest_common.sh@10 -- # set +x 00:25:42.305 03:26:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:42.305 03:26:16 -- host/discovery.sh@91 -- # get_subsystem_names 00:25:42.305 03:26:16 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:42.305 03:26:16 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:42.305 03:26:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:42.305 03:26:16 -- common/autotest_common.sh@10 -- # set +x 00:25:42.305 03:26:16 -- host/discovery.sh@59 -- # sort 00:25:42.305 03:26:16 -- host/discovery.sh@59 -- # xargs 00:25:42.305 03:26:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:42.305 03:26:16 -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:25:42.305 03:26:16 -- host/discovery.sh@92 -- # get_bdev_list 00:25:42.306 03:26:16 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:42.306 03:26:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:42.306 03:26:16 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:42.306 03:26:16 -- common/autotest_common.sh@10 -- # set +x 00:25:42.306 03:26:16 -- host/discovery.sh@55 -- # sort 00:25:42.306 03:26:16 -- host/discovery.sh@55 -- # xargs 00:25:42.306 03:26:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:42.306 03:26:16 -- host/discovery.sh@92 -- 
# [[ '' == '' ]] 00:25:42.306 03:26:16 -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:42.306 03:26:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:42.306 03:26:16 -- common/autotest_common.sh@10 -- # set +x 00:25:42.306 [2024-04-25 03:26:16.643037] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:42.306 03:26:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:42.306 03:26:16 -- host/discovery.sh@97 -- # get_subsystem_names 00:25:42.306 03:26:16 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:42.306 03:26:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:42.306 03:26:16 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:42.306 03:26:16 -- common/autotest_common.sh@10 -- # set +x 00:25:42.306 03:26:16 -- host/discovery.sh@59 -- # sort 00:25:42.306 03:26:16 -- host/discovery.sh@59 -- # xargs 00:25:42.306 03:26:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:42.306 03:26:16 -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:25:42.306 03:26:16 -- host/discovery.sh@98 -- # get_bdev_list 00:25:42.306 03:26:16 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:42.306 03:26:16 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:42.306 03:26:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:42.306 03:26:16 -- common/autotest_common.sh@10 -- # set +x 00:25:42.306 03:26:16 -- host/discovery.sh@55 -- # sort 00:25:42.306 03:26:16 -- host/discovery.sh@55 -- # xargs 00:25:42.306 03:26:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:42.306 03:26:16 -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:25:42.306 03:26:16 -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:25:42.306 03:26:16 -- host/discovery.sh@79 -- # expected_count=0 00:25:42.306 03:26:16 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count 
&& ((notification_count == expected_count))' 00:25:42.306 03:26:16 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:42.306 03:26:16 -- common/autotest_common.sh@901 -- # local max=10 00:25:42.306 03:26:16 -- common/autotest_common.sh@902 -- # (( max-- )) 00:25:42.306 03:26:16 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:42.306 03:26:16 -- common/autotest_common.sh@903 -- # get_notification_count 00:25:42.306 03:26:16 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:42.306 03:26:16 -- host/discovery.sh@74 -- # jq '. | length' 00:25:42.306 03:26:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:42.306 03:26:16 -- common/autotest_common.sh@10 -- # set +x 00:25:42.306 03:26:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:42.306 03:26:16 -- host/discovery.sh@74 -- # notification_count=0 00:25:42.306 03:26:16 -- host/discovery.sh@75 -- # notify_id=0 00:25:42.306 03:26:16 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:25:42.306 03:26:16 -- common/autotest_common.sh@904 -- # return 0 00:25:42.306 03:26:16 -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:25:42.306 03:26:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:42.306 03:26:16 -- common/autotest_common.sh@10 -- # set +x 00:25:42.306 03:26:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:42.306 03:26:16 -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:42.306 03:26:16 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:42.306 03:26:16 -- common/autotest_common.sh@901 -- # local max=10 00:25:42.306 03:26:16 -- common/autotest_common.sh@902 -- # (( max-- )) 00:25:42.306 03:26:16 -- 
common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:42.306 03:26:16 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:25:42.306 03:26:16 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:42.306 03:26:16 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:42.306 03:26:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:42.306 03:26:16 -- host/discovery.sh@59 -- # sort 00:25:42.306 03:26:16 -- common/autotest_common.sh@10 -- # set +x 00:25:42.306 03:26:16 -- host/discovery.sh@59 -- # xargs 00:25:42.306 03:26:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:42.565 03:26:16 -- common/autotest_common.sh@903 -- # [[ '' == \n\v\m\e\0 ]] 00:25:42.565 03:26:16 -- common/autotest_common.sh@906 -- # sleep 1 00:25:43.134 [2024-04-25 03:26:17.415854] bdev_nvme.c:6919:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:43.134 [2024-04-25 03:26:17.415882] bdev_nvme.c:6999:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:43.134 [2024-04-25 03:26:17.415904] bdev_nvme.c:6882:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:43.134 [2024-04-25 03:26:17.502210] bdev_nvme.c:6848:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:43.135 [2024-04-25 03:26:17.564907] bdev_nvme.c:6738:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:43.135 [2024-04-25 03:26:17.564929] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:43.393 03:26:17 -- common/autotest_common.sh@902 -- # (( max-- )) 00:25:43.393 03:26:17 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:43.393 03:26:17 -- common/autotest_common.sh@903 -- # 
get_subsystem_names 00:25:43.393 03:26:17 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:43.393 03:26:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:43.393 03:26:17 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:43.393 03:26:17 -- common/autotest_common.sh@10 -- # set +x 00:25:43.393 03:26:17 -- host/discovery.sh@59 -- # sort 00:25:43.393 03:26:17 -- host/discovery.sh@59 -- # xargs 00:25:43.393 03:26:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:43.393 03:26:17 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.393 03:26:17 -- common/autotest_common.sh@904 -- # return 0 00:25:43.393 03:26:17 -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:43.393 03:26:17 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:43.393 03:26:17 -- common/autotest_common.sh@901 -- # local max=10 00:25:43.393 03:26:17 -- common/autotest_common.sh@902 -- # (( max-- )) 00:25:43.393 03:26:17 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:25:43.393 03:26:17 -- common/autotest_common.sh@903 -- # get_bdev_list 00:25:43.393 03:26:17 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:43.393 03:26:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:43.393 03:26:17 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:43.393 03:26:17 -- common/autotest_common.sh@10 -- # set +x 00:25:43.393 03:26:17 -- host/discovery.sh@55 -- # sort 00:25:43.393 03:26:17 -- host/discovery.sh@55 -- # xargs 00:25:43.393 03:26:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:43.652 03:26:17 -- common/autotest_common.sh@903 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:25:43.652 03:26:17 -- common/autotest_common.sh@904 -- # return 0 00:25:43.652 03:26:17 -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 
00:25:43.652 03:26:17 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:43.652 03:26:17 -- common/autotest_common.sh@901 -- # local max=10 00:25:43.652 03:26:17 -- common/autotest_common.sh@902 -- # (( max-- )) 00:25:43.652 03:26:17 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:25:43.652 03:26:17 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:25:43.652 03:26:17 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:43.652 03:26:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:43.652 03:26:17 -- common/autotest_common.sh@10 -- # set +x 00:25:43.652 03:26:17 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:43.652 03:26:17 -- host/discovery.sh@63 -- # sort -n 00:25:43.652 03:26:17 -- host/discovery.sh@63 -- # xargs 00:25:43.652 03:26:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:43.652 03:26:17 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0 ]] 00:25:43.652 03:26:17 -- common/autotest_common.sh@904 -- # return 0 00:25:43.652 03:26:17 -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:25:43.652 03:26:17 -- host/discovery.sh@79 -- # expected_count=1 00:25:43.652 03:26:17 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:43.652 03:26:17 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:43.652 03:26:17 -- common/autotest_common.sh@901 -- # local max=10 00:25:43.652 03:26:17 -- common/autotest_common.sh@902 -- # (( max-- )) 00:25:43.652 03:26:17 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:43.652 03:26:17 -- common/autotest_common.sh@903 -- # get_notification_count 00:25:43.652 03:26:17 -- host/discovery.sh@74 
-- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:43.652 03:26:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:43.652 03:26:17 -- common/autotest_common.sh@10 -- # set +x 00:25:43.652 03:26:17 -- host/discovery.sh@74 -- # jq '. | length' 00:25:43.652 03:26:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:43.652 03:26:17 -- host/discovery.sh@74 -- # notification_count=1 00:25:43.652 03:26:17 -- host/discovery.sh@75 -- # notify_id=1 00:25:43.652 03:26:17 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:25:43.652 03:26:17 -- common/autotest_common.sh@904 -- # return 0 00:25:43.652 03:26:17 -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:25:43.652 03:26:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:43.652 03:26:17 -- common/autotest_common.sh@10 -- # set +x 00:25:43.652 03:26:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:43.652 03:26:17 -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:43.652 03:26:17 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:43.652 03:26:17 -- common/autotest_common.sh@901 -- # local max=10 00:25:43.652 03:26:17 -- common/autotest_common.sh@902 -- # (( max-- )) 00:25:43.652 03:26:17 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:43.652 03:26:17 -- common/autotest_common.sh@903 -- # get_bdev_list 00:25:43.652 03:26:17 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:43.652 03:26:17 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:43.652 03:26:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:43.652 03:26:18 -- host/discovery.sh@55 -- # sort 00:25:43.652 03:26:18 -- common/autotest_common.sh@10 -- # set +x 00:25:43.652 03:26:18 -- host/discovery.sh@55 -- # xargs 00:25:43.652 03:26:18 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:43.652 03:26:18 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:43.652 03:26:18 -- common/autotest_common.sh@904 -- # return 0 00:25:43.652 03:26:18 -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:25:43.652 03:26:18 -- host/discovery.sh@79 -- # expected_count=1 00:25:43.652 03:26:18 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:43.652 03:26:18 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:43.652 03:26:18 -- common/autotest_common.sh@901 -- # local max=10 00:25:43.652 03:26:18 -- common/autotest_common.sh@902 -- # (( max-- )) 00:25:43.652 03:26:18 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:43.652 03:26:18 -- common/autotest_common.sh@903 -- # get_notification_count 00:25:43.912 03:26:18 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:25:43.912 03:26:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:43.912 03:26:18 -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:43.912 03:26:18 -- common/autotest_common.sh@10 -- # set +x 00:25:43.913 03:26:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:43.913 03:26:18 -- host/discovery.sh@74 -- # notification_count=1 00:25:43.913 03:26:18 -- host/discovery.sh@75 -- # notify_id=2 00:25:43.913 03:26:18 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:25:43.913 03:26:18 -- common/autotest_common.sh@904 -- # return 0 00:25:43.913 03:26:18 -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:25:43.913 03:26:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:43.913 03:26:18 -- common/autotest_common.sh@10 -- # set +x 00:25:43.913 [2024-04-25 03:26:18.191479] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:43.913 [2024-04-25 03:26:18.192517] bdev_nvme.c:6901:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:43.913 [2024-04-25 03:26:18.192556] bdev_nvme.c:6882:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:43.913 03:26:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:43.913 03:26:18 -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:43.913 03:26:18 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:43.913 03:26:18 -- common/autotest_common.sh@901 -- # local max=10 00:25:43.913 03:26:18 -- common/autotest_common.sh@902 -- # (( max-- )) 00:25:43.913 03:26:18 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:43.913 03:26:18 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:25:43.913 03:26:18 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:43.913 03:26:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:43.913 03:26:18 -- 
common/autotest_common.sh@10 -- # set +x 00:25:43.913 03:26:18 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:43.913 03:26:18 -- host/discovery.sh@59 -- # sort 00:25:43.913 03:26:18 -- host/discovery.sh@59 -- # xargs 00:25:43.913 03:26:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:43.913 03:26:18 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.913 03:26:18 -- common/autotest_common.sh@904 -- # return 0 00:25:43.913 03:26:18 -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:43.913 03:26:18 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:43.913 03:26:18 -- common/autotest_common.sh@901 -- # local max=10 00:25:43.913 03:26:18 -- common/autotest_common.sh@902 -- # (( max-- )) 00:25:43.913 03:26:18 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:43.913 03:26:18 -- common/autotest_common.sh@903 -- # get_bdev_list 00:25:43.913 03:26:18 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:43.913 03:26:18 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:43.913 03:26:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:43.913 03:26:18 -- common/autotest_common.sh@10 -- # set +x 00:25:43.913 03:26:18 -- host/discovery.sh@55 -- # sort 00:25:43.913 03:26:18 -- host/discovery.sh@55 -- # xargs 00:25:43.913 03:26:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:43.913 03:26:18 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:43.913 03:26:18 -- common/autotest_common.sh@904 -- # return 0 00:25:43.913 03:26:18 -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:43.913 03:26:18 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:43.913 
03:26:18 -- common/autotest_common.sh@901 -- # local max=10 00:25:43.913 03:26:18 -- common/autotest_common.sh@902 -- # (( max-- )) 00:25:43.913 03:26:18 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:43.913 03:26:18 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:25:43.913 03:26:18 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:43.913 03:26:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:43.913 03:26:18 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:43.913 03:26:18 -- common/autotest_common.sh@10 -- # set +x 00:25:43.913 03:26:18 -- host/discovery.sh@63 -- # sort -n 00:25:43.913 03:26:18 -- host/discovery.sh@63 -- # xargs 00:25:43.913 03:26:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:43.913 [2024-04-25 03:26:18.320377] bdev_nvme.c:6843:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:25:43.913 03:26:18 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:25:43.913 03:26:18 -- common/autotest_common.sh@906 -- # sleep 1 00:25:44.173 [2024-04-25 03:26:18.419151] bdev_nvme.c:6738:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:44.173 [2024-04-25 03:26:18.419177] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:44.173 [2024-04-25 03:26:18.419188] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:45.114 03:26:19 -- common/autotest_common.sh@902 -- # (( max-- )) 00:25:45.114 03:26:19 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:45.114 03:26:19 -- 
common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:25:45.114 03:26:19 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:45.114 03:26:19 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:45.114 03:26:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:45.114 03:26:19 -- common/autotest_common.sh@10 -- # set +x 00:25:45.114 03:26:19 -- host/discovery.sh@63 -- # sort -n 00:25:45.114 03:26:19 -- host/discovery.sh@63 -- # xargs 00:25:45.114 03:26:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:45.114 03:26:19 -- common/autotest_common.sh@903 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:25:45.114 03:26:19 -- common/autotest_common.sh@904 -- # return 0 00:25:45.114 03:26:19 -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:25:45.114 03:26:19 -- host/discovery.sh@79 -- # expected_count=0 00:25:45.114 03:26:19 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:45.114 03:26:19 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:45.114 03:26:19 -- common/autotest_common.sh@901 -- # local max=10 00:25:45.114 03:26:19 -- common/autotest_common.sh@902 -- # (( max-- )) 00:25:45.114 03:26:19 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:45.114 03:26:19 -- common/autotest_common.sh@903 -- # get_notification_count 00:25:45.114 03:26:19 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:45.114 03:26:19 -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:45.114 03:26:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:45.114 03:26:19 -- common/autotest_common.sh@10 -- # set +x 00:25:45.114 03:26:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:45.114 03:26:19 -- host/discovery.sh@74 -- # notification_count=0 00:25:45.114 03:26:19 -- host/discovery.sh@75 -- # notify_id=2 00:25:45.114 03:26:19 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:25:45.114 03:26:19 -- common/autotest_common.sh@904 -- # return 0 00:25:45.114 03:26:19 -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:45.114 03:26:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:45.114 03:26:19 -- common/autotest_common.sh@10 -- # set +x 00:25:45.114 [2024-04-25 03:26:19.419343] bdev_nvme.c:6901:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:45.114 [2024-04-25 03:26:19.419383] bdev_nvme.c:6882:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:45.114 [2024-04-25 03:26:19.422649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:45.114 [2024-04-25 03:26:19.422704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.114 [2024-04-25 03:26:19.422723] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:45.114 [2024-04-25 03:26:19.422764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.114 [2024-04-25 03:26:19.422779] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:45.114 [2024-04-25 03:26:19.422793] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.114 [2024-04-25 03:26:19.422808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:45.114 [2024-04-25 03:26:19.422823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.114 [2024-04-25 03:26:19.422836] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb370 is same with the state(5) to be set 00:25:45.114 03:26:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:45.114 03:26:19 -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:45.114 03:26:19 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:45.114 03:26:19 -- common/autotest_common.sh@901 -- # local max=10 00:25:45.114 03:26:19 -- common/autotest_common.sh@902 -- # (( max-- )) 00:25:45.114 03:26:19 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:45.114 03:26:19 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:25:45.114 03:26:19 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:45.114 03:26:19 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:45.114 03:26:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:45.114 03:26:19 -- common/autotest_common.sh@10 -- # set +x 00:25:45.114 03:26:19 -- host/discovery.sh@59 -- # sort 00:25:45.114 03:26:19 -- host/discovery.sh@59 -- # xargs 00:25:45.114 [2024-04-25 03:26:19.432653] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecb370 (9): Bad file descriptor 00:25:45.114 03:26:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:45.114 [2024-04-25 03:26:19.442695] 
nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:45.114 [2024-04-25 03:26:19.442965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.114 [2024-04-25 03:26:19.443177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.114 [2024-04-25 03:26:19.443204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecb370 with addr=10.0.0.2, port=4420 00:25:45.114 [2024-04-25 03:26:19.443221] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb370 is same with the state(5) to be set 00:25:45.114 [2024-04-25 03:26:19.443244] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecb370 (9): Bad file descriptor 00:25:45.114 [2024-04-25 03:26:19.443267] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:45.114 [2024-04-25 03:26:19.443282] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:45.114 [2024-04-25 03:26:19.443298] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:45.114 [2024-04-25 03:26:19.443333] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:45.114 [2024-04-25 03:26:19.452775] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:45.114 [2024-04-25 03:26:19.453072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.114 [2024-04-25 03:26:19.453370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.114 [2024-04-25 03:26:19.453396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecb370 with addr=10.0.0.2, port=4420 00:25:45.114 [2024-04-25 03:26:19.453412] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb370 is same with the state(5) to be set 00:25:45.114 [2024-04-25 03:26:19.453433] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecb370 (9): Bad file descriptor 00:25:45.114 [2024-04-25 03:26:19.453491] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:45.114 [2024-04-25 03:26:19.453525] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:45.114 [2024-04-25 03:26:19.453540] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:45.114 [2024-04-25 03:26:19.453559] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:45.114 [2024-04-25 03:26:19.462846] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:45.114 [2024-04-25 03:26:19.463122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.114 [2024-04-25 03:26:19.463341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.114 [2024-04-25 03:26:19.463369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecb370 with addr=10.0.0.2, port=4420 00:25:45.114 [2024-04-25 03:26:19.463385] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb370 is same with the state(5) to be set 00:25:45.115 [2024-04-25 03:26:19.463414] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecb370 (9): Bad file descriptor 00:25:45.115 [2024-04-25 03:26:19.463436] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:45.115 [2024-04-25 03:26:19.463466] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:45.115 [2024-04-25 03:26:19.463480] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:45.115 [2024-04-25 03:26:19.463500] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:45.115 03:26:19 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.115 03:26:19 -- common/autotest_common.sh@904 -- # return 0 00:25:45.115 03:26:19 -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:45.115 03:26:19 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:45.115 03:26:19 -- common/autotest_common.sh@901 -- # local max=10 00:25:45.115 03:26:19 -- common/autotest_common.sh@902 -- # (( max-- )) 00:25:45.115 03:26:19 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:45.115 03:26:19 -- common/autotest_common.sh@903 -- # get_bdev_list 00:25:45.115 03:26:19 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:45.115 03:26:19 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:45.115 03:26:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:45.115 03:26:19 -- common/autotest_common.sh@10 -- # set +x 00:25:45.115 03:26:19 -- host/discovery.sh@55 -- # sort 00:25:45.115 [2024-04-25 03:26:19.472920] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:45.115 [2024-04-25 03:26:19.473170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.115 03:26:19 -- host/discovery.sh@55 -- # xargs 00:25:45.115 [2024-04-25 03:26:19.473400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.115 [2024-04-25 03:26:19.473426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecb370 with addr=10.0.0.2, port=4420 00:25:45.115 [2024-04-25 03:26:19.473445] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb370 is same with the state(5) to be set 00:25:45.115 [2024-04-25 03:26:19.473468] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecb370 (9): Bad file 
descriptor 00:25:45.115 [2024-04-25 03:26:19.473491] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:45.115 [2024-04-25 03:26:19.473505] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:45.115 [2024-04-25 03:26:19.473520] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:45.115 [2024-04-25 03:26:19.473569] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:45.115 [2024-04-25 03:26:19.483009] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:45.115 [2024-04-25 03:26:19.483298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.115 [2024-04-25 03:26:19.483491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.115 [2024-04-25 03:26:19.483517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecb370 with addr=10.0.0.2, port=4420 00:25:45.115 [2024-04-25 03:26:19.483533] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb370 is same with the state(5) to be set 00:25:45.115 [2024-04-25 03:26:19.483556] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecb370 (9): Bad file descriptor 00:25:45.115 [2024-04-25 03:26:19.483590] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:45.115 [2024-04-25 03:26:19.483608] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:45.115 [2024-04-25 03:26:19.483638] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:25:45.115 [2024-04-25 03:26:19.483662] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:45.115 [2024-04-25 03:26:19.493081] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:45.115 [2024-04-25 03:26:19.493369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.115 [2024-04-25 03:26:19.493580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.115 [2024-04-25 03:26:19.493607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecb370 with addr=10.0.0.2, port=4420 00:25:45.115 [2024-04-25 03:26:19.493623] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb370 is same with the state(5) to be set 00:25:45.115 [2024-04-25 03:26:19.493654] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecb370 (9): Bad file descriptor 00:25:45.115 [2024-04-25 03:26:19.493700] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:45.115 [2024-04-25 03:26:19.493720] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:45.115 [2024-04-25 03:26:19.493734] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:45.115 [2024-04-25 03:26:19.493753] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:45.115 03:26:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:45.115 [2024-04-25 03:26:19.503150] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:45.115 [2024-04-25 03:26:19.503417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.115 [2024-04-25 03:26:19.503651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.115 [2024-04-25 03:26:19.503678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecb370 with addr=10.0.0.2, port=4420 00:25:45.115 [2024-04-25 03:26:19.503695] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb370 is same with the state(5) to be set 00:25:45.115 [2024-04-25 03:26:19.503717] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecb370 (9): Bad file descriptor 00:25:45.115 [2024-04-25 03:26:19.503750] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:45.115 [2024-04-25 03:26:19.503769] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:45.115 [2024-04-25 03:26:19.503782] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:45.115 [2024-04-25 03:26:19.503802] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:45.115 [2024-04-25 03:26:19.507757] bdev_nvme.c:6706:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:25:45.115 [2024-04-25 03:26:19.507786] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:45.115 03:26:19 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:45.115 03:26:19 -- common/autotest_common.sh@904 -- # return 0 00:25:45.115 03:26:19 -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:45.115 03:26:19 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:45.115 03:26:19 -- common/autotest_common.sh@901 -- # local max=10 00:25:45.115 03:26:19 -- common/autotest_common.sh@902 -- # (( max-- )) 00:25:45.115 03:26:19 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:45.115 03:26:19 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:25:45.115 03:26:19 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:45.115 03:26:19 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:45.115 03:26:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:45.115 03:26:19 -- host/discovery.sh@63 -- # sort -n 00:25:45.115 03:26:19 -- common/autotest_common.sh@10 -- # set +x 00:25:45.115 03:26:19 -- host/discovery.sh@63 -- # xargs 00:25:45.115 03:26:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:45.115 03:26:19 -- common/autotest_common.sh@903 -- # [[ 4421 == \4\4\2\1 ]] 00:25:45.115 03:26:19 -- common/autotest_common.sh@904 -- # return 0 00:25:45.115 03:26:19 -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:25:45.115 03:26:19 -- host/discovery.sh@79 -- # expected_count=0 
00:25:45.115 03:26:19 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:45.115 03:26:19 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:45.115 03:26:19 -- common/autotest_common.sh@901 -- # local max=10 00:25:45.115 03:26:19 -- common/autotest_common.sh@902 -- # (( max-- )) 00:25:45.115 03:26:19 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:45.115 03:26:19 -- common/autotest_common.sh@903 -- # get_notification_count 00:25:45.115 03:26:19 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:45.115 03:26:19 -- host/discovery.sh@74 -- # jq '. | length' 00:25:45.115 03:26:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:45.115 03:26:19 -- common/autotest_common.sh@10 -- # set +x 00:25:45.115 03:26:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:45.115 03:26:19 -- host/discovery.sh@74 -- # notification_count=0 00:25:45.115 03:26:19 -- host/discovery.sh@75 -- # notify_id=2 00:25:45.115 03:26:19 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:25:45.115 03:26:19 -- common/autotest_common.sh@904 -- # return 0 00:25:45.115 03:26:19 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:25:45.115 03:26:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:45.115 03:26:19 -- common/autotest_common.sh@10 -- # set +x 00:25:45.375 03:26:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:45.375 03:26:19 -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:25:45.375 03:26:19 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:25:45.375 03:26:19 -- common/autotest_common.sh@901 -- # local max=10 00:25:45.375 03:26:19 -- 
common/autotest_common.sh@902 -- # (( max-- )) 00:25:45.375 03:26:19 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:25:45.375 03:26:19 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:25:45.375 03:26:19 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:45.375 03:26:19 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:45.375 03:26:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:45.375 03:26:19 -- common/autotest_common.sh@10 -- # set +x 00:25:45.375 03:26:19 -- host/discovery.sh@59 -- # sort 00:25:45.375 03:26:19 -- host/discovery.sh@59 -- # xargs 00:25:45.375 03:26:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:45.375 03:26:19 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:25:45.375 03:26:19 -- common/autotest_common.sh@904 -- # return 0 00:25:45.375 03:26:19 -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:25:45.375 03:26:19 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:25:45.375 03:26:19 -- common/autotest_common.sh@901 -- # local max=10 00:25:45.375 03:26:19 -- common/autotest_common.sh@902 -- # (( max-- )) 00:25:45.375 03:26:19 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:25:45.375 03:26:19 -- common/autotest_common.sh@903 -- # get_bdev_list 00:25:45.375 03:26:19 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:45.375 03:26:19 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:45.375 03:26:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:45.375 03:26:19 -- host/discovery.sh@55 -- # sort 00:25:45.375 03:26:19 -- common/autotest_common.sh@10 -- # set +x 00:25:45.375 03:26:19 -- host/discovery.sh@55 -- # xargs 00:25:45.375 03:26:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:45.375 03:26:19 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:25:45.375 03:26:19 -- 
common/autotest_common.sh@904 -- # return 0 00:25:45.375 03:26:19 -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:25:45.375 03:26:19 -- host/discovery.sh@79 -- # expected_count=2 00:25:45.375 03:26:19 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:45.375 03:26:19 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:45.375 03:26:19 -- common/autotest_common.sh@901 -- # local max=10 00:25:45.375 03:26:19 -- common/autotest_common.sh@902 -- # (( max-- )) 00:25:45.375 03:26:19 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:45.375 03:26:19 -- common/autotest_common.sh@903 -- # get_notification_count 00:25:45.375 03:26:19 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:45.375 03:26:19 -- host/discovery.sh@74 -- # jq '. | length' 00:25:45.375 03:26:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:45.375 03:26:19 -- common/autotest_common.sh@10 -- # set +x 00:25:45.375 03:26:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:45.375 03:26:19 -- host/discovery.sh@74 -- # notification_count=2 00:25:45.375 03:26:19 -- host/discovery.sh@75 -- # notify_id=4 00:25:45.375 03:26:19 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:25:45.375 03:26:19 -- common/autotest_common.sh@904 -- # return 0 00:25:45.375 03:26:19 -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:45.375 03:26:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:45.375 03:26:19 -- common/autotest_common.sh@10 -- # set +x 00:25:46.314 [2024-04-25 03:26:20.764789] bdev_nvme.c:6919:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:46.315 
[2024-04-25 03:26:20.764833] bdev_nvme.c:6999:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:46.315 [2024-04-25 03:26:20.764857] bdev_nvme.c:6882:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:46.574 [2024-04-25 03:26:20.851127] bdev_nvme.c:6848:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:25:46.574 [2024-04-25 03:26:21.037952] bdev_nvme.c:6738:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:46.574 [2024-04-25 03:26:21.038004] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:46.574 03:26:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:46.574 03:26:21 -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:46.574 03:26:21 -- common/autotest_common.sh@638 -- # local es=0 00:25:46.574 03:26:21 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:46.574 03:26:21 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:25:46.574 03:26:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:46.574 03:26:21 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:25:46.574 03:26:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:46.574 03:26:21 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:46.574 03:26:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:46.574 03:26:21 -- common/autotest_common.sh@10 -- # set +x 00:25:46.574 request: 00:25:46.574 { 
00:25:46.574 "name": "nvme", 00:25:46.574 "trtype": "tcp", 00:25:46.574 "traddr": "10.0.0.2", 00:25:46.574 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:46.574 "adrfam": "ipv4", 00:25:46.574 "trsvcid": "8009", 00:25:46.574 "wait_for_attach": true, 00:25:46.574 "method": "bdev_nvme_start_discovery", 00:25:46.574 "req_id": 1 00:25:46.574 } 00:25:46.574 Got JSON-RPC error response 00:25:46.574 response: 00:25:46.574 { 00:25:46.574 "code": -17, 00:25:46.574 "message": "File exists" 00:25:46.574 } 00:25:46.574 03:26:21 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:25:46.574 03:26:21 -- common/autotest_common.sh@641 -- # es=1 00:25:46.574 03:26:21 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:25:46.574 03:26:21 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:25:46.574 03:26:21 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:25:46.574 03:26:21 -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:25:46.574 03:26:21 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:46.574 03:26:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:46.574 03:26:21 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:46.574 03:26:21 -- common/autotest_common.sh@10 -- # set +x 00:25:46.574 03:26:21 -- host/discovery.sh@67 -- # sort 00:25:46.574 03:26:21 -- host/discovery.sh@67 -- # xargs 00:25:46.574 03:26:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:46.834 03:26:21 -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:25:46.834 03:26:21 -- host/discovery.sh@146 -- # get_bdev_list 00:25:46.834 03:26:21 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:46.834 03:26:21 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:46.834 03:26:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:46.834 03:26:21 -- common/autotest_common.sh@10 -- # set +x 00:25:46.834 03:26:21 -- host/discovery.sh@55 -- # sort 00:25:46.834 03:26:21 -- host/discovery.sh@55 -- # xargs 00:25:46.834 
03:26:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:46.834 03:26:21 -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:46.834 03:26:21 -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:46.834 03:26:21 -- common/autotest_common.sh@638 -- # local es=0 00:25:46.834 03:26:21 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:46.834 03:26:21 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:25:46.834 03:26:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:46.834 03:26:21 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:25:46.834 03:26:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:46.835 03:26:21 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:46.835 03:26:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:46.835 03:26:21 -- common/autotest_common.sh@10 -- # set +x 00:25:46.835 request: 00:25:46.835 { 00:25:46.835 "name": "nvme_second", 00:25:46.835 "trtype": "tcp", 00:25:46.835 "traddr": "10.0.0.2", 00:25:46.835 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:46.835 "adrfam": "ipv4", 00:25:46.835 "trsvcid": "8009", 00:25:46.835 "wait_for_attach": true, 00:25:46.835 "method": "bdev_nvme_start_discovery", 00:25:46.835 "req_id": 1 00:25:46.835 } 00:25:46.835 Got JSON-RPC error response 00:25:46.835 response: 00:25:46.835 { 00:25:46.835 "code": -17, 00:25:46.835 "message": "File exists" 00:25:46.835 } 00:25:46.835 03:26:21 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:25:46.835 03:26:21 -- common/autotest_common.sh@641 -- # es=1 00:25:46.835 
03:26:21 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:25:46.835 03:26:21 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:25:46.835 03:26:21 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:25:46.835 03:26:21 -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:25:46.835 03:26:21 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:46.835 03:26:21 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:46.835 03:26:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:46.835 03:26:21 -- common/autotest_common.sh@10 -- # set +x 00:25:46.835 03:26:21 -- host/discovery.sh@67 -- # sort 00:25:46.835 03:26:21 -- host/discovery.sh@67 -- # xargs 00:25:46.835 03:26:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:46.835 03:26:21 -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:25:46.835 03:26:21 -- host/discovery.sh@152 -- # get_bdev_list 00:25:46.835 03:26:21 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:46.835 03:26:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:46.835 03:26:21 -- common/autotest_common.sh@10 -- # set +x 00:25:46.835 03:26:21 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:46.835 03:26:21 -- host/discovery.sh@55 -- # sort 00:25:46.835 03:26:21 -- host/discovery.sh@55 -- # xargs 00:25:46.835 03:26:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:46.835 03:26:21 -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:46.835 03:26:21 -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:46.835 03:26:21 -- common/autotest_common.sh@638 -- # local es=0 00:25:46.835 03:26:21 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 
00:25:46.835 03:26:21 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:25:46.835 03:26:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:46.835 03:26:21 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:25:46.835 03:26:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:46.835 03:26:21 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:46.835 03:26:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:46.835 03:26:21 -- common/autotest_common.sh@10 -- # set +x 00:25:47.773 [2024-04-25 03:26:22.241526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.773 [2024-04-25 03:26:22.241828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.773 [2024-04-25 03:26:22.241857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7590 with addr=10.0.0.2, port=8010 00:25:47.773 [2024-04-25 03:26:22.241895] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:47.773 [2024-04-25 03:26:22.241912] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:47.773 [2024-04-25 03:26:22.241926] bdev_nvme.c:6981:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:49.154 [2024-04-25 03:26:23.243898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.154 [2024-04-25 03:26:23.244127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.155 [2024-04-25 03:26:23.244170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7590 with addr=10.0.0.2, port=8010 00:25:49.155 [2024-04-25 03:26:23.244193] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:49.155 [2024-04-25 03:26:23.244208] 
nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:49.155 [2024-04-25 03:26:23.244225] bdev_nvme.c:6981:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:50.094 [2024-04-25 03:26:24.246060] bdev_nvme.c:6962:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:25:50.094 request: 00:25:50.094 { 00:25:50.094 "name": "nvme_second", 00:25:50.094 "trtype": "tcp", 00:25:50.094 "traddr": "10.0.0.2", 00:25:50.094 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:50.094 "adrfam": "ipv4", 00:25:50.094 "trsvcid": "8010", 00:25:50.094 "attach_timeout_ms": 3000, 00:25:50.094 "method": "bdev_nvme_start_discovery", 00:25:50.094 "req_id": 1 00:25:50.094 } 00:25:50.094 Got JSON-RPC error response 00:25:50.094 response: 00:25:50.094 { 00:25:50.094 "code": -110, 00:25:50.094 "message": "Connection timed out" 00:25:50.094 } 00:25:50.094 03:26:24 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:25:50.094 03:26:24 -- common/autotest_common.sh@641 -- # es=1 00:25:50.094 03:26:24 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:25:50.094 03:26:24 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:25:50.094 03:26:24 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:25:50.094 03:26:24 -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:25:50.094 03:26:24 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:50.094 03:26:24 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:50.094 03:26:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:50.094 03:26:24 -- common/autotest_common.sh@10 -- # set +x 00:25:50.094 03:26:24 -- host/discovery.sh@67 -- # sort 00:25:50.094 03:26:24 -- host/discovery.sh@67 -- # xargs 00:25:50.094 03:26:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:50.094 03:26:24 -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:25:50.094 03:26:24 -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM 
EXIT 00:25:50.094 03:26:24 -- host/discovery.sh@161 -- # kill 1592670 00:25:50.094 03:26:24 -- host/discovery.sh@162 -- # nvmftestfini 00:25:50.094 03:26:24 -- nvmf/common.sh@477 -- # nvmfcleanup 00:25:50.094 03:26:24 -- nvmf/common.sh@117 -- # sync 00:25:50.094 03:26:24 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:50.094 03:26:24 -- nvmf/common.sh@120 -- # set +e 00:25:50.094 03:26:24 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:50.094 03:26:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:50.094 rmmod nvme_tcp 00:25:50.094 rmmod nvme_fabrics 00:25:50.094 rmmod nvme_keyring 00:25:50.094 03:26:24 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:50.094 03:26:24 -- nvmf/common.sh@124 -- # set -e 00:25:50.094 03:26:24 -- nvmf/common.sh@125 -- # return 0 00:25:50.094 03:26:24 -- nvmf/common.sh@478 -- # '[' -n 1592513 ']' 00:25:50.094 03:26:24 -- nvmf/common.sh@479 -- # killprocess 1592513 00:25:50.094 03:26:24 -- common/autotest_common.sh@936 -- # '[' -z 1592513 ']' 00:25:50.094 03:26:24 -- common/autotest_common.sh@940 -- # kill -0 1592513 00:25:50.094 03:26:24 -- common/autotest_common.sh@941 -- # uname 00:25:50.094 03:26:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:50.094 03:26:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1592513 00:25:50.094 03:26:24 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:50.094 03:26:24 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:50.094 03:26:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1592513' 00:25:50.094 killing process with pid 1592513 00:25:50.094 03:26:24 -- common/autotest_common.sh@955 -- # kill 1592513 00:25:50.094 03:26:24 -- common/autotest_common.sh@960 -- # wait 1592513 00:25:50.353 03:26:24 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:25:50.353 03:26:24 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:25:50.353 03:26:24 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 
00:25:50.353 03:26:24 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:50.353 03:26:24 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:50.353 03:26:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:50.353 03:26:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:50.353 03:26:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:52.258 03:26:26 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:52.258 00:25:52.258 real 0m13.940s 00:25:52.258 user 0m20.158s 00:25:52.258 sys 0m2.807s 00:25:52.258 03:26:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:52.258 03:26:26 -- common/autotest_common.sh@10 -- # set +x 00:25:52.258 ************************************ 00:25:52.258 END TEST nvmf_discovery 00:25:52.258 ************************************ 00:25:52.258 03:26:26 -- nvmf/nvmf.sh@100 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:52.258 03:26:26 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:52.258 03:26:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:52.258 03:26:26 -- common/autotest_common.sh@10 -- # set +x 00:25:52.518 ************************************ 00:25:52.518 START TEST nvmf_discovery_remove_ifc 00:25:52.518 ************************************ 00:25:52.518 03:26:26 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:52.519 * Looking for test storage... 
00:25:52.519 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:52.519 03:26:26 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:52.519 03:26:26 -- nvmf/common.sh@7 -- # uname -s 00:25:52.519 03:26:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:52.519 03:26:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:52.519 03:26:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:52.519 03:26:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:52.519 03:26:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:52.519 03:26:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:52.519 03:26:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:52.519 03:26:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:52.519 03:26:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:52.519 03:26:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:52.519 03:26:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:52.519 03:26:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:52.519 03:26:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:52.519 03:26:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:52.519 03:26:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:52.519 03:26:26 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:52.519 03:26:26 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:52.519 03:26:26 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:52.519 03:26:26 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:52.519 03:26:26 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:52.519 03:26:26 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.519 03:26:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.519 03:26:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.519 03:26:26 -- paths/export.sh@5 -- # export PATH 00:25:52.519 03:26:26 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.519 03:26:26 -- nvmf/common.sh@47 -- # : 0 00:25:52.519 03:26:26 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:52.519 03:26:26 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:52.519 03:26:26 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:52.519 03:26:26 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:52.519 03:26:26 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:52.519 03:26:26 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:52.519 03:26:26 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:52.519 03:26:26 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:52.519 03:26:26 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:25:52.519 03:26:26 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:25:52.519 03:26:26 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:25:52.519 03:26:26 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:25:52.519 03:26:26 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:25:52.519 03:26:26 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:25:52.519 03:26:26 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:25:52.519 03:26:26 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:25:52.519 03:26:26 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:52.519 03:26:26 -- nvmf/common.sh@437 -- # prepare_net_devs 
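The `build_nvmf_app_args` lines above show `nvmf/common.sh` assembling the target's command line by conditionally appending to a bash array. A minimal standalone sketch of that pattern (simplified: the `NO_HUGE` handling and iso/non-iso branches from the real helper are omitted, and the optional `-r` branch here is illustrative):

```shell
#!/usr/bin/env bash
# Sketch of the conditional argument-array pattern from
# build_nvmf_app_args in nvmf/common.sh (simplified).
NVMF_APP=(./nvmf_tgt)
NVMF_APP_SHM_ID=0

build_nvmf_app_args() {
    # Always pass the shared-memory id and the trace mask, as in the log.
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
    # Optional flags are appended the same way, guarded by a test.
    if [[ -n ${NVMF_RPC_ADDR:-} ]]; then
        NVMF_APP+=(-r "$NVMF_RPC_ADDR")
    fi
}

build_nvmf_app_args
echo "${NVMF_APP[@]}"    # -> ./nvmf_tgt -i 0 -e 0xFFFF
```

Keeping the arguments in an array (rather than a flat string) preserves word boundaries when the app is finally launched as `"${NVMF_APP[@]}"`.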
00:25:52.519 03:26:26 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:25:52.519 03:26:26 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:25:52.519 03:26:26 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:52.519 03:26:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:52.519 03:26:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:52.519 03:26:26 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:25:52.519 03:26:26 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:25:52.519 03:26:26 -- nvmf/common.sh@285 -- # xtrace_disable 00:25:52.519 03:26:26 -- common/autotest_common.sh@10 -- # set +x 00:25:55.054 03:26:28 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:55.054 03:26:28 -- nvmf/common.sh@291 -- # pci_devs=() 00:25:55.054 03:26:28 -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:55.054 03:26:28 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:55.054 03:26:28 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:55.054 03:26:28 -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:55.054 03:26:28 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:55.054 03:26:28 -- nvmf/common.sh@295 -- # net_devs=() 00:25:55.054 03:26:28 -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:55.054 03:26:28 -- nvmf/common.sh@296 -- # e810=() 00:25:55.054 03:26:28 -- nvmf/common.sh@296 -- # local -ga e810 00:25:55.054 03:26:28 -- nvmf/common.sh@297 -- # x722=() 00:25:55.054 03:26:28 -- nvmf/common.sh@297 -- # local -ga x722 00:25:55.054 03:26:28 -- nvmf/common.sh@298 -- # mlx=() 00:25:55.054 03:26:28 -- nvmf/common.sh@298 -- # local -ga mlx 00:25:55.054 03:26:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:55.054 03:26:28 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:55.054 03:26:28 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:55.054 03:26:28 -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:55.054 03:26:28 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:55.054 03:26:28 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:55.054 03:26:28 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:55.054 03:26:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:55.054 03:26:28 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:55.054 03:26:28 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:55.054 03:26:28 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:55.054 03:26:28 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:55.054 03:26:28 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:55.054 03:26:28 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:55.054 03:26:28 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:55.054 03:26:28 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:55.054 03:26:28 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:55.054 03:26:28 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:55.054 03:26:28 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:55.054 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:55.054 03:26:28 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:55.054 03:26:28 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:55.054 03:26:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:55.054 03:26:28 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:55.054 03:26:28 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:55.054 03:26:28 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:55.054 03:26:28 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:55.054 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:55.054 03:26:28 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:55.054 
03:26:28 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:55.054 03:26:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:55.054 03:26:28 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:55.054 03:26:28 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:55.054 03:26:28 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:55.054 03:26:28 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:55.054 03:26:28 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:55.054 03:26:28 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:55.054 03:26:28 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:55.054 03:26:28 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:55.054 03:26:28 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:55.054 03:26:28 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:55.054 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:55.054 03:26:28 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:55.054 03:26:28 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:55.054 03:26:28 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:55.054 03:26:28 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:55.054 03:26:28 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:55.054 03:26:28 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:55.054 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:55.054 03:26:28 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:55.054 03:26:28 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:25:55.054 03:26:28 -- nvmf/common.sh@403 -- # is_hw=yes 00:25:55.054 03:26:28 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:25:55.054 03:26:28 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:25:55.054 03:26:28 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:25:55.054 03:26:28 -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:25:55.054 03:26:28 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:55.054 03:26:28 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:55.054 03:26:28 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:55.054 03:26:28 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:55.054 03:26:28 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:55.054 03:26:28 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:55.054 03:26:28 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:55.054 03:26:28 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:55.054 03:26:28 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:55.054 03:26:28 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:55.054 03:26:28 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:55.054 03:26:28 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:55.054 03:26:29 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:55.054 03:26:29 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:55.054 03:26:29 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:55.054 03:26:29 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:55.054 03:26:29 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:55.054 03:26:29 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:55.054 03:26:29 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:55.054 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:55.054 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:25:55.054 00:25:55.054 --- 10.0.0.2 ping statistics --- 00:25:55.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:55.054 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:25:55.054 03:26:29 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:55.054 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:55.054 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:25:55.054 00:25:55.054 --- 10.0.0.1 ping statistics --- 00:25:55.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:55.054 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:25:55.054 03:26:29 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:55.054 03:26:29 -- nvmf/common.sh@411 -- # return 0 00:25:55.054 03:26:29 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:25:55.054 03:26:29 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:55.054 03:26:29 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:55.054 03:26:29 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:55.054 03:26:29 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:55.054 03:26:29 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:55.054 03:26:29 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:55.054 03:26:29 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:25:55.055 03:26:29 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:55.055 03:26:29 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:55.055 03:26:29 -- common/autotest_common.sh@10 -- # set +x 00:25:55.055 03:26:29 -- nvmf/common.sh@470 -- # nvmfpid=1595702 00:25:55.055 03:26:29 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:55.055 03:26:29 -- nvmf/common.sh@471 -- # waitforlisten 1595702 00:25:55.055 03:26:29 -- 
common/autotest_common.sh@817 -- # '[' -z 1595702 ']' 00:25:55.055 03:26:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:55.055 03:26:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:55.055 03:26:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:55.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:55.055 03:26:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:55.055 03:26:29 -- common/autotest_common.sh@10 -- # set +x 00:25:55.055 [2024-04-25 03:26:29.150284] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:25:55.055 [2024-04-25 03:26:29.150372] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:55.055 EAL: No free 2048 kB hugepages reported on node 1 00:25:55.055 [2024-04-25 03:26:29.223576] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:55.055 [2024-04-25 03:26:29.337522] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:55.055 [2024-04-25 03:26:29.337595] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:55.055 [2024-04-25 03:26:29.337611] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:55.055 [2024-04-25 03:26:29.337624] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:55.055 [2024-04-25 03:26:29.337648] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
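`waitforlisten` above blocks until the freshly started `nvmf_tgt` is reachable on its RPC socket, retrying up to `max_retries` times. A rough approximation of that loop (the real helper in `autotest_common.sh` also probes the socket with `rpc.py`; here a plain existence check on the path stands in for that, and the `_sketch` name is not from the source):

```shell
#!/usr/bin/env bash
# Sketch of the waitforlisten retry loop: poll for the target's RPC
# Unix socket, giving up after max_retries attempts. The existence
# check is a stand-in for the real rpc.py probe.
waitforlisten_sketch() {
    local rpc_addr=$1 max_retries=${2:-100} i=0
    while ((i++ < max_retries)); do
        if [[ -S $rpc_addr || -e $rpc_addr ]]; then
            return 0
        fi
        sleep 0.1
    done
    return 1    # timed out, like the attach/connect timeouts in the log
}

sockfile=$(mktemp)                         # stand-in for /var/tmp/spdk.sock
waitforlisten_sketch "$sockfile" 5 && echo "target is listening"
```

The nonzero return on timeout is what lets the surrounding test trap the failure and run its cleanup path instead of hanging forever.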
00:25:55.055 [2024-04-25 03:26:29.337697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:55.621 03:26:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:55.621 03:26:30 -- common/autotest_common.sh@850 -- # return 0 00:25:55.621 03:26:30 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:55.621 03:26:30 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:55.621 03:26:30 -- common/autotest_common.sh@10 -- # set +x 00:25:55.621 03:26:30 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:55.621 03:26:30 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:25:55.621 03:26:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:55.621 03:26:30 -- common/autotest_common.sh@10 -- # set +x 00:25:55.621 [2024-04-25 03:26:30.113336] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:55.879 [2024-04-25 03:26:30.121516] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:55.879 null0 00:25:55.879 [2024-04-25 03:26:30.153470] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:55.879 03:26:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:55.879 03:26:30 -- host/discovery_remove_ifc.sh@59 -- # hostpid=1595852 00:25:55.879 03:26:30 -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:25:55.879 03:26:30 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1595852 /tmp/host.sock 00:25:55.879 03:26:30 -- common/autotest_common.sh@817 -- # '[' -z 1595852 ']' 00:25:55.879 03:26:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:25:55.879 03:26:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:55.879 03:26:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /tmp/host.sock...' 00:25:55.879 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:55.879 03:26:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:55.879 03:26:30 -- common/autotest_common.sh@10 -- # set +x 00:25:55.880 [2024-04-25 03:26:30.215261] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:25:55.880 [2024-04-25 03:26:30.215324] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1595852 ] 00:25:55.880 EAL: No free 2048 kB hugepages reported on node 1 00:25:55.880 [2024-04-25 03:26:30.278230] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:56.138 [2024-04-25 03:26:30.393885] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:56.138 03:26:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:56.138 03:26:30 -- common/autotest_common.sh@850 -- # return 0 00:25:56.138 03:26:30 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:56.138 03:26:30 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:25:56.138 03:26:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:56.138 03:26:30 -- common/autotest_common.sh@10 -- # set +x 00:25:56.138 03:26:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:56.138 03:26:30 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:25:56.138 03:26:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:56.138 03:26:30 -- common/autotest_common.sh@10 -- # set +x 00:25:56.138 03:26:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:56.138 03:26:30 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock 
bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:25:56.138 03:26:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:56.138 03:26:30 -- common/autotest_common.sh@10 -- # set +x 00:25:57.512 [2024-04-25 03:26:31.592972] bdev_nvme.c:6919:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:57.512 [2024-04-25 03:26:31.593005] bdev_nvme.c:6999:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:57.512 [2024-04-25 03:26:31.593046] bdev_nvme.c:6882:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:57.512 [2024-04-25 03:26:31.719468] bdev_nvme.c:6848:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:57.512 [2024-04-25 03:26:31.782351] bdev_nvme.c:7709:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:57.512 [2024-04-25 03:26:31.782424] bdev_nvme.c:7709:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:57.512 [2024-04-25 03:26:31.782469] bdev_nvme.c:7709:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:57.512 [2024-04-25 03:26:31.782494] bdev_nvme.c:6738:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:57.512 [2024-04-25 03:26:31.782532] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:57.512 03:26:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:57.512 03:26:31 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:25:57.512 03:26:31 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:57.512 03:26:31 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:57.512 03:26:31 -- host/discovery_remove_ifc.sh@29 -- # 
jq -r '.[].name' 00:25:57.512 03:26:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:57.512 03:26:31 -- common/autotest_common.sh@10 -- # set +x 00:25:57.512 03:26:31 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:57.512 03:26:31 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:57.512 03:26:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:57.512 03:26:31 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:25:57.512 03:26:31 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:25:57.512 03:26:31 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:25:57.512 [2024-04-25 03:26:31.830423] bdev_nvme.c:1605:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1100bd0 was disconnected and freed. delete nvme_qpair. 00:25:57.512 03:26:31 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:25:57.512 03:26:31 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:57.512 03:26:31 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:57.512 03:26:31 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:57.512 03:26:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:57.512 03:26:31 -- common/autotest_common.sh@10 -- # set +x 00:25:57.512 03:26:31 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:57.512 03:26:31 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:57.512 03:26:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:57.512 03:26:31 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:57.512 03:26:31 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:58.446 03:26:32 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:58.446 03:26:32 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:58.446 03:26:32 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:58.446 03:26:32 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:25:58.446 03:26:32 -- common/autotest_common.sh@10 -- # set +x 00:25:58.446 03:26:32 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:58.446 03:26:32 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:58.446 03:26:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:58.703 03:26:32 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:58.703 03:26:32 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:59.635 03:26:33 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:59.635 03:26:33 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:59.635 03:26:33 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:59.635 03:26:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:59.635 03:26:33 -- common/autotest_common.sh@10 -- # set +x 00:25:59.635 03:26:33 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:59.635 03:26:33 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:59.635 03:26:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:59.635 03:26:34 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:59.635 03:26:34 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:00.568 03:26:35 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:00.568 03:26:35 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:00.568 03:26:35 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:00.568 03:26:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:00.568 03:26:35 -- common/autotest_common.sh@10 -- # set +x 00:26:00.568 03:26:35 -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:00.568 03:26:35 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:00.568 03:26:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:00.568 03:26:35 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:00.568 03:26:35 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 
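Each `wait_for_bdev` iteration above calls `get_bdev_list`, which normalizes the `bdev_get_bdevs` RPC output with `jq -r '.[].name' | sort | xargs` before comparing it to the expected name. A jq-free approximation of that normalization (grep/cut substituted for jq, which is an assumption about available tools, not what the suite uses):

```shell
# Approximate get_bdev_list's normalization: extract every "name" field
# from the bdev_get_bdevs JSON, sort, and flatten to one space-separated
# line -- the shape that wait_for_bdev's [[ ... != ... ]] compares.
get_bdev_names() {
    grep -o '"name": *"[^"]*"' | cut -d'"' -f4 | sort | xargs
}

sample='[{"name": "nvme0n1", "block_size": 512}, {"name": "malloc0"}]'
echo "$sample" | get_bdev_names    # -> malloc0 nvme0n1
```

The `sort | xargs` step makes the comparison order-independent, so the poll loop exits as soon as the expected bdev (here `nvme0n1`) shows up in the list, regardless of what else is registered.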
00:26:01.944 03:26:36 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:01.944 03:26:36 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:01.944 03:26:36 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:01.944 03:26:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:01.944 03:26:36 -- common/autotest_common.sh@10 -- # set +x 00:26:01.944 03:26:36 -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:01.944 03:26:36 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:01.944 03:26:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:01.944 03:26:36 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:01.944 03:26:36 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:02.878 03:26:37 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:02.878 03:26:37 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:02.878 03:26:37 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:02.878 03:26:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:02.878 03:26:37 -- common/autotest_common.sh@10 -- # set +x 00:26:02.878 03:26:37 -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:02.878 03:26:37 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:02.878 03:26:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:02.878 03:26:37 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:02.878 03:26:37 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:02.878 [2024-04-25 03:26:37.223186] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:26:02.878 [2024-04-25 03:26:37.223260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:02.878 [2024-04-25 03:26:37.223296] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.878 [2024-04-25 03:26:37.223316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:02.878 [2024-04-25 03:26:37.223331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.878 [2024-04-25 03:26:37.223347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:02.878 [2024-04-25 03:26:37.223368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.878 [2024-04-25 03:26:37.223383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:02.878 [2024-04-25 03:26:37.223397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.878 [2024-04-25 03:26:37.223420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:02.878 [2024-04-25 03:26:37.223437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.878 [2024-04-25 03:26:37.223452] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10c7060 is same with the state(5) to be set 00:26:02.878 [2024-04-25 03:26:37.233206] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10c7060 (9): Bad file descriptor 00:26:02.878 [2024-04-25 03:26:37.243256] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:03.810 03:26:38 -- host/discovery_remove_ifc.sh@33 -- # 
get_bdev_list 00:26:03.810 03:26:38 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:03.810 03:26:38 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:03.810 03:26:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:03.810 03:26:38 -- common/autotest_common.sh@10 -- # set +x 00:26:03.810 03:26:38 -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:03.810 03:26:38 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:03.810 [2024-04-25 03:26:38.285682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:05.184 [2024-04-25 03:26:39.309703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:05.184 [2024-04-25 03:26:39.309760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10c7060 with addr=10.0.0.2, port=4420 00:26:05.184 [2024-04-25 03:26:39.309788] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10c7060 is same with the state(5) to be set 00:26:05.184 [2024-04-25 03:26:39.310288] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10c7060 (9): Bad file descriptor 00:26:05.184 [2024-04-25 03:26:39.310333] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:05.184 [2024-04-25 03:26:39.310373] bdev_nvme.c:6670:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:26:05.184 [2024-04-25 03:26:39.310416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:05.184 [2024-04-25 03:26:39.310440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.184 [2024-04-25 03:26:39.310463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:05.184 [2024-04-25 03:26:39.310478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.184 [2024-04-25 03:26:39.310494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:05.184 [2024-04-25 03:26:39.310510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.184 [2024-04-25 03:26:39.310525] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:05.184 [2024-04-25 03:26:39.310540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.184 [2024-04-25 03:26:39.310556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:05.184 [2024-04-25 03:26:39.310570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.184 [2024-04-25 03:26:39.310586] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: 
[nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:26:05.184 [2024-04-25 03:26:39.310827] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10c7470 (9): Bad file descriptor 00:26:05.184 [2024-04-25 03:26:39.311849] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:26:05.184 [2024-04-25 03:26:39.311873] nvme_ctrlr.c:1148:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:26:05.184 03:26:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:05.184 03:26:39 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:05.184 03:26:39 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:06.119 03:26:40 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:06.119 03:26:40 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:06.119 03:26:40 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:06.119 03:26:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:06.119 03:26:40 -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:06.119 03:26:40 -- common/autotest_common.sh@10 -- # set +x 00:26:06.119 03:26:40 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:06.119 03:26:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:06.119 03:26:40 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:26:06.119 03:26:40 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:06.119 03:26:40 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:06.119 03:26:40 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:26:06.119 03:26:40 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:06.119 03:26:40 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:06.119 03:26:40 -- common/autotest_common.sh@549 
-- # xtrace_disable 00:26:06.119 03:26:40 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:06.119 03:26:40 -- common/autotest_common.sh@10 -- # set +x 00:26:06.119 03:26:40 -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:06.119 03:26:40 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:06.119 03:26:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:06.119 03:26:40 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:06.119 03:26:40 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:07.052 [2024-04-25 03:26:41.327811] bdev_nvme.c:6919:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:07.052 [2024-04-25 03:26:41.327839] bdev_nvme.c:6999:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:07.052 [2024-04-25 03:26:41.327862] bdev_nvme.c:6882:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:07.052 [2024-04-25 03:26:41.414143] bdev_nvme.c:6848:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:26:07.052 03:26:41 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:07.052 03:26:41 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:07.052 03:26:41 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:07.052 03:26:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:07.052 03:26:41 -- common/autotest_common.sh@10 -- # set +x 00:26:07.052 03:26:41 -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:07.052 03:26:41 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:07.052 03:26:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:07.052 03:26:41 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:07.052 03:26:41 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:07.311 [2024-04-25 03:26:41.598716] bdev_nvme.c:7709:bdev_nvme_readv: *DEBUG*: read 8 blocks 
with offset 0 00:26:07.311 [2024-04-25 03:26:41.598765] bdev_nvme.c:7709:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:07.311 [2024-04-25 03:26:41.598796] bdev_nvme.c:7709:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:07.311 [2024-04-25 03:26:41.598817] bdev_nvme.c:6738:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:26:07.311 [2024-04-25 03:26:41.598829] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:07.311 [2024-04-25 03:26:41.606116] bdev_nvme.c:1605:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x110b9f0 was disconnected and freed. delete nvme_qpair. 00:26:08.252 03:26:42 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:08.252 03:26:42 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:08.252 03:26:42 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:08.252 03:26:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:08.252 03:26:42 -- common/autotest_common.sh@10 -- # set +x 00:26:08.252 03:26:42 -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:08.252 03:26:42 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:08.252 03:26:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:08.252 03:26:42 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:26:08.252 03:26:42 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:26:08.252 03:26:42 -- host/discovery_remove_ifc.sh@90 -- # killprocess 1595852 00:26:08.252 03:26:42 -- common/autotest_common.sh@936 -- # '[' -z 1595852 ']' 00:26:08.252 03:26:42 -- common/autotest_common.sh@940 -- # kill -0 1595852 00:26:08.252 03:26:42 -- common/autotest_common.sh@941 -- # uname 00:26:08.252 03:26:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:08.252 03:26:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1595852 
00:26:08.252 03:26:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:08.252 03:26:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:08.252 03:26:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1595852' 00:26:08.252 killing process with pid 1595852 00:26:08.252 03:26:42 -- common/autotest_common.sh@955 -- # kill 1595852 00:26:08.252 03:26:42 -- common/autotest_common.sh@960 -- # wait 1595852 00:26:08.510 03:26:42 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:26:08.510 03:26:42 -- nvmf/common.sh@477 -- # nvmfcleanup 00:26:08.510 03:26:42 -- nvmf/common.sh@117 -- # sync 00:26:08.510 03:26:42 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:08.510 03:26:42 -- nvmf/common.sh@120 -- # set +e 00:26:08.510 03:26:42 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:08.510 03:26:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:08.510 rmmod nvme_tcp 00:26:08.510 rmmod nvme_fabrics 00:26:08.510 rmmod nvme_keyring 00:26:08.510 03:26:42 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:08.510 03:26:42 -- nvmf/common.sh@124 -- # set -e 00:26:08.510 03:26:42 -- nvmf/common.sh@125 -- # return 0 00:26:08.510 03:26:42 -- nvmf/common.sh@478 -- # '[' -n 1595702 ']' 00:26:08.510 03:26:42 -- nvmf/common.sh@479 -- # killprocess 1595702 00:26:08.510 03:26:42 -- common/autotest_common.sh@936 -- # '[' -z 1595702 ']' 00:26:08.510 03:26:42 -- common/autotest_common.sh@940 -- # kill -0 1595702 00:26:08.510 03:26:42 -- common/autotest_common.sh@941 -- # uname 00:26:08.510 03:26:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:08.510 03:26:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1595702 00:26:08.510 03:26:42 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:26:08.510 03:26:42 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:26:08.510 03:26:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1595702' 
00:26:08.510 killing process with pid 1595702 00:26:08.510 03:26:42 -- common/autotest_common.sh@955 -- # kill 1595702 00:26:08.510 03:26:42 -- common/autotest_common.sh@960 -- # wait 1595702 00:26:08.510 [2024-04-25 03:26:42.937287] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f0920 is same with the state(5) to be set 00:26:08.510 [2024-04-25 03:26:42.937344] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f0920 is same with the state(5) to be set 00:26:08.768 03:26:43 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:26:08.768 03:26:43 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:26:08.768 03:26:43 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:26:08.768 03:26:43 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:08.768 03:26:43 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:08.768 03:26:43 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:08.768 03:26:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:08.768 03:26:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:11.302 03:26:45 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:11.302 00:26:11.302 real 0m18.413s 00:26:11.302 user 0m25.490s 00:26:11.302 sys 0m3.005s 00:26:11.302 03:26:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:11.302 03:26:45 -- common/autotest_common.sh@10 -- # set +x 00:26:11.302 ************************************ 00:26:11.302 END TEST nvmf_discovery_remove_ifc 00:26:11.302 ************************************ 00:26:11.302 03:26:45 -- nvmf/nvmf.sh@101 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:11.302 03:26:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:11.302 03:26:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:11.302 03:26:45 -- common/autotest_common.sh@10 -- # 
set +x 00:26:11.302 ************************************ 00:26:11.302 START TEST nvmf_identify_kernel_target 00:26:11.302 ************************************ 00:26:11.302 03:26:45 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:11.302 * Looking for test storage... 00:26:11.302 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:11.302 03:26:45 -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:11.302 03:26:45 -- nvmf/common.sh@7 -- # uname -s 00:26:11.302 03:26:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:11.302 03:26:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:11.302 03:26:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:11.302 03:26:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:11.302 03:26:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:11.302 03:26:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:11.302 03:26:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:11.302 03:26:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:11.302 03:26:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:11.303 03:26:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:11.303 03:26:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:11.303 03:26:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:11.303 03:26:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:11.303 03:26:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:11.303 03:26:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:11.303 03:26:45 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:11.303 03:26:45 -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:11.303 03:26:45 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:11.303 03:26:45 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:11.303 03:26:45 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:11.303 03:26:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:11.303 03:26:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:11.303 03:26:45 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:11.303 03:26:45 -- paths/export.sh@5 -- # export PATH 00:26:11.303 03:26:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:11.303 03:26:45 -- nvmf/common.sh@47 -- # : 0 00:26:11.303 03:26:45 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:11.303 03:26:45 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:11.303 03:26:45 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:11.303 03:26:45 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:11.303 03:26:45 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:11.303 03:26:45 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:11.303 03:26:45 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:11.303 03:26:45 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:11.303 03:26:45 -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:26:11.303 03:26:45 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:26:11.303 03:26:45 -- nvmf/common.sh@435 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:26:11.303 03:26:45 -- nvmf/common.sh@437 -- # prepare_net_devs 00:26:11.303 03:26:45 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:26:11.303 03:26:45 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:26:11.303 03:26:45 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:11.303 03:26:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:11.303 03:26:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:11.303 03:26:45 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:26:11.303 03:26:45 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:26:11.303 03:26:45 -- nvmf/common.sh@285 -- # xtrace_disable 00:26:11.303 03:26:45 -- common/autotest_common.sh@10 -- # set +x 00:26:13.204 03:26:47 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:13.204 03:26:47 -- nvmf/common.sh@291 -- # pci_devs=() 00:26:13.204 03:26:47 -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:13.204 03:26:47 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:13.204 03:26:47 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:13.204 03:26:47 -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:13.204 03:26:47 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:13.204 03:26:47 -- nvmf/common.sh@295 -- # net_devs=() 00:26:13.204 03:26:47 -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:13.204 03:26:47 -- nvmf/common.sh@296 -- # e810=() 00:26:13.204 03:26:47 -- nvmf/common.sh@296 -- # local -ga e810 00:26:13.204 03:26:47 -- nvmf/common.sh@297 -- # x722=() 00:26:13.204 03:26:47 -- nvmf/common.sh@297 -- # local -ga x722 00:26:13.204 03:26:47 -- nvmf/common.sh@298 -- # mlx=() 00:26:13.204 03:26:47 -- nvmf/common.sh@298 -- # local -ga mlx 00:26:13.204 03:26:47 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:13.204 03:26:47 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:13.204 03:26:47 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:13.204 03:26:47 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:13.204 03:26:47 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:13.204 03:26:47 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:13.205 03:26:47 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:13.205 03:26:47 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:13.205 03:26:47 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:13.205 03:26:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:13.205 03:26:47 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:13.205 03:26:47 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:13.205 03:26:47 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:13.205 03:26:47 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:13.205 03:26:47 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:13.205 03:26:47 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:13.205 03:26:47 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:13.205 03:26:47 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:13.205 03:26:47 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:13.205 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:13.205 03:26:47 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:13.205 03:26:47 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:13.205 03:26:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:13.205 03:26:47 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:13.205 03:26:47 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:13.205 03:26:47 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:13.205 03:26:47 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:13.205 Found 0000:0a:00.1 (0x8086 - 
0x159b) 00:26:13.205 03:26:47 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:13.205 03:26:47 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:13.205 03:26:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:13.205 03:26:47 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:13.205 03:26:47 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:13.205 03:26:47 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:13.205 03:26:47 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:13.205 03:26:47 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:13.205 03:26:47 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:13.205 03:26:47 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:13.205 03:26:47 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:13.205 03:26:47 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:13.205 03:26:47 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:13.205 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:13.205 03:26:47 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:13.205 03:26:47 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:13.205 03:26:47 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:13.205 03:26:47 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:13.205 03:26:47 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:13.205 03:26:47 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:13.205 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:13.205 03:26:47 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:13.205 03:26:47 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:26:13.205 03:26:47 -- nvmf/common.sh@403 -- # is_hw=yes 00:26:13.205 03:26:47 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:26:13.205 03:26:47 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:26:13.205 03:26:47 -- 
nvmf/common.sh@407 -- # nvmf_tcp_init 00:26:13.205 03:26:47 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:13.205 03:26:47 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:13.205 03:26:47 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:13.205 03:26:47 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:13.205 03:26:47 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:13.205 03:26:47 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:13.205 03:26:47 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:13.205 03:26:47 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:13.205 03:26:47 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:13.205 03:26:47 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:13.205 03:26:47 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:13.205 03:26:47 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:13.205 03:26:47 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:13.205 03:26:47 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:13.205 03:26:47 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:13.205 03:26:47 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:13.205 03:26:47 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:13.205 03:26:47 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:13.205 03:26:47 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:13.205 03:26:47 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:13.205 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:13.205 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.140 ms 00:26:13.205 00:26:13.205 --- 10.0.0.2 ping statistics --- 00:26:13.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:13.205 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:26:13.205 03:26:47 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:13.205 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:13.205 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:26:13.205 00:26:13.205 --- 10.0.0.1 ping statistics --- 00:26:13.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:13.205 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:26:13.205 03:26:47 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:13.205 03:26:47 -- nvmf/common.sh@411 -- # return 0 00:26:13.205 03:26:47 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:26:13.205 03:26:47 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:13.205 03:26:47 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:26:13.205 03:26:47 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:26:13.205 03:26:47 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:13.205 03:26:47 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:26:13.205 03:26:47 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:26:13.205 03:26:47 -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:26:13.205 03:26:47 -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:26:13.205 03:26:47 -- nvmf/common.sh@717 -- # local ip 00:26:13.205 03:26:47 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:13.205 03:26:47 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:13.205 03:26:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:13.205 03:26:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:13.205 03:26:47 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:13.205 03:26:47 
-- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:13.205 03:26:47 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:13.205 03:26:47 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:13.205 03:26:47 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:13.205 03:26:47 -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:26:13.205 03:26:47 -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:26:13.205 03:26:47 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:26:13.205 03:26:47 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:26:13.205 03:26:47 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:13.205 03:26:47 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:13.205 03:26:47 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:13.205 03:26:47 -- nvmf/common.sh@628 -- # local block nvme 00:26:13.205 03:26:47 -- nvmf/common.sh@630 -- # [[ ! 
-e /sys/module/nvmet ]] 00:26:13.205 03:26:47 -- nvmf/common.sh@631 -- # modprobe nvmet 00:26:13.205 03:26:47 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:13.205 03:26:47 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:14.582 Waiting for block devices as requested 00:26:14.582 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:26:14.582 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:14.582 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:14.582 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:14.840 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:14.840 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:14.840 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:14.840 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:14.840 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:15.099 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:15.099 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:15.099 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:15.357 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:15.357 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:15.357 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:15.357 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:15.616 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:15.616 03:26:50 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:26:15.616 03:26:50 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:15.616 03:26:50 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:26:15.616 03:26:50 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:26:15.616 03:26:50 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:15.616 03:26:50 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:26:15.616 03:26:50 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:26:15.616 03:26:50 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:26:15.616 
03:26:50 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:15.616 No valid GPT data, bailing 00:26:15.616 03:26:50 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:15.616 03:26:50 -- scripts/common.sh@391 -- # pt= 00:26:15.616 03:26:50 -- scripts/common.sh@392 -- # return 1 00:26:15.616 03:26:50 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:26:15.616 03:26:50 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:26:15.616 03:26:50 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:15.616 03:26:50 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:15.616 03:26:50 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:15.616 03:26:50 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:26:15.616 03:26:50 -- nvmf/common.sh@656 -- # echo 1 00:26:15.616 03:26:50 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:26:15.616 03:26:50 -- nvmf/common.sh@658 -- # echo 1 00:26:15.616 03:26:50 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:26:15.616 03:26:50 -- nvmf/common.sh@661 -- # echo tcp 00:26:15.616 03:26:50 -- nvmf/common.sh@662 -- # echo 4420 00:26:15.616 03:26:50 -- nvmf/common.sh@663 -- # echo ipv4 00:26:15.616 03:26:50 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:15.616 03:26:50 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:26:15.876 00:26:15.876 Discovery Log Number of Records 2, Generation counter 2 00:26:15.876 =====Discovery Log Entry 0====== 00:26:15.876 trtype: tcp 00:26:15.876 adrfam: ipv4 00:26:15.876 subtype: current discovery subsystem 00:26:15.876 treq: not specified, sq flow 
control disable supported 00:26:15.876 portid: 1 00:26:15.876 trsvcid: 4420 00:26:15.876 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:15.876 traddr: 10.0.0.1 00:26:15.876 eflags: none 00:26:15.876 sectype: none 00:26:15.876 =====Discovery Log Entry 1====== 00:26:15.876 trtype: tcp 00:26:15.876 adrfam: ipv4 00:26:15.876 subtype: nvme subsystem 00:26:15.876 treq: not specified, sq flow control disable supported 00:26:15.876 portid: 1 00:26:15.876 trsvcid: 4420 00:26:15.876 subnqn: nqn.2016-06.io.spdk:testnqn 00:26:15.876 traddr: 10.0.0.1 00:26:15.876 eflags: none 00:26:15.876 sectype: none 00:26:15.876 03:26:50 -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:26:15.876 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:26:15.876 EAL: No free 2048 kB hugepages reported on node 1 00:26:15.876 ===================================================== 00:26:15.876 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:15.876 ===================================================== 00:26:15.876 Controller Capabilities/Features 00:26:15.876 ================================ 00:26:15.876 Vendor ID: 0000 00:26:15.876 Subsystem Vendor ID: 0000 00:26:15.876 Serial Number: 4160c15f674d66864ccd 00:26:15.876 Model Number: Linux 00:26:15.876 Firmware Version: 6.7.0-68 00:26:15.876 Recommended Arb Burst: 0 00:26:15.876 IEEE OUI Identifier: 00 00 00 00:26:15.876 Multi-path I/O 00:26:15.876 May have multiple subsystem ports: No 00:26:15.876 May have multiple controllers: No 00:26:15.876 Associated with SR-IOV VF: No 00:26:15.876 Max Data Transfer Size: Unlimited 00:26:15.876 Max Number of Namespaces: 0 00:26:15.876 Max Number of I/O Queues: 1024 00:26:15.876 NVMe Specification Version (VS): 1.3 00:26:15.876 NVMe Specification Version (Identify): 1.3 00:26:15.876 Maximum Queue Entries: 1024 00:26:15.876 Contiguous Queues 
Required: No 00:26:15.876 Arbitration Mechanisms Supported 00:26:15.876 Weighted Round Robin: Not Supported 00:26:15.876 Vendor Specific: Not Supported 00:26:15.876 Reset Timeout: 7500 ms 00:26:15.876 Doorbell Stride: 4 bytes 00:26:15.876 NVM Subsystem Reset: Not Supported 00:26:15.876 Command Sets Supported 00:26:15.876 NVM Command Set: Supported 00:26:15.876 Boot Partition: Not Supported 00:26:15.876 Memory Page Size Minimum: 4096 bytes 00:26:15.876 Memory Page Size Maximum: 4096 bytes 00:26:15.876 Persistent Memory Region: Not Supported 00:26:15.876 Optional Asynchronous Events Supported 00:26:15.876 Namespace Attribute Notices: Not Supported 00:26:15.876 Firmware Activation Notices: Not Supported 00:26:15.876 ANA Change Notices: Not Supported 00:26:15.876 PLE Aggregate Log Change Notices: Not Supported 00:26:15.876 LBA Status Info Alert Notices: Not Supported 00:26:15.876 EGE Aggregate Log Change Notices: Not Supported 00:26:15.876 Normal NVM Subsystem Shutdown event: Not Supported 00:26:15.876 Zone Descriptor Change Notices: Not Supported 00:26:15.876 Discovery Log Change Notices: Supported 00:26:15.876 Controller Attributes 00:26:15.876 128-bit Host Identifier: Not Supported 00:26:15.876 Non-Operational Permissive Mode: Not Supported 00:26:15.876 NVM Sets: Not Supported 00:26:15.876 Read Recovery Levels: Not Supported 00:26:15.876 Endurance Groups: Not Supported 00:26:15.876 Predictable Latency Mode: Not Supported 00:26:15.876 Traffic Based Keep ALive: Not Supported 00:26:15.876 Namespace Granularity: Not Supported 00:26:15.876 SQ Associations: Not Supported 00:26:15.876 UUID List: Not Supported 00:26:15.876 Multi-Domain Subsystem: Not Supported 00:26:15.876 Fixed Capacity Management: Not Supported 00:26:15.876 Variable Capacity Management: Not Supported 00:26:15.876 Delete Endurance Group: Not Supported 00:26:15.876 Delete NVM Set: Not Supported 00:26:15.876 Extended LBA Formats Supported: Not Supported 00:26:15.876 Flexible Data Placement Supported: Not 
Supported 00:26:15.876 00:26:15.876 Controller Memory Buffer Support 00:26:15.876 ================================ 00:26:15.876 Supported: No 00:26:15.876 00:26:15.876 Persistent Memory Region Support 00:26:15.876 ================================ 00:26:15.876 Supported: No 00:26:15.876 00:26:15.876 Admin Command Set Attributes 00:26:15.876 ============================ 00:26:15.876 Security Send/Receive: Not Supported 00:26:15.876 Format NVM: Not Supported 00:26:15.876 Firmware Activate/Download: Not Supported 00:26:15.876 Namespace Management: Not Supported 00:26:15.876 Device Self-Test: Not Supported 00:26:15.876 Directives: Not Supported 00:26:15.876 NVMe-MI: Not Supported 00:26:15.876 Virtualization Management: Not Supported 00:26:15.876 Doorbell Buffer Config: Not Supported 00:26:15.876 Get LBA Status Capability: Not Supported 00:26:15.876 Command & Feature Lockdown Capability: Not Supported 00:26:15.876 Abort Command Limit: 1 00:26:15.876 Async Event Request Limit: 1 00:26:15.876 Number of Firmware Slots: N/A 00:26:15.876 Firmware Slot 1 Read-Only: N/A 00:26:15.876 Firmware Activation Without Reset: N/A 00:26:15.876 Multiple Update Detection Support: N/A 00:26:15.876 Firmware Update Granularity: No Information Provided 00:26:15.876 Per-Namespace SMART Log: No 00:26:15.876 Asymmetric Namespace Access Log Page: Not Supported 00:26:15.876 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:26:15.876 Command Effects Log Page: Not Supported 00:26:15.876 Get Log Page Extended Data: Supported 00:26:15.876 Telemetry Log Pages: Not Supported 00:26:15.876 Persistent Event Log Pages: Not Supported 00:26:15.876 Supported Log Pages Log Page: May Support 00:26:15.876 Commands Supported & Effects Log Page: Not Supported 00:26:15.876 Feature Identifiers & Effects Log Page:May Support 00:26:15.876 NVMe-MI Commands & Effects Log Page: May Support 00:26:15.876 Data Area 4 for Telemetry Log: Not Supported 00:26:15.876 Error Log Page Entries Supported: 1 00:26:15.876 Keep 
Alive: Not Supported 00:26:15.876 00:26:15.876 NVM Command Set Attributes 00:26:15.876 ========================== 00:26:15.876 Submission Queue Entry Size 00:26:15.876 Max: 1 00:26:15.876 Min: 1 00:26:15.876 Completion Queue Entry Size 00:26:15.876 Max: 1 00:26:15.876 Min: 1 00:26:15.876 Number of Namespaces: 0 00:26:15.876 Compare Command: Not Supported 00:26:15.876 Write Uncorrectable Command: Not Supported 00:26:15.876 Dataset Management Command: Not Supported 00:26:15.876 Write Zeroes Command: Not Supported 00:26:15.876 Set Features Save Field: Not Supported 00:26:15.876 Reservations: Not Supported 00:26:15.876 Timestamp: Not Supported 00:26:15.876 Copy: Not Supported 00:26:15.876 Volatile Write Cache: Not Present 00:26:15.876 Atomic Write Unit (Normal): 1 00:26:15.876 Atomic Write Unit (PFail): 1 00:26:15.876 Atomic Compare & Write Unit: 1 00:26:15.876 Fused Compare & Write: Not Supported 00:26:15.876 Scatter-Gather List 00:26:15.876 SGL Command Set: Supported 00:26:15.876 SGL Keyed: Not Supported 00:26:15.876 SGL Bit Bucket Descriptor: Not Supported 00:26:15.876 SGL Metadata Pointer: Not Supported 00:26:15.876 Oversized SGL: Not Supported 00:26:15.876 SGL Metadata Address: Not Supported 00:26:15.876 SGL Offset: Supported 00:26:15.876 Transport SGL Data Block: Not Supported 00:26:15.876 Replay Protected Memory Block: Not Supported 00:26:15.876 00:26:15.876 Firmware Slot Information 00:26:15.876 ========================= 00:26:15.876 Active slot: 0 00:26:15.876 00:26:15.876 00:26:15.876 Error Log 00:26:15.876 ========= 00:26:15.876 00:26:15.876 Active Namespaces 00:26:15.876 ================= 00:26:15.876 Discovery Log Page 00:26:15.876 ================== 00:26:15.876 Generation Counter: 2 00:26:15.876 Number of Records: 2 00:26:15.876 Record Format: 0 00:26:15.876 00:26:15.876 Discovery Log Entry 0 00:26:15.876 ---------------------- 00:26:15.876 Transport Type: 3 (TCP) 00:26:15.877 Address Family: 1 (IPv4) 00:26:15.877 Subsystem Type: 3 (Current Discovery 
Subsystem) 00:26:15.877 Entry Flags: 00:26:15.877 Duplicate Returned Information: 0 00:26:15.877 Explicit Persistent Connection Support for Discovery: 0 00:26:15.877 Transport Requirements: 00:26:15.877 Secure Channel: Not Specified 00:26:15.877 Port ID: 1 (0x0001) 00:26:15.877 Controller ID: 65535 (0xffff) 00:26:15.877 Admin Max SQ Size: 32 00:26:15.877 Transport Service Identifier: 4420 00:26:15.877 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:15.877 Transport Address: 10.0.0.1 00:26:15.877 Discovery Log Entry 1 00:26:15.877 ---------------------- 00:26:15.877 Transport Type: 3 (TCP) 00:26:15.877 Address Family: 1 (IPv4) 00:26:15.877 Subsystem Type: 2 (NVM Subsystem) 00:26:15.877 Entry Flags: 00:26:15.877 Duplicate Returned Information: 0 00:26:15.877 Explicit Persistent Connection Support for Discovery: 0 00:26:15.877 Transport Requirements: 00:26:15.877 Secure Channel: Not Specified 00:26:15.877 Port ID: 1 (0x0001) 00:26:15.877 Controller ID: 65535 (0xffff) 00:26:15.877 Admin Max SQ Size: 32 00:26:15.877 Transport Service Identifier: 4420 00:26:15.877 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:26:15.877 Transport Address: 10.0.0.1 00:26:15.877 03:26:50 -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:15.877 EAL: No free 2048 kB hugepages reported on node 1 00:26:15.877 get_feature(0x01) failed 00:26:15.877 get_feature(0x02) failed 00:26:15.877 get_feature(0x04) failed 00:26:15.877 ===================================================== 00:26:15.877 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:15.877 ===================================================== 00:26:15.877 Controller Capabilities/Features 00:26:15.877 ================================ 00:26:15.877 Vendor ID: 0000 00:26:15.877 Subsystem Vendor ID: 0000 
00:26:15.877 Serial Number: 9e530847b503098690df 00:26:15.877 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:26:15.877 Firmware Version: 6.7.0-68 00:26:15.877 Recommended Arb Burst: 6 00:26:15.877 IEEE OUI Identifier: 00 00 00 00:26:15.877 Multi-path I/O 00:26:15.877 May have multiple subsystem ports: Yes 00:26:15.877 May have multiple controllers: Yes 00:26:15.877 Associated with SR-IOV VF: No 00:26:15.877 Max Data Transfer Size: Unlimited 00:26:15.877 Max Number of Namespaces: 1024 00:26:15.877 Max Number of I/O Queues: 128 00:26:15.877 NVMe Specification Version (VS): 1.3 00:26:15.877 NVMe Specification Version (Identify): 1.3 00:26:15.877 Maximum Queue Entries: 1024 00:26:15.877 Contiguous Queues Required: No 00:26:15.877 Arbitration Mechanisms Supported 00:26:15.877 Weighted Round Robin: Not Supported 00:26:15.877 Vendor Specific: Not Supported 00:26:15.877 Reset Timeout: 7500 ms 00:26:15.877 Doorbell Stride: 4 bytes 00:26:15.877 NVM Subsystem Reset: Not Supported 00:26:15.877 Command Sets Supported 00:26:15.877 NVM Command Set: Supported 00:26:15.877 Boot Partition: Not Supported 00:26:15.877 Memory Page Size Minimum: 4096 bytes 00:26:15.877 Memory Page Size Maximum: 4096 bytes 00:26:15.877 Persistent Memory Region: Not Supported 00:26:15.877 Optional Asynchronous Events Supported 00:26:15.877 Namespace Attribute Notices: Supported 00:26:15.877 Firmware Activation Notices: Not Supported 00:26:15.877 ANA Change Notices: Supported 00:26:15.877 PLE Aggregate Log Change Notices: Not Supported 00:26:15.877 LBA Status Info Alert Notices: Not Supported 00:26:15.877 EGE Aggregate Log Change Notices: Not Supported 00:26:15.877 Normal NVM Subsystem Shutdown event: Not Supported 00:26:15.877 Zone Descriptor Change Notices: Not Supported 00:26:15.877 Discovery Log Change Notices: Not Supported 00:26:15.877 Controller Attributes 00:26:15.877 128-bit Host Identifier: Supported 00:26:15.877 Non-Operational Permissive Mode: Not Supported 00:26:15.877 NVM Sets: Not 
Supported 00:26:15.877 Read Recovery Levels: Not Supported 00:26:15.877 Endurance Groups: Not Supported 00:26:15.877 Predictable Latency Mode: Not Supported 00:26:15.877 Traffic Based Keep ALive: Supported 00:26:15.877 Namespace Granularity: Not Supported 00:26:15.877 SQ Associations: Not Supported 00:26:15.877 UUID List: Not Supported 00:26:15.877 Multi-Domain Subsystem: Not Supported 00:26:15.877 Fixed Capacity Management: Not Supported 00:26:15.877 Variable Capacity Management: Not Supported 00:26:15.877 Delete Endurance Group: Not Supported 00:26:15.877 Delete NVM Set: Not Supported 00:26:15.877 Extended LBA Formats Supported: Not Supported 00:26:15.877 Flexible Data Placement Supported: Not Supported 00:26:15.877 00:26:15.877 Controller Memory Buffer Support 00:26:15.877 ================================ 00:26:15.877 Supported: No 00:26:15.877 00:26:15.877 Persistent Memory Region Support 00:26:15.877 ================================ 00:26:15.877 Supported: No 00:26:15.877 00:26:15.877 Admin Command Set Attributes 00:26:15.877 ============================ 00:26:15.877 Security Send/Receive: Not Supported 00:26:15.877 Format NVM: Not Supported 00:26:15.877 Firmware Activate/Download: Not Supported 00:26:15.877 Namespace Management: Not Supported 00:26:15.877 Device Self-Test: Not Supported 00:26:15.877 Directives: Not Supported 00:26:15.877 NVMe-MI: Not Supported 00:26:15.877 Virtualization Management: Not Supported 00:26:15.877 Doorbell Buffer Config: Not Supported 00:26:15.877 Get LBA Status Capability: Not Supported 00:26:15.877 Command & Feature Lockdown Capability: Not Supported 00:26:15.877 Abort Command Limit: 4 00:26:15.877 Async Event Request Limit: 4 00:26:15.877 Number of Firmware Slots: N/A 00:26:15.877 Firmware Slot 1 Read-Only: N/A 00:26:15.877 Firmware Activation Without Reset: N/A 00:26:15.877 Multiple Update Detection Support: N/A 00:26:15.877 Firmware Update Granularity: No Information Provided 00:26:15.877 Per-Namespace SMART Log: Yes 
00:26:15.877 Asymmetric Namespace Access Log Page: Supported 00:26:15.877 ANA Transition Time : 10 sec 00:26:15.877 00:26:15.877 Asymmetric Namespace Access Capabilities 00:26:15.877 ANA Optimized State : Supported 00:26:15.877 ANA Non-Optimized State : Supported 00:26:15.877 ANA Inaccessible State : Supported 00:26:15.877 ANA Persistent Loss State : Supported 00:26:15.877 ANA Change State : Supported 00:26:15.877 ANAGRPID is not changed : No 00:26:15.877 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:26:15.877 00:26:15.877 ANA Group Identifier Maximum : 128 00:26:15.877 Number of ANA Group Identifiers : 128 00:26:15.877 Max Number of Allowed Namespaces : 1024 00:26:15.877 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:26:15.877 Command Effects Log Page: Supported 00:26:15.877 Get Log Page Extended Data: Supported 00:26:15.877 Telemetry Log Pages: Not Supported 00:26:15.877 Persistent Event Log Pages: Not Supported 00:26:15.877 Supported Log Pages Log Page: May Support 00:26:15.877 Commands Supported & Effects Log Page: Not Supported 00:26:15.877 Feature Identifiers & Effects Log Page:May Support 00:26:15.877 NVMe-MI Commands & Effects Log Page: May Support 00:26:15.877 Data Area 4 for Telemetry Log: Not Supported 00:26:15.877 Error Log Page Entries Supported: 128 00:26:15.877 Keep Alive: Supported 00:26:15.877 Keep Alive Granularity: 1000 ms 00:26:15.877 00:26:15.877 NVM Command Set Attributes 00:26:15.877 ========================== 00:26:15.877 Submission Queue Entry Size 00:26:15.877 Max: 64 00:26:15.877 Min: 64 00:26:15.877 Completion Queue Entry Size 00:26:15.877 Max: 16 00:26:15.877 Min: 16 00:26:15.877 Number of Namespaces: 1024 00:26:15.877 Compare Command: Not Supported 00:26:15.877 Write Uncorrectable Command: Not Supported 00:26:15.877 Dataset Management Command: Supported 00:26:15.877 Write Zeroes Command: Supported 00:26:15.877 Set Features Save Field: Not Supported 00:26:15.877 Reservations: Not Supported 00:26:15.877 Timestamp: Not Supported 
00:26:15.877 Copy: Not Supported 00:26:15.877 Volatile Write Cache: Present 00:26:15.877 Atomic Write Unit (Normal): 1 00:26:15.877 Atomic Write Unit (PFail): 1 00:26:15.877 Atomic Compare & Write Unit: 1 00:26:15.877 Fused Compare & Write: Not Supported 00:26:15.877 Scatter-Gather List 00:26:15.877 SGL Command Set: Supported 00:26:15.877 SGL Keyed: Not Supported 00:26:15.877 SGL Bit Bucket Descriptor: Not Supported 00:26:15.877 SGL Metadata Pointer: Not Supported 00:26:15.877 Oversized SGL: Not Supported 00:26:15.877 SGL Metadata Address: Not Supported 00:26:15.877 SGL Offset: Supported 00:26:15.877 Transport SGL Data Block: Not Supported 00:26:15.877 Replay Protected Memory Block: Not Supported 00:26:15.877 00:26:15.877 Firmware Slot Information 00:26:15.877 ========================= 00:26:15.877 Active slot: 0 00:26:15.877 00:26:15.877 Asymmetric Namespace Access 00:26:15.877 =========================== 00:26:15.877 Change Count : 0 00:26:15.877 Number of ANA Group Descriptors : 1 00:26:15.877 ANA Group Descriptor : 0 00:26:15.878 ANA Group ID : 1 00:26:15.878 Number of NSID Values : 1 00:26:15.878 Change Count : 0 00:26:15.878 ANA State : 1 00:26:15.878 Namespace Identifier : 1 00:26:15.878 00:26:15.878 Commands Supported and Effects 00:26:15.878 ============================== 00:26:15.878 Admin Commands 00:26:15.878 -------------- 00:26:15.878 Get Log Page (02h): Supported 00:26:15.878 Identify (06h): Supported 00:26:15.878 Abort (08h): Supported 00:26:15.878 Set Features (09h): Supported 00:26:15.878 Get Features (0Ah): Supported 00:26:15.878 Asynchronous Event Request (0Ch): Supported 00:26:15.878 Keep Alive (18h): Supported 00:26:15.878 I/O Commands 00:26:15.878 ------------ 00:26:15.878 Flush (00h): Supported 00:26:15.878 Write (01h): Supported LBA-Change 00:26:15.878 Read (02h): Supported 00:26:15.878 Write Zeroes (08h): Supported LBA-Change 00:26:15.878 Dataset Management (09h): Supported 00:26:15.878 00:26:15.878 Error Log 00:26:15.878 ========= 
00:26:15.878 Entry: 0 00:26:15.878 Error Count: 0x3 00:26:15.878 Submission Queue Id: 0x0 00:26:15.878 Command Id: 0x5 00:26:15.878 Phase Bit: 0 00:26:15.878 Status Code: 0x2 00:26:15.878 Status Code Type: 0x0 00:26:15.878 Do Not Retry: 1 00:26:15.878 Error Location: 0x28 00:26:15.878 LBA: 0x0 00:26:15.878 Namespace: 0x0 00:26:15.878 Vendor Log Page: 0x0 00:26:15.878 ----------- 00:26:15.878 Entry: 1 00:26:15.878 Error Count: 0x2 00:26:15.878 Submission Queue Id: 0x0 00:26:15.878 Command Id: 0x5 00:26:15.878 Phase Bit: 0 00:26:15.878 Status Code: 0x2 00:26:15.878 Status Code Type: 0x0 00:26:15.878 Do Not Retry: 1 00:26:15.878 Error Location: 0x28 00:26:15.878 LBA: 0x0 00:26:15.878 Namespace: 0x0 00:26:15.878 Vendor Log Page: 0x0 00:26:15.878 ----------- 00:26:15.878 Entry: 2 00:26:15.878 Error Count: 0x1 00:26:15.878 Submission Queue Id: 0x0 00:26:15.878 Command Id: 0x4 00:26:15.878 Phase Bit: 0 00:26:15.878 Status Code: 0x2 00:26:15.878 Status Code Type: 0x0 00:26:15.878 Do Not Retry: 1 00:26:15.878 Error Location: 0x28 00:26:15.878 LBA: 0x0 00:26:15.878 Namespace: 0x0 00:26:15.878 Vendor Log Page: 0x0 00:26:15.878 00:26:15.878 Number of Queues 00:26:15.878 ================ 00:26:15.878 Number of I/O Submission Queues: 128 00:26:15.878 Number of I/O Completion Queues: 128 00:26:15.878 00:26:15.878 ZNS Specific Controller Data 00:26:15.878 ============================ 00:26:15.878 Zone Append Size Limit: 0 00:26:15.878 00:26:15.878 00:26:15.878 Active Namespaces 00:26:15.878 ================= 00:26:15.878 get_feature(0x05) failed 00:26:15.878 Namespace ID:1 00:26:15.878 Command Set Identifier: NVM (00h) 00:26:15.878 Deallocate: Supported 00:26:15.878 Deallocated/Unwritten Error: Not Supported 00:26:15.878 Deallocated Read Value: Unknown 00:26:15.878 Deallocate in Write Zeroes: Not Supported 00:26:15.878 Deallocated Guard Field: 0xFFFF 00:26:15.878 Flush: Supported 00:26:15.878 Reservation: Not Supported 00:26:15.878 Namespace Sharing Capabilities: Multiple 
Controllers 00:26:15.878 Size (in LBAs): 1953525168 (931GiB) 00:26:15.878 Capacity (in LBAs): 1953525168 (931GiB) 00:26:15.878 Utilization (in LBAs): 1953525168 (931GiB) 00:26:15.878 UUID: aee84449-9a09-4ac8-af06-3a6b3d52b920 00:26:15.878 Thin Provisioning: Not Supported 00:26:15.878 Per-NS Atomic Units: Yes 00:26:15.878 Atomic Boundary Size (Normal): 0 00:26:15.878 Atomic Boundary Size (PFail): 0 00:26:15.878 Atomic Boundary Offset: 0 00:26:15.878 NGUID/EUI64 Never Reused: No 00:26:15.878 ANA group ID: 1 00:26:15.878 Namespace Write Protected: No 00:26:15.878 Number of LBA Formats: 1 00:26:15.878 Current LBA Format: LBA Format #00 00:26:15.878 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:15.878 00:26:16.136 03:26:50 -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:26:16.136 03:26:50 -- nvmf/common.sh@477 -- # nvmfcleanup 00:26:16.136 03:26:50 -- nvmf/common.sh@117 -- # sync 00:26:16.136 03:26:50 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:16.136 03:26:50 -- nvmf/common.sh@120 -- # set +e 00:26:16.136 03:26:50 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:16.136 03:26:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:16.136 rmmod nvme_tcp 00:26:16.136 rmmod nvme_fabrics 00:26:16.136 03:26:50 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:16.136 03:26:50 -- nvmf/common.sh@124 -- # set -e 00:26:16.136 03:26:50 -- nvmf/common.sh@125 -- # return 0 00:26:16.136 03:26:50 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:26:16.136 03:26:50 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:26:16.136 03:26:50 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:26:16.136 03:26:50 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:26:16.136 03:26:50 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:16.136 03:26:50 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:16.136 03:26:50 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:16.136 03:26:50 -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:26:16.136 03:26:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:18.037 03:26:52 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:18.037 03:26:52 -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:26:18.037 03:26:52 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:26:18.037 03:26:52 -- nvmf/common.sh@675 -- # echo 0 00:26:18.037 03:26:52 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:18.037 03:26:52 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:18.037 03:26:52 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:18.037 03:26:52 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:18.037 03:26:52 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:26:18.037 03:26:52 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:26:18.037 03:26:52 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:19.411 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:26:19.411 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:26:19.411 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:26:19.411 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:26:19.411 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:26:19.411 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:26:19.411 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:26:19.411 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:26:19.411 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:26:19.411 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:26:19.411 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:26:19.411 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:26:19.411 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:26:19.411 0000:80:04.2 (8086 0e22): ioatdma 
-> vfio-pci 00:26:19.411 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:26:19.411 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:26:20.344 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:26:20.344 00:26:20.344 real 0m9.413s 00:26:20.344 user 0m1.965s 00:26:20.344 sys 0m3.383s 00:26:20.344 03:26:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:20.344 03:26:54 -- common/autotest_common.sh@10 -- # set +x 00:26:20.344 ************************************ 00:26:20.344 END TEST nvmf_identify_kernel_target 00:26:20.344 ************************************ 00:26:20.344 03:26:54 -- nvmf/nvmf.sh@102 -- # run_test nvmf_auth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:20.344 03:26:54 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:20.344 03:26:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:20.344 03:26:54 -- common/autotest_common.sh@10 -- # set +x 00:26:20.602 ************************************ 00:26:20.602 START TEST nvmf_auth 00:26:20.602 ************************************ 00:26:20.602 03:26:54 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:20.602 * Looking for test storage... 
00:26:20.602 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:20.602 03:26:54 -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:20.602 03:26:54 -- nvmf/common.sh@7 -- # uname -s 00:26:20.602 03:26:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:20.602 03:26:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:20.602 03:26:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:20.602 03:26:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:20.602 03:26:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:20.602 03:26:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:20.602 03:26:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:20.602 03:26:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:20.602 03:26:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:20.602 03:26:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:20.602 03:26:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:20.602 03:26:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:20.602 03:26:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:20.602 03:26:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:20.602 03:26:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:20.602 03:26:54 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:20.602 03:26:54 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:20.602 03:26:54 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:20.602 03:26:54 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:20.602 03:26:54 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:20.602 03:26:54 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.603 03:26:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.603 03:26:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.603 03:26:54 -- paths/export.sh@5 -- # export PATH 00:26:20.603 03:26:54 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.603 03:26:54 -- nvmf/common.sh@47 -- # : 0 00:26:20.603 03:26:54 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:20.603 03:26:54 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:20.603 03:26:54 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:20.603 03:26:54 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:20.603 03:26:54 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:20.603 03:26:54 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:20.603 03:26:54 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:20.603 03:26:54 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:20.603 03:26:54 -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:26:20.603 03:26:54 -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:26:20.603 03:26:54 -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:26:20.603 03:26:54 -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:26:20.603 03:26:54 -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:20.603 03:26:54 -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:20.603 03:26:54 -- host/auth.sh@21 -- # keys=() 00:26:20.603 03:26:54 -- host/auth.sh@77 -- # nvmftestinit 00:26:20.603 03:26:54 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:26:20.603 03:26:54 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:26:20.603 03:26:54 -- nvmf/common.sh@437 -- # prepare_net_devs 00:26:20.603 03:26:54 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:26:20.603 03:26:54 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:26:20.603 03:26:54 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:20.603 03:26:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:20.603 03:26:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:20.603 03:26:54 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:26:20.603 03:26:54 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:26:20.603 03:26:54 -- nvmf/common.sh@285 -- # xtrace_disable 00:26:20.603 03:26:54 -- common/autotest_common.sh@10 -- # set +x 00:26:22.503 03:26:56 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:22.503 03:26:56 -- nvmf/common.sh@291 -- # pci_devs=() 00:26:22.503 03:26:56 -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:22.503 03:26:56 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:22.503 03:26:56 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:22.503 03:26:56 -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:22.503 03:26:56 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:22.503 03:26:56 -- nvmf/common.sh@295 -- # net_devs=() 00:26:22.503 03:26:56 -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:22.503 03:26:56 -- nvmf/common.sh@296 -- # e810=() 00:26:22.503 03:26:56 -- nvmf/common.sh@296 -- # local -ga e810 00:26:22.503 03:26:56 -- nvmf/common.sh@297 -- # x722=() 00:26:22.503 03:26:56 -- nvmf/common.sh@297 -- # local -ga x722 00:26:22.503 03:26:56 -- nvmf/common.sh@298 -- # mlx=() 00:26:22.503 03:26:56 -- nvmf/common.sh@298 -- # local -ga mlx 00:26:22.503 03:26:56 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:22.503 03:26:56 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:22.503 03:26:56 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:26:22.503 03:26:56 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:22.503 03:26:56 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:22.503 03:26:56 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:22.503 03:26:56 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:22.503 03:26:56 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:22.503 03:26:56 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:22.503 03:26:56 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:22.503 03:26:56 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:22.503 03:26:56 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:22.503 03:26:56 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:22.503 03:26:56 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:22.503 03:26:56 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:22.503 03:26:56 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:22.503 03:26:56 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:22.503 03:26:56 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:22.503 03:26:56 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:22.503 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:22.503 03:26:56 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:22.503 03:26:56 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:22.504 03:26:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:22.504 03:26:56 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:22.504 03:26:56 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:22.504 03:26:56 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:22.504 03:26:56 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:22.504 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:22.504 03:26:56 -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:22.504 03:26:56 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:22.504 03:26:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:22.504 03:26:56 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:22.504 03:26:56 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:22.504 03:26:56 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:22.504 03:26:56 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:22.504 03:26:56 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:22.504 03:26:56 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:22.504 03:26:56 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:22.504 03:26:56 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:22.504 03:26:56 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:22.504 03:26:56 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:22.504 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:22.504 03:26:56 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:22.504 03:26:56 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:22.504 03:26:56 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:22.504 03:26:56 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:22.504 03:26:56 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:22.504 03:26:56 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:22.504 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:22.504 03:26:56 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:22.504 03:26:56 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:26:22.504 03:26:56 -- nvmf/common.sh@403 -- # is_hw=yes 00:26:22.504 03:26:56 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:26:22.504 03:26:56 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:26:22.504 03:26:56 -- nvmf/common.sh@407 -- # nvmf_tcp_init 
00:26:22.504 03:26:56 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:22.504 03:26:56 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:22.504 03:26:56 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:22.504 03:26:56 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:22.504 03:26:56 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:22.504 03:26:56 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:22.504 03:26:56 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:22.504 03:26:56 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:22.504 03:26:56 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:22.504 03:26:56 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:22.504 03:26:56 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:22.504 03:26:56 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:22.504 03:26:56 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:22.504 03:26:56 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:22.504 03:26:56 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:22.504 03:26:56 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:22.504 03:26:57 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:22.766 03:26:57 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:22.766 03:26:57 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:22.766 03:26:57 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:22.766 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:22.766 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.119 ms 00:26:22.766 00:26:22.766 --- 10.0.0.2 ping statistics --- 00:26:22.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:22.766 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:26:22.766 03:26:57 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:22.766 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:22.766 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:26:22.766 00:26:22.766 --- 10.0.0.1 ping statistics --- 00:26:22.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:22.766 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:26:22.766 03:26:57 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:22.766 03:26:57 -- nvmf/common.sh@411 -- # return 0 00:26:22.766 03:26:57 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:26:22.766 03:26:57 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:22.766 03:26:57 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:26:22.766 03:26:57 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:26:22.766 03:26:57 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:22.766 03:26:57 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:26:22.766 03:26:57 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:26:22.766 03:26:57 -- host/auth.sh@78 -- # nvmfappstart -L nvme_auth 00:26:22.766 03:26:57 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:26:22.766 03:26:57 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:22.766 03:26:57 -- common/autotest_common.sh@10 -- # set +x 00:26:22.766 03:26:57 -- nvmf/common.sh@470 -- # nvmfpid=1603055 00:26:22.766 03:26:57 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:26:22.766 03:26:57 -- nvmf/common.sh@471 -- # waitforlisten 1603055 00:26:22.766 03:26:57 -- 
common/autotest_common.sh@817 -- # '[' -z 1603055 ']' 00:26:22.766 03:26:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:22.766 03:26:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:22.766 03:26:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:22.766 03:26:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:22.766 03:26:57 -- common/autotest_common.sh@10 -- # set +x 00:26:23.024 03:26:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:23.024 03:26:57 -- common/autotest_common.sh@850 -- # return 0 00:26:23.024 03:26:57 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:26:23.024 03:26:57 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:23.024 03:26:57 -- common/autotest_common.sh@10 -- # set +x 00:26:23.024 03:26:57 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:23.024 03:26:57 -- host/auth.sh@79 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:26:23.024 03:26:57 -- host/auth.sh@81 -- # gen_key null 32 00:26:23.024 03:26:57 -- host/auth.sh@53 -- # local digest len file key 00:26:23.024 03:26:57 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:23.024 03:26:57 -- host/auth.sh@54 -- # local -A digests 00:26:23.024 03:26:57 -- host/auth.sh@56 -- # digest=null 00:26:23.024 03:26:57 -- host/auth.sh@56 -- # len=32 00:26:23.024 03:26:57 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:23.024 03:26:57 -- host/auth.sh@57 -- # key=c3c5a587a6a2c107d71773039d59923a 00:26:23.024 03:26:57 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:26:23.024 03:26:57 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.6Zf 00:26:23.024 03:26:57 -- host/auth.sh@59 -- # format_dhchap_key c3c5a587a6a2c107d71773039d59923a 
0 00:26:23.024 03:26:57 -- nvmf/common.sh@708 -- # format_key DHHC-1 c3c5a587a6a2c107d71773039d59923a 0 00:26:23.024 03:26:57 -- nvmf/common.sh@691 -- # local prefix key digest 00:26:23.025 03:26:57 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:26:23.025 03:26:57 -- nvmf/common.sh@693 -- # key=c3c5a587a6a2c107d71773039d59923a 00:26:23.025 03:26:57 -- nvmf/common.sh@693 -- # digest=0 00:26:23.025 03:26:57 -- nvmf/common.sh@694 -- # python - 00:26:23.025 03:26:57 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.6Zf 00:26:23.025 03:26:57 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.6Zf 00:26:23.025 03:26:57 -- host/auth.sh@81 -- # keys[0]=/tmp/spdk.key-null.6Zf 00:26:23.025 03:26:57 -- host/auth.sh@82 -- # gen_key null 48 00:26:23.025 03:26:57 -- host/auth.sh@53 -- # local digest len file key 00:26:23.025 03:26:57 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:23.025 03:26:57 -- host/auth.sh@54 -- # local -A digests 00:26:23.025 03:26:57 -- host/auth.sh@56 -- # digest=null 00:26:23.025 03:26:57 -- host/auth.sh@56 -- # len=48 00:26:23.025 03:26:57 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:23.025 03:26:57 -- host/auth.sh@57 -- # key=8c10c411d1f54b957443b89b18c9e57c335b62a14c6dc1ac 00:26:23.025 03:26:57 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:26:23.025 03:26:57 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.dT4 00:26:23.025 03:26:57 -- host/auth.sh@59 -- # format_dhchap_key 8c10c411d1f54b957443b89b18c9e57c335b62a14c6dc1ac 0 00:26:23.025 03:26:57 -- nvmf/common.sh@708 -- # format_key DHHC-1 8c10c411d1f54b957443b89b18c9e57c335b62a14c6dc1ac 0 00:26:23.025 03:26:57 -- nvmf/common.sh@691 -- # local prefix key digest 00:26:23.025 03:26:57 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:26:23.025 03:26:57 -- nvmf/common.sh@693 -- # key=8c10c411d1f54b957443b89b18c9e57c335b62a14c6dc1ac 00:26:23.025 03:26:57 -- nvmf/common.sh@693 -- # digest=0 00:26:23.025 03:26:57 -- nvmf/common.sh@694 -- # 
python - 00:26:23.283 03:26:57 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.dT4 00:26:23.283 03:26:57 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.dT4 00:26:23.283 03:26:57 -- host/auth.sh@82 -- # keys[1]=/tmp/spdk.key-null.dT4 00:26:23.283 03:26:57 -- host/auth.sh@83 -- # gen_key sha256 32 00:26:23.283 03:26:57 -- host/auth.sh@53 -- # local digest len file key 00:26:23.283 03:26:57 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:23.283 03:26:57 -- host/auth.sh@54 -- # local -A digests 00:26:23.283 03:26:57 -- host/auth.sh@56 -- # digest=sha256 00:26:23.283 03:26:57 -- host/auth.sh@56 -- # len=32 00:26:23.283 03:26:57 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:23.283 03:26:57 -- host/auth.sh@57 -- # key=675091986cf179b7fe5b72e561224893 00:26:23.283 03:26:57 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha256.XXX 00:26:23.283 03:26:57 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha256.Yej 00:26:23.283 03:26:57 -- host/auth.sh@59 -- # format_dhchap_key 675091986cf179b7fe5b72e561224893 1 00:26:23.283 03:26:57 -- nvmf/common.sh@708 -- # format_key DHHC-1 675091986cf179b7fe5b72e561224893 1 00:26:23.284 03:26:57 -- nvmf/common.sh@691 -- # local prefix key digest 00:26:23.284 03:26:57 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:26:23.284 03:26:57 -- nvmf/common.sh@693 -- # key=675091986cf179b7fe5b72e561224893 00:26:23.284 03:26:57 -- nvmf/common.sh@693 -- # digest=1 00:26:23.284 03:26:57 -- nvmf/common.sh@694 -- # python - 00:26:23.284 03:26:57 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha256.Yej 00:26:23.284 03:26:57 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha256.Yej 00:26:23.284 03:26:57 -- host/auth.sh@83 -- # keys[2]=/tmp/spdk.key-sha256.Yej 00:26:23.284 03:26:57 -- host/auth.sh@84 -- # gen_key sha384 48 00:26:23.284 03:26:57 -- host/auth.sh@53 -- # local digest len file key 00:26:23.284 03:26:57 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' 
['sha512']='3') 00:26:23.284 03:26:57 -- host/auth.sh@54 -- # local -A digests 00:26:23.284 03:26:57 -- host/auth.sh@56 -- # digest=sha384 00:26:23.284 03:26:57 -- host/auth.sh@56 -- # len=48 00:26:23.284 03:26:57 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:23.284 03:26:57 -- host/auth.sh@57 -- # key=847d53b1bba06a28a1b0d322e15e7b4008aa81094203ecdb 00:26:23.284 03:26:57 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha384.XXX 00:26:23.284 03:26:57 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha384.zpA 00:26:23.284 03:26:57 -- host/auth.sh@59 -- # format_dhchap_key 847d53b1bba06a28a1b0d322e15e7b4008aa81094203ecdb 2 00:26:23.284 03:26:57 -- nvmf/common.sh@708 -- # format_key DHHC-1 847d53b1bba06a28a1b0d322e15e7b4008aa81094203ecdb 2 00:26:23.284 03:26:57 -- nvmf/common.sh@691 -- # local prefix key digest 00:26:23.284 03:26:57 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:26:23.284 03:26:57 -- nvmf/common.sh@693 -- # key=847d53b1bba06a28a1b0d322e15e7b4008aa81094203ecdb 00:26:23.284 03:26:57 -- nvmf/common.sh@693 -- # digest=2 00:26:23.284 03:26:57 -- nvmf/common.sh@694 -- # python - 00:26:23.284 03:26:57 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha384.zpA 00:26:23.284 03:26:57 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha384.zpA 00:26:23.284 03:26:57 -- host/auth.sh@84 -- # keys[3]=/tmp/spdk.key-sha384.zpA 00:26:23.284 03:26:57 -- host/auth.sh@85 -- # gen_key sha512 64 00:26:23.284 03:26:57 -- host/auth.sh@53 -- # local digest len file key 00:26:23.284 03:26:57 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:23.284 03:26:57 -- host/auth.sh@54 -- # local -A digests 00:26:23.284 03:26:57 -- host/auth.sh@56 -- # digest=sha512 00:26:23.284 03:26:57 -- host/auth.sh@56 -- # len=64 00:26:23.284 03:26:57 -- host/auth.sh@57 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:23.284 03:26:57 -- host/auth.sh@57 -- # key=8243da038cd1c4ec2862835cd099be8aba2497729afbf67abc6e830f62d6d2d2 00:26:23.284 03:26:57 -- 
host/auth.sh@58 -- # mktemp -t spdk.key-sha512.XXX 00:26:23.284 03:26:57 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha512.ajr 00:26:23.284 03:26:57 -- host/auth.sh@59 -- # format_dhchap_key 8243da038cd1c4ec2862835cd099be8aba2497729afbf67abc6e830f62d6d2d2 3 00:26:23.284 03:26:57 -- nvmf/common.sh@708 -- # format_key DHHC-1 8243da038cd1c4ec2862835cd099be8aba2497729afbf67abc6e830f62d6d2d2 3 00:26:23.284 03:26:57 -- nvmf/common.sh@691 -- # local prefix key digest 00:26:23.284 03:26:57 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:26:23.284 03:26:57 -- nvmf/common.sh@693 -- # key=8243da038cd1c4ec2862835cd099be8aba2497729afbf67abc6e830f62d6d2d2 00:26:23.284 03:26:57 -- nvmf/common.sh@693 -- # digest=3 00:26:23.284 03:26:57 -- nvmf/common.sh@694 -- # python - 00:26:23.284 03:26:57 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha512.ajr 00:26:23.284 03:26:57 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha512.ajr 00:26:23.284 03:26:57 -- host/auth.sh@85 -- # keys[4]=/tmp/spdk.key-sha512.ajr 00:26:23.284 03:26:57 -- host/auth.sh@87 -- # waitforlisten 1603055 00:26:23.284 03:26:57 -- common/autotest_common.sh@817 -- # '[' -z 1603055 ']' 00:26:23.284 03:26:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:23.284 03:26:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:23.284 03:26:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:23.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:23.284 03:26:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:23.284 03:26:57 -- common/autotest_common.sh@10 -- # set +x 00:26:23.543 03:26:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:23.543 03:26:57 -- common/autotest_common.sh@850 -- # return 0 00:26:23.543 03:26:57 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:26:23.543 03:26:57 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.6Zf 00:26:23.543 03:26:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:23.543 03:26:57 -- common/autotest_common.sh@10 -- # set +x 00:26:23.543 03:26:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:23.543 03:26:57 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:26:23.543 03:26:57 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.dT4 00:26:23.543 03:26:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:23.543 03:26:57 -- common/autotest_common.sh@10 -- # set +x 00:26:23.543 03:26:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:23.543 03:26:57 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:26:23.543 03:26:57 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.Yej 00:26:23.543 03:26:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:23.543 03:26:57 -- common/autotest_common.sh@10 -- # set +x 00:26:23.543 03:26:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:23.543 03:26:57 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:26:23.543 03:26:57 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.zpA 00:26:23.543 03:26:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:23.543 03:26:57 -- common/autotest_common.sh@10 -- # set +x 00:26:23.543 03:26:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:23.543 03:26:57 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:26:23.543 03:26:57 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key4 
/tmp/spdk.key-sha512.ajr 00:26:23.543 03:26:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:23.543 03:26:57 -- common/autotest_common.sh@10 -- # set +x 00:26:23.543 03:26:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:23.543 03:26:57 -- host/auth.sh@92 -- # nvmet_auth_init 00:26:23.543 03:26:57 -- host/auth.sh@35 -- # get_main_ns_ip 00:26:23.543 03:26:57 -- nvmf/common.sh@717 -- # local ip 00:26:23.543 03:26:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:23.543 03:26:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:23.543 03:26:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:23.543 03:26:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:23.543 03:26:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:23.543 03:26:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:23.543 03:26:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:23.543 03:26:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:23.543 03:26:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:23.543 03:26:57 -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:26:23.543 03:26:57 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:26:23.543 03:26:57 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:26:23.543 03:26:57 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:23.543 03:26:57 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:23.543 03:26:57 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:23.543 03:26:57 -- nvmf/common.sh@628 -- # local block nvme 00:26:23.543 03:26:57 -- nvmf/common.sh@630 -- # [[ ! 
-e /sys/module/nvmet ]] 00:26:23.543 03:26:57 -- nvmf/common.sh@631 -- # modprobe nvmet 00:26:23.543 03:26:58 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:23.543 03:26:58 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:24.478 Waiting for block devices as requested 00:26:24.478 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:26:24.736 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:24.736 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:24.995 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:24.995 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:24.995 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:24.995 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:25.254 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:25.254 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:25.254 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:25.512 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:25.512 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:25.512 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:25.512 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:25.770 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:25.770 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:25.770 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:26.337 03:27:00 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:26:26.337 03:27:00 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:26.337 03:27:00 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:26:26.337 03:27:00 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:26:26.337 03:27:00 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:26.337 03:27:00 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:26:26.337 03:27:00 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:26:26.337 03:27:00 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:26:26.338 
03:27:00 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:26.338 No valid GPT data, bailing 00:26:26.338 03:27:00 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:26.338 03:27:00 -- scripts/common.sh@391 -- # pt= 00:26:26.338 03:27:00 -- scripts/common.sh@392 -- # return 1 00:26:26.338 03:27:00 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:26:26.338 03:27:00 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:26:26.338 03:27:00 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:26.338 03:27:00 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:26.338 03:27:00 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:26.338 03:27:00 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:26:26.338 03:27:00 -- nvmf/common.sh@656 -- # echo 1 00:26:26.338 03:27:00 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:26:26.338 03:27:00 -- nvmf/common.sh@658 -- # echo 1 00:26:26.338 03:27:00 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:26:26.338 03:27:00 -- nvmf/common.sh@661 -- # echo tcp 00:26:26.338 03:27:00 -- nvmf/common.sh@662 -- # echo 4420 00:26:26.338 03:27:00 -- nvmf/common.sh@663 -- # echo ipv4 00:26:26.338 03:27:00 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:26.338 03:27:00 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:26:26.338 00:26:26.338 Discovery Log Number of Records 2, Generation counter 2 00:26:26.338 =====Discovery Log Entry 0====== 00:26:26.338 trtype: tcp 00:26:26.338 adrfam: ipv4 00:26:26.338 subtype: current discovery subsystem 00:26:26.338 treq: not specified, sq flow control 
disable supported 00:26:26.338 portid: 1 00:26:26.338 trsvcid: 4420 00:26:26.338 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:26.338 traddr: 10.0.0.1 00:26:26.338 eflags: none 00:26:26.338 sectype: none 00:26:26.338 =====Discovery Log Entry 1====== 00:26:26.338 trtype: tcp 00:26:26.338 adrfam: ipv4 00:26:26.338 subtype: nvme subsystem 00:26:26.338 treq: not specified, sq flow control disable supported 00:26:26.338 portid: 1 00:26:26.338 trsvcid: 4420 00:26:26.338 subnqn: nqn.2024-02.io.spdk:cnode0 00:26:26.338 traddr: 10.0.0.1 00:26:26.338 eflags: none 00:26:26.338 sectype: none 00:26:26.338 03:27:00 -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:26.338 03:27:00 -- host/auth.sh@37 -- # echo 0 00:26:26.338 03:27:00 -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:26.338 03:27:00 -- host/auth.sh@95 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:26.338 03:27:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:26.338 03:27:00 -- host/auth.sh@44 -- # digest=sha256 00:26:26.338 03:27:00 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:26.338 03:27:00 -- host/auth.sh@44 -- # keyid=1 00:26:26.338 03:27:00 -- host/auth.sh@45 -- # key=DHHC-1:00:OGMxMGM0MTFkMWY1NGI5NTc0NDNiODliMThjOWU1N2MzMzViNjJhMTRjNmRjMWFjsMUppw==: 00:26:26.338 03:27:00 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:26:26.338 03:27:00 -- host/auth.sh@48 -- # echo ffdhe2048 00:26:26.338 03:27:00 -- host/auth.sh@49 -- # echo DHHC-1:00:OGMxMGM0MTFkMWY1NGI5NTc0NDNiODliMThjOWU1N2MzMzViNjJhMTRjNmRjMWFjsMUppw==: 00:26:26.338 03:27:00 -- host/auth.sh@100 -- # IFS=, 00:26:26.338 03:27:00 -- host/auth.sh@101 -- # printf %s sha256,sha384,sha512 00:26:26.338 03:27:00 -- host/auth.sh@100 -- # IFS=, 00:26:26.338 03:27:00 -- host/auth.sh@101 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 
00:26:26.338 03:27:00 -- host/auth.sh@100 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:26:26.338 03:27:00 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:26.338 03:27:00 -- host/auth.sh@68 -- # digest=sha256,sha384,sha512 00:26:26.338 03:27:00 -- host/auth.sh@68 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:26.338 03:27:00 -- host/auth.sh@68 -- # keyid=1 00:26:26.338 03:27:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:26.338 03:27:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:26.338 03:27:00 -- common/autotest_common.sh@10 -- # set +x 00:26:26.338 03:27:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:26.338 03:27:00 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:26.338 03:27:00 -- nvmf/common.sh@717 -- # local ip 00:26:26.338 03:27:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:26.338 03:27:00 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:26.338 03:27:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:26.338 03:27:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:26.338 03:27:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:26.338 03:27:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:26.338 03:27:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:26.338 03:27:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:26.338 03:27:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:26.338 03:27:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:26:26.338 03:27:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:26.338 03:27:00 -- common/autotest_common.sh@10 -- # set +x 00:26:26.597 nvme0n1 00:26:26.597 
03:27:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:26.597 03:27:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:26.597 03:27:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:26.597 03:27:00 -- common/autotest_common.sh@10 -- # set +x 00:26:26.597 03:27:00 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:26.597 03:27:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:26.597 03:27:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:26.597 03:27:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:26.597 03:27:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:26.597 03:27:00 -- common/autotest_common.sh@10 -- # set +x 00:26:26.597 03:27:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:26.597 03:27:01 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:26:26.597 03:27:01 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:26:26.597 03:27:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:26.597 03:27:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:26:26.597 03:27:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:26.597 03:27:01 -- host/auth.sh@44 -- # digest=sha256 00:26:26.597 03:27:01 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:26.597 03:27:01 -- host/auth.sh@44 -- # keyid=0 00:26:26.597 03:27:01 -- host/auth.sh@45 -- # key=DHHC-1:00:YzNjNWE1ODdhNmEyYzEwN2Q3MTc3MzAzOWQ1OTkyM2H6n/Cu: 00:26:26.597 03:27:01 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:26:26.597 03:27:01 -- host/auth.sh@48 -- # echo ffdhe2048 00:26:26.597 03:27:01 -- host/auth.sh@49 -- # echo DHHC-1:00:YzNjNWE1ODdhNmEyYzEwN2Q3MTc3MzAzOWQ1OTkyM2H6n/Cu: 00:26:26.597 03:27:01 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 0 00:26:26.597 03:27:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:26.597 03:27:01 -- host/auth.sh@68 -- # digest=sha256 00:26:26.597 03:27:01 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 
00:26:26.597 03:27:01 -- host/auth.sh@68 -- # keyid=0 00:26:26.597 03:27:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:26.597 03:27:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:26.597 03:27:01 -- common/autotest_common.sh@10 -- # set +x 00:26:26.597 03:27:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:26.597 03:27:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:26.597 03:27:01 -- nvmf/common.sh@717 -- # local ip 00:26:26.597 03:27:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:26.597 03:27:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:26.597 03:27:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:26.597 03:27:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:26.597 03:27:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:26.597 03:27:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:26.597 03:27:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:26.597 03:27:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:26.597 03:27:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:26.597 03:27:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:26:26.597 03:27:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:26.597 03:27:01 -- common/autotest_common.sh@10 -- # set +x 00:26:26.856 nvme0n1 00:26:26.856 03:27:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:26.856 03:27:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:26.856 03:27:01 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:26.856 03:27:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:26.856 03:27:01 -- common/autotest_common.sh@10 -- # set +x 00:26:26.856 03:27:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:26.856 03:27:01 -- 
host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:26.856 03:27:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:26.856 03:27:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:26.856 03:27:01 -- common/autotest_common.sh@10 -- # set +x 00:26:26.856 03:27:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:26.856 03:27:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:26.856 03:27:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:26.856 03:27:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:26.856 03:27:01 -- host/auth.sh@44 -- # digest=sha256 00:26:26.856 03:27:01 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:26.856 03:27:01 -- host/auth.sh@44 -- # keyid=1 00:26:26.856 03:27:01 -- host/auth.sh@45 -- # key=DHHC-1:00:OGMxMGM0MTFkMWY1NGI5NTc0NDNiODliMThjOWU1N2MzMzViNjJhMTRjNmRjMWFjsMUppw==: 00:26:26.856 03:27:01 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:26:26.856 03:27:01 -- host/auth.sh@48 -- # echo ffdhe2048 00:26:26.856 03:27:01 -- host/auth.sh@49 -- # echo DHHC-1:00:OGMxMGM0MTFkMWY1NGI5NTc0NDNiODliMThjOWU1N2MzMzViNjJhMTRjNmRjMWFjsMUppw==: 00:26:26.856 03:27:01 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 1 00:26:26.856 03:27:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:26.856 03:27:01 -- host/auth.sh@68 -- # digest=sha256 00:26:26.856 03:27:01 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:26:26.856 03:27:01 -- host/auth.sh@68 -- # keyid=1 00:26:26.856 03:27:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:26.856 03:27:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:26.856 03:27:01 -- common/autotest_common.sh@10 -- # set +x 00:26:26.856 03:27:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:26.856 03:27:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:26.856 03:27:01 -- nvmf/common.sh@717 -- # local ip 00:26:26.856 03:27:01 -- 
nvmf/common.sh@718 -- # ip_candidates=() 00:26:26.856 03:27:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:26.856 03:27:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:26.856 03:27:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:26.856 03:27:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:26.856 03:27:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:26.856 03:27:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:26.856 03:27:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:26.856 03:27:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:26.856 03:27:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:26:26.856 03:27:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:26.856 03:27:01 -- common/autotest_common.sh@10 -- # set +x 00:26:27.114 nvme0n1 00:26:27.114 03:27:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:27.115 03:27:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:27.115 03:27:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:27.115 03:27:01 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:27.115 03:27:01 -- common/autotest_common.sh@10 -- # set +x 00:26:27.115 03:27:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:27.115 03:27:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:27.115 03:27:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:27.115 03:27:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:27.115 03:27:01 -- common/autotest_common.sh@10 -- # set +x 00:26:27.115 03:27:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:27.115 03:27:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:27.115 03:27:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:27.115 03:27:01 -- host/auth.sh@42 
-- # local digest dhgroup keyid key 00:26:27.115 03:27:01 -- host/auth.sh@44 -- # digest=sha256 00:26:27.115 03:27:01 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:27.115 03:27:01 -- host/auth.sh@44 -- # keyid=2 00:26:27.115 03:27:01 -- host/auth.sh@45 -- # key=DHHC-1:01:Njc1MDkxOTg2Y2YxNzliN2ZlNWI3MmU1NjEyMjQ4OTOIsYpe: 00:26:27.115 03:27:01 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:26:27.115 03:27:01 -- host/auth.sh@48 -- # echo ffdhe2048 00:26:27.115 03:27:01 -- host/auth.sh@49 -- # echo DHHC-1:01:Njc1MDkxOTg2Y2YxNzliN2ZlNWI3MmU1NjEyMjQ4OTOIsYpe: 00:26:27.115 03:27:01 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 2 00:26:27.115 03:27:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:27.115 03:27:01 -- host/auth.sh@68 -- # digest=sha256 00:26:27.115 03:27:01 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:26:27.115 03:27:01 -- host/auth.sh@68 -- # keyid=2 00:26:27.115 03:27:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:27.115 03:27:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:27.115 03:27:01 -- common/autotest_common.sh@10 -- # set +x 00:26:27.115 03:27:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:27.115 03:27:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:27.115 03:27:01 -- nvmf/common.sh@717 -- # local ip 00:26:27.115 03:27:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:27.115 03:27:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:27.115 03:27:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:27.115 03:27:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:27.115 03:27:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:27.115 03:27:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:27.115 03:27:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:27.115 03:27:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:27.115 03:27:01 -- nvmf/common.sh@731 
-- # echo 10.0.0.1 00:26:27.115 03:27:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:27.115 03:27:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:27.115 03:27:01 -- common/autotest_common.sh@10 -- # set +x 00:26:27.373 nvme0n1 00:26:27.373 03:27:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:27.373 03:27:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:27.373 03:27:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:27.373 03:27:01 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:27.373 03:27:01 -- common/autotest_common.sh@10 -- # set +x 00:26:27.373 03:27:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:27.373 03:27:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:27.373 03:27:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:27.373 03:27:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:27.373 03:27:01 -- common/autotest_common.sh@10 -- # set +x 00:26:27.373 03:27:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:27.373 03:27:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:27.373 03:27:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:26:27.373 03:27:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:27.373 03:27:01 -- host/auth.sh@44 -- # digest=sha256 00:26:27.373 03:27:01 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:27.373 03:27:01 -- host/auth.sh@44 -- # keyid=3 00:26:27.373 03:27:01 -- host/auth.sh@45 -- # key=DHHC-1:02:ODQ3ZDUzYjFiYmEwNmEyOGExYjBkMzIyZTE1ZTdiNDAwOGFhODEwOTQyMDNlY2Ri2DqywQ==: 00:26:27.373 03:27:01 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:26:27.373 03:27:01 -- host/auth.sh@48 -- # echo ffdhe2048 00:26:27.373 03:27:01 -- host/auth.sh@49 -- # echo DHHC-1:02:ODQ3ZDUzYjFiYmEwNmEyOGExYjBkMzIyZTE1ZTdiNDAwOGFhODEwOTQyMDNlY2Ri2DqywQ==: 
00:26:27.373 03:27:01 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 3 00:26:27.373 03:27:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:27.373 03:27:01 -- host/auth.sh@68 -- # digest=sha256 00:26:27.373 03:27:01 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:26:27.373 03:27:01 -- host/auth.sh@68 -- # keyid=3 00:26:27.373 03:27:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:27.373 03:27:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:27.373 03:27:01 -- common/autotest_common.sh@10 -- # set +x 00:26:27.373 03:27:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:27.373 03:27:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:27.374 03:27:01 -- nvmf/common.sh@717 -- # local ip 00:26:27.374 03:27:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:27.374 03:27:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:27.374 03:27:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:27.374 03:27:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:27.374 03:27:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:27.374 03:27:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:27.374 03:27:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:27.374 03:27:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:27.374 03:27:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:27.374 03:27:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:26:27.374 03:27:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:27.374 03:27:01 -- common/autotest_common.sh@10 -- # set +x 00:26:27.374 nvme0n1 00:26:27.374 03:27:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:27.374 03:27:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:27.374 03:27:01 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:26:27.374 03:27:01 -- common/autotest_common.sh@10 -- # set +x 00:26:27.374 03:27:01 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:27.374 03:27:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:27.374 03:27:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:27.374 03:27:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:27.374 03:27:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:27.374 03:27:01 -- common/autotest_common.sh@10 -- # set +x 00:26:27.632 03:27:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:27.632 03:27:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:27.632 03:27:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:26:27.633 03:27:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:27.633 03:27:01 -- host/auth.sh@44 -- # digest=sha256 00:26:27.633 03:27:01 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:27.633 03:27:01 -- host/auth.sh@44 -- # keyid=4 00:26:27.633 03:27:01 -- host/auth.sh@45 -- # key=DHHC-1:03:ODI0M2RhMDM4Y2QxYzRlYzI4NjI4MzVjZDA5OWJlOGFiYTI0OTc3MjlhZmJmNjdhYmM2ZTgzMGY2MmQ2ZDJkMi4LGic=: 00:26:27.633 03:27:01 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:26:27.633 03:27:01 -- host/auth.sh@48 -- # echo ffdhe2048 00:26:27.633 03:27:01 -- host/auth.sh@49 -- # echo DHHC-1:03:ODI0M2RhMDM4Y2QxYzRlYzI4NjI4MzVjZDA5OWJlOGFiYTI0OTc3MjlhZmJmNjdhYmM2ZTgzMGY2MmQ2ZDJkMi4LGic=: 00:26:27.633 03:27:01 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 4 00:26:27.633 03:27:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:27.633 03:27:01 -- host/auth.sh@68 -- # digest=sha256 00:26:27.633 03:27:01 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:26:27.633 03:27:01 -- host/auth.sh@68 -- # keyid=4 00:26:27.633 03:27:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:27.633 03:27:01 -- common/autotest_common.sh@549 -- 
# xtrace_disable 00:26:27.633 03:27:01 -- common/autotest_common.sh@10 -- # set +x 00:26:27.633 03:27:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:27.633 03:27:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:27.633 03:27:01 -- nvmf/common.sh@717 -- # local ip 00:26:27.633 03:27:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:27.633 03:27:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:27.633 03:27:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:27.633 03:27:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:27.633 03:27:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:27.633 03:27:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:27.633 03:27:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:27.633 03:27:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:27.633 03:27:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:27.633 03:27:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:27.633 03:27:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:27.633 03:27:01 -- common/autotest_common.sh@10 -- # set +x 00:26:27.633 nvme0n1 00:26:27.633 03:27:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:27.633 03:27:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:27.633 03:27:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:27.633 03:27:02 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:27.633 03:27:02 -- common/autotest_common.sh@10 -- # set +x 00:26:27.633 03:27:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:27.633 03:27:02 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:27.633 03:27:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:27.633 03:27:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:27.633 03:27:02 -- 
common/autotest_common.sh@10 -- # set +x 00:26:27.633 03:27:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:27.633 03:27:02 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:26:27.633 03:27:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:27.633 03:27:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:26:27.633 03:27:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:27.633 03:27:02 -- host/auth.sh@44 -- # digest=sha256 00:26:27.633 03:27:02 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:27.633 03:27:02 -- host/auth.sh@44 -- # keyid=0 00:26:27.633 03:27:02 -- host/auth.sh@45 -- # key=DHHC-1:00:YzNjNWE1ODdhNmEyYzEwN2Q3MTc3MzAzOWQ1OTkyM2H6n/Cu: 00:26:27.633 03:27:02 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:26:27.633 03:27:02 -- host/auth.sh@48 -- # echo ffdhe3072 00:26:27.633 03:27:02 -- host/auth.sh@49 -- # echo DHHC-1:00:YzNjNWE1ODdhNmEyYzEwN2Q3MTc3MzAzOWQ1OTkyM2H6n/Cu: 00:26:27.633 03:27:02 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 0 00:26:27.633 03:27:02 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:27.633 03:27:02 -- host/auth.sh@68 -- # digest=sha256 00:26:27.633 03:27:02 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:26:27.633 03:27:02 -- host/auth.sh@68 -- # keyid=0 00:26:27.633 03:27:02 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:27.633 03:27:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:27.633 03:27:02 -- common/autotest_common.sh@10 -- # set +x 00:26:27.633 03:27:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:27.633 03:27:02 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:27.633 03:27:02 -- nvmf/common.sh@717 -- # local ip 00:26:27.633 03:27:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:27.633 03:27:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:27.633 03:27:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:27.633 
03:27:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:27.633 03:27:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:27.633 03:27:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:27.633 03:27:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:27.633 03:27:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:27.633 03:27:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:27.633 03:27:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:26:27.633 03:27:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:27.633 03:27:02 -- common/autotest_common.sh@10 -- # set +x 00:26:27.891 nvme0n1 00:26:27.891 03:27:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:27.891 03:27:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:27.891 03:27:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:27.891 03:27:02 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:27.891 03:27:02 -- common/autotest_common.sh@10 -- # set +x 00:26:27.891 03:27:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:27.891 03:27:02 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:27.891 03:27:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:27.891 03:27:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:27.891 03:27:02 -- common/autotest_common.sh@10 -- # set +x 00:26:27.891 03:27:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:27.891 03:27:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:27.891 03:27:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:26:27.891 03:27:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:27.891 03:27:02 -- host/auth.sh@44 -- # digest=sha256 00:26:27.891 03:27:02 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:27.891 03:27:02 -- host/auth.sh@44 -- # keyid=1 
00:26:27.891 03:27:02 -- host/auth.sh@45 -- # key=DHHC-1:00:OGMxMGM0MTFkMWY1NGI5NTc0NDNiODliMThjOWU1N2MzMzViNjJhMTRjNmRjMWFjsMUppw==: 00:26:27.891 03:27:02 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:26:27.891 03:27:02 -- host/auth.sh@48 -- # echo ffdhe3072 00:26:27.891 03:27:02 -- host/auth.sh@49 -- # echo DHHC-1:00:OGMxMGM0MTFkMWY1NGI5NTc0NDNiODliMThjOWU1N2MzMzViNjJhMTRjNmRjMWFjsMUppw==: 00:26:27.891 03:27:02 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 1 00:26:27.891 03:27:02 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:27.891 03:27:02 -- host/auth.sh@68 -- # digest=sha256 00:26:27.891 03:27:02 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:26:27.891 03:27:02 -- host/auth.sh@68 -- # keyid=1 00:26:27.891 03:27:02 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:27.891 03:27:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:27.891 03:27:02 -- common/autotest_common.sh@10 -- # set +x 00:26:27.891 03:27:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:27.891 03:27:02 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:27.891 03:27:02 -- nvmf/common.sh@717 -- # local ip 00:26:27.891 03:27:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:27.891 03:27:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:27.891 03:27:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:27.891 03:27:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:27.891 03:27:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:27.891 03:27:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:27.892 03:27:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:27.892 03:27:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:27.892 03:27:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:27.892 03:27:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:26:27.892 03:27:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:27.892 03:27:02 -- common/autotest_common.sh@10 -- # set +x 00:26:28.150 nvme0n1 00:26:28.150 03:27:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:28.150 03:27:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:28.150 03:27:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:28.150 03:27:02 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:28.150 03:27:02 -- common/autotest_common.sh@10 -- # set +x 00:26:28.150 03:27:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:28.150 03:27:02 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:28.150 03:27:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:28.150 03:27:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:28.150 03:27:02 -- common/autotest_common.sh@10 -- # set +x 00:26:28.150 03:27:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:28.150 03:27:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:28.150 03:27:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:26:28.150 03:27:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:28.150 03:27:02 -- host/auth.sh@44 -- # digest=sha256 00:26:28.150 03:27:02 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:28.150 03:27:02 -- host/auth.sh@44 -- # keyid=2 00:26:28.150 03:27:02 -- host/auth.sh@45 -- # key=DHHC-1:01:Njc1MDkxOTg2Y2YxNzliN2ZlNWI3MmU1NjEyMjQ4OTOIsYpe: 00:26:28.150 03:27:02 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:26:28.150 03:27:02 -- host/auth.sh@48 -- # echo ffdhe3072 00:26:28.150 03:27:02 -- host/auth.sh@49 -- # echo DHHC-1:01:Njc1MDkxOTg2Y2YxNzliN2ZlNWI3MmU1NjEyMjQ4OTOIsYpe: 00:26:28.150 03:27:02 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 2 00:26:28.150 03:27:02 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:28.150 03:27:02 -- 
host/auth.sh@68 -- # digest=sha256 00:26:28.150 03:27:02 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:26:28.150 03:27:02 -- host/auth.sh@68 -- # keyid=2 00:26:28.150 03:27:02 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:28.150 03:27:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:28.150 03:27:02 -- common/autotest_common.sh@10 -- # set +x 00:26:28.150 03:27:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:28.150 03:27:02 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:28.150 03:27:02 -- nvmf/common.sh@717 -- # local ip 00:26:28.150 03:27:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:28.150 03:27:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:28.150 03:27:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:28.150 03:27:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:28.150 03:27:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:28.150 03:27:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:28.150 03:27:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:28.150 03:27:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:28.150 03:27:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:28.150 03:27:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:28.150 03:27:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:28.150 03:27:02 -- common/autotest_common.sh@10 -- # set +x 00:26:28.408 nvme0n1 00:26:28.408 03:27:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:28.408 03:27:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:28.408 03:27:02 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:28.408 03:27:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:28.408 03:27:02 -- common/autotest_common.sh@10 -- # set +x 
00:26:28.409 03:27:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:28.409 03:27:02 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:28.409 03:27:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:28.409 03:27:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:28.409 03:27:02 -- common/autotest_common.sh@10 -- # set +x 00:26:28.409 03:27:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:28.409 03:27:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:28.409 03:27:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:26:28.409 03:27:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:28.409 03:27:02 -- host/auth.sh@44 -- # digest=sha256 00:26:28.409 03:27:02 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:28.409 03:27:02 -- host/auth.sh@44 -- # keyid=3 00:26:28.409 03:27:02 -- host/auth.sh@45 -- # key=DHHC-1:02:ODQ3ZDUzYjFiYmEwNmEyOGExYjBkMzIyZTE1ZTdiNDAwOGFhODEwOTQyMDNlY2Ri2DqywQ==: 00:26:28.409 03:27:02 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:26:28.409 03:27:02 -- host/auth.sh@48 -- # echo ffdhe3072 00:26:28.409 03:27:02 -- host/auth.sh@49 -- # echo DHHC-1:02:ODQ3ZDUzYjFiYmEwNmEyOGExYjBkMzIyZTE1ZTdiNDAwOGFhODEwOTQyMDNlY2Ri2DqywQ==: 00:26:28.409 03:27:02 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 3 00:26:28.409 03:27:02 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:28.409 03:27:02 -- host/auth.sh@68 -- # digest=sha256 00:26:28.409 03:27:02 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:26:28.409 03:27:02 -- host/auth.sh@68 -- # keyid=3 00:26:28.409 03:27:02 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:28.409 03:27:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:28.409 03:27:02 -- common/autotest_common.sh@10 -- # set +x 00:26:28.409 03:27:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:28.409 03:27:02 -- host/auth.sh@70 -- # get_main_ns_ip 
00:26:28.409 03:27:02 -- nvmf/common.sh@717 -- # local ip 00:26:28.409 03:27:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:28.409 03:27:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:28.409 03:27:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:28.409 03:27:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:28.409 03:27:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:28.409 03:27:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:28.409 03:27:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:28.409 03:27:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:28.409 03:27:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:28.409 03:27:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:26:28.409 03:27:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:28.409 03:27:02 -- common/autotest_common.sh@10 -- # set +x 00:26:28.670 nvme0n1 00:26:28.670 03:27:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:28.670 03:27:03 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:28.670 03:27:03 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:28.670 03:27:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:28.670 03:27:03 -- common/autotest_common.sh@10 -- # set +x 00:26:28.670 03:27:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:28.670 03:27:03 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:28.670 03:27:03 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:28.670 03:27:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:28.670 03:27:03 -- common/autotest_common.sh@10 -- # set +x 00:26:28.670 03:27:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:28.670 03:27:03 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:28.670 03:27:03 -- host/auth.sh@110 
-- # nvmet_auth_set_key sha256 ffdhe3072 4 00:26:28.670 03:27:03 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:28.670 03:27:03 -- host/auth.sh@44 -- # digest=sha256 00:26:28.670 03:27:03 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:28.670 03:27:03 -- host/auth.sh@44 -- # keyid=4 00:26:28.670 03:27:03 -- host/auth.sh@45 -- # key=DHHC-1:03:ODI0M2RhMDM4Y2QxYzRlYzI4NjI4MzVjZDA5OWJlOGFiYTI0OTc3MjlhZmJmNjdhYmM2ZTgzMGY2MmQ2ZDJkMi4LGic=: 00:26:28.670 03:27:03 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:26:28.670 03:27:03 -- host/auth.sh@48 -- # echo ffdhe3072 00:26:28.670 03:27:03 -- host/auth.sh@49 -- # echo DHHC-1:03:ODI0M2RhMDM4Y2QxYzRlYzI4NjI4MzVjZDA5OWJlOGFiYTI0OTc3MjlhZmJmNjdhYmM2ZTgzMGY2MmQ2ZDJkMi4LGic=: 00:26:28.670 03:27:03 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 4 00:26:28.670 03:27:03 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:28.670 03:27:03 -- host/auth.sh@68 -- # digest=sha256 00:26:28.670 03:27:03 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:26:28.670 03:27:03 -- host/auth.sh@68 -- # keyid=4 00:26:28.670 03:27:03 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:28.670 03:27:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:28.670 03:27:03 -- common/autotest_common.sh@10 -- # set +x 00:26:28.670 03:27:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:28.670 03:27:03 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:28.670 03:27:03 -- nvmf/common.sh@717 -- # local ip 00:26:28.670 03:27:03 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:28.670 03:27:03 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:28.670 03:27:03 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:28.670 03:27:03 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:28.670 03:27:03 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:28.670 03:27:03 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 
00:26:28.670 03:27:03 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:28.670 03:27:03 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:28.670 03:27:03 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:28.670 03:27:03 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:28.670 03:27:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:28.670 03:27:03 -- common/autotest_common.sh@10 -- # set +x 00:26:28.926 nvme0n1 00:26:28.926 03:27:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:28.926 03:27:03 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:28.926 03:27:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:28.926 03:27:03 -- common/autotest_common.sh@10 -- # set +x 00:26:28.926 03:27:03 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:28.926 03:27:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:28.926 03:27:03 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:28.926 03:27:03 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:28.926 03:27:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:28.926 03:27:03 -- common/autotest_common.sh@10 -- # set +x 00:26:28.926 03:27:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:28.926 03:27:03 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:26:28.926 03:27:03 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:28.926 03:27:03 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:26:28.926 03:27:03 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:28.926 03:27:03 -- host/auth.sh@44 -- # digest=sha256 00:26:28.926 03:27:03 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:28.926 03:27:03 -- host/auth.sh@44 -- # keyid=0 00:26:28.926 03:27:03 -- host/auth.sh@45 -- # key=DHHC-1:00:YzNjNWE1ODdhNmEyYzEwN2Q3MTc3MzAzOWQ1OTkyM2H6n/Cu: 00:26:28.926 03:27:03 -- 
host/auth.sh@47 -- # echo 'hmac(sha256)'
00:26:28.926 03:27:03 -- host/auth.sh@48 -- # echo ffdhe4096
00:26:28.926 03:27:03 -- host/auth.sh@49 -- # echo DHHC-1:00:YzNjNWE1ODdhNmEyYzEwN2Q3MTc3MzAzOWQ1OTkyM2H6n/Cu:
00:26:28.926 03:27:03 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 0
00:26:28.926 03:27:03 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:26:28.926 03:27:03 -- host/auth.sh@68 -- # digest=sha256
00:26:28.926 03:27:03 -- host/auth.sh@68 -- # dhgroup=ffdhe4096
00:26:28.926 03:27:03 -- host/auth.sh@68 -- # keyid=0
00:26:28.926 03:27:03 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:26:28.926 03:27:03 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:28.926 03:27:03 -- common/autotest_common.sh@10 -- # set +x
00:26:28.926 03:27:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:28.926 03:27:03 -- host/auth.sh@70 -- # get_main_ns_ip
00:26:28.926 03:27:03 -- nvmf/common.sh@717 -- # local ip
00:26:28.926 03:27:03 -- nvmf/common.sh@718 -- # ip_candidates=()
00:26:28.927 03:27:03 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:26:28.927 03:27:03 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:28.927 03:27:03 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:28.927 03:27:03 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:26:28.927 03:27:03 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:28.927 03:27:03 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:26:28.927 03:27:03 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:26:28.927 03:27:03 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:26:28.927 03:27:03 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0
00:26:28.927 03:27:03 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:28.927 03:27:03 -- common/autotest_common.sh@10 -- # set +x
00:26:29.492 nvme0n1
00:26:29.492 03:27:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:29.492 03:27:03 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:26:29.492 03:27:03 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:29.492 03:27:03 -- common/autotest_common.sh@10 -- # set +x
00:26:29.492 03:27:03 -- host/auth.sh@73 -- # jq -r '.[].name'
00:26:29.492 03:27:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:29.492 03:27:03 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:29.492 03:27:03 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:29.492 03:27:03 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:29.492 03:27:03 -- common/autotest_common.sh@10 -- # set +x
00:26:29.492 03:27:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:29.492 03:27:03 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:26:29.492 03:27:03 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 1
00:26:29.492 03:27:03 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:26:29.492 03:27:03 -- host/auth.sh@44 -- # digest=sha256
00:26:29.492 03:27:03 -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:26:29.492 03:27:03 -- host/auth.sh@44 -- # keyid=1
00:26:29.492 03:27:03 -- host/auth.sh@45 -- # key=DHHC-1:00:OGMxMGM0MTFkMWY1NGI5NTc0NDNiODliMThjOWU1N2MzMzViNjJhMTRjNmRjMWFjsMUppw==:
00:26:29.492 03:27:03 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:26:29.492 03:27:03 -- host/auth.sh@48 -- # echo ffdhe4096
00:26:29.492 03:27:03 -- host/auth.sh@49 -- # echo DHHC-1:00:OGMxMGM0MTFkMWY1NGI5NTc0NDNiODliMThjOWU1N2MzMzViNjJhMTRjNmRjMWFjsMUppw==:
00:26:29.492 03:27:03 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 1
00:26:29.492 03:27:03 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:26:29.492 03:27:03 -- host/auth.sh@68 -- # digest=sha256
00:26:29.492 03:27:03 -- host/auth.sh@68 -- # dhgroup=ffdhe4096
00:26:29.492 03:27:03 -- host/auth.sh@68 -- # keyid=1
00:26:29.492 03:27:03 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:26:29.492 03:27:03 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:29.492 03:27:03 -- common/autotest_common.sh@10 -- # set +x
00:26:29.492 03:27:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:29.492 03:27:03 -- host/auth.sh@70 -- # get_main_ns_ip
00:26:29.492 03:27:03 -- nvmf/common.sh@717 -- # local ip
00:26:29.492 03:27:03 -- nvmf/common.sh@718 -- # ip_candidates=()
00:26:29.492 03:27:03 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:26:29.492 03:27:03 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:29.492 03:27:03 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:29.492 03:27:03 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:26:29.492 03:27:03 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:29.492 03:27:03 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:26:29.492 03:27:03 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:26:29.492 03:27:03 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:26:29.492 03:27:03 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1
00:26:29.492 03:27:03 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:29.492 03:27:03 -- common/autotest_common.sh@10 -- # set +x
00:26:29.750 nvme0n1
00:26:29.750 03:27:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:29.750 03:27:04 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:26:29.750 03:27:04 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:29.750 03:27:04 -- host/auth.sh@73 -- # jq -r '.[].name'
00:26:29.750 03:27:04 -- common/autotest_common.sh@10 -- # set +x
00:26:29.750 03:27:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:29.750 03:27:04 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:29.750 03:27:04 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:29.750 03:27:04 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:29.750 03:27:04 -- common/autotest_common.sh@10 -- # set +x
00:26:29.750 03:27:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:29.750 03:27:04 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:26:29.750 03:27:04 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 2
00:26:29.750 03:27:04 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:26:29.750 03:27:04 -- host/auth.sh@44 -- # digest=sha256
00:26:29.750 03:27:04 -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:26:29.750 03:27:04 -- host/auth.sh@44 -- # keyid=2
00:26:29.750 03:27:04 -- host/auth.sh@45 -- # key=DHHC-1:01:Njc1MDkxOTg2Y2YxNzliN2ZlNWI3MmU1NjEyMjQ4OTOIsYpe:
00:26:29.750 03:27:04 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:26:29.750 03:27:04 -- host/auth.sh@48 -- # echo ffdhe4096
00:26:29.750 03:27:04 -- host/auth.sh@49 -- # echo DHHC-1:01:Njc1MDkxOTg2Y2YxNzliN2ZlNWI3MmU1NjEyMjQ4OTOIsYpe:
00:26:29.750 03:27:04 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 2
00:26:29.750 03:27:04 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:26:29.750 03:27:04 -- host/auth.sh@68 -- # digest=sha256
00:26:29.750 03:27:04 -- host/auth.sh@68 -- # dhgroup=ffdhe4096
00:26:29.750 03:27:04 -- host/auth.sh@68 -- # keyid=2
00:26:29.750 03:27:04 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:26:29.750 03:27:04 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:29.750 03:27:04 -- common/autotest_common.sh@10 -- # set +x
00:26:29.750 03:27:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:29.750 03:27:04 -- host/auth.sh@70 -- # get_main_ns_ip
00:26:29.750 03:27:04 -- nvmf/common.sh@717 -- # local ip
00:26:29.750 03:27:04 -- nvmf/common.sh@718 -- # ip_candidates=()
00:26:29.750 03:27:04 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:26:29.750 03:27:04 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:29.750 03:27:04 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:29.750 03:27:04 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:26:29.750 03:27:04 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:29.750 03:27:04 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:26:29.750 03:27:04 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:26:29.750 03:27:04 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:26:29.750 03:27:04 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:26:29.750 03:27:04 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:29.750 03:27:04 -- common/autotest_common.sh@10 -- # set +x
00:26:30.010 nvme0n1
00:26:30.010 03:27:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:30.010 03:27:04 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:26:30.010 03:27:04 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:30.010 03:27:04 -- host/auth.sh@73 -- # jq -r '.[].name'
00:26:30.010 03:27:04 -- common/autotest_common.sh@10 -- # set +x
00:26:30.010 03:27:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:30.010 03:27:04 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:30.010 03:27:04 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:30.010 03:27:04 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:30.010 03:27:04 -- common/autotest_common.sh@10 -- # set +x
00:26:30.010 03:27:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:30.010 03:27:04 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:26:30.010 03:27:04 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 3
00:26:30.010 03:27:04 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:26:30.010 03:27:04 -- host/auth.sh@44 -- # digest=sha256
00:26:30.010 03:27:04 -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:26:30.010 03:27:04 -- host/auth.sh@44 -- # keyid=3
00:26:30.010 03:27:04 -- host/auth.sh@45 -- # key=DHHC-1:02:ODQ3ZDUzYjFiYmEwNmEyOGExYjBkMzIyZTE1ZTdiNDAwOGFhODEwOTQyMDNlY2Ri2DqywQ==:
00:26:30.010 03:27:04 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:26:30.010 03:27:04 -- host/auth.sh@48 -- # echo ffdhe4096
00:26:30.010 03:27:04 -- host/auth.sh@49 -- # echo DHHC-1:02:ODQ3ZDUzYjFiYmEwNmEyOGExYjBkMzIyZTE1ZTdiNDAwOGFhODEwOTQyMDNlY2Ri2DqywQ==:
00:26:30.010 03:27:04 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 3
00:26:30.010 03:27:04 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:26:30.010 03:27:04 -- host/auth.sh@68 -- # digest=sha256
00:26:30.010 03:27:04 -- host/auth.sh@68 -- # dhgroup=ffdhe4096
00:26:30.010 03:27:04 -- host/auth.sh@68 -- # keyid=3
00:26:30.010 03:27:04 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:26:30.010 03:27:04 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:30.010 03:27:04 -- common/autotest_common.sh@10 -- # set +x
00:26:30.010 03:27:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:30.010 03:27:04 -- host/auth.sh@70 -- # get_main_ns_ip
00:26:30.010 03:27:04 -- nvmf/common.sh@717 -- # local ip
00:26:30.010 03:27:04 -- nvmf/common.sh@718 -- # ip_candidates=()
00:26:30.010 03:27:04 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:26:30.010 03:27:04 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:30.010 03:27:04 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:30.010 03:27:04 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:26:30.010 03:27:04 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:30.010 03:27:04 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:26:30.010 03:27:04 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:26:30.010 03:27:04 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:26:30.010 03:27:04 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3
00:26:30.010 03:27:04 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:30.010 03:27:04 -- common/autotest_common.sh@10 -- # set +x
00:26:30.575 nvme0n1
00:26:30.575 03:27:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:30.575 03:27:04 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:26:30.575 03:27:04 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:30.575 03:27:04 -- common/autotest_common.sh@10 -- # set +x
00:26:30.575 03:27:04 -- host/auth.sh@73 -- # jq -r '.[].name'
00:26:30.575 03:27:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:30.575 03:27:04 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:30.575 03:27:04 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:30.575 03:27:04 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:30.575 03:27:04 -- common/autotest_common.sh@10 -- # set +x
00:26:30.575 03:27:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:30.575 03:27:04 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:26:30.575 03:27:04 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 4
00:26:30.575 03:27:04 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:26:30.575 03:27:04 -- host/auth.sh@44 -- # digest=sha256
00:26:30.575 03:27:04 -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:26:30.575 03:27:04 -- host/auth.sh@44 -- # keyid=4
00:26:30.575 03:27:04 -- host/auth.sh@45 -- # key=DHHC-1:03:ODI0M2RhMDM4Y2QxYzRlYzI4NjI4MzVjZDA5OWJlOGFiYTI0OTc3MjlhZmJmNjdhYmM2ZTgzMGY2MmQ2ZDJkMi4LGic=:
00:26:30.575 03:27:04 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:26:30.575 03:27:04 -- host/auth.sh@48 -- # echo ffdhe4096
00:26:30.575 03:27:04 -- host/auth.sh@49 -- # echo DHHC-1:03:ODI0M2RhMDM4Y2QxYzRlYzI4NjI4MzVjZDA5OWJlOGFiYTI0OTc3MjlhZmJmNjdhYmM2ZTgzMGY2MmQ2ZDJkMi4LGic=:
00:26:30.575 03:27:04 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 4
00:26:30.575 03:27:04 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:26:30.575 03:27:04 -- host/auth.sh@68 -- # digest=sha256
00:26:30.575 03:27:04 -- host/auth.sh@68 -- # dhgroup=ffdhe4096
00:26:30.575 03:27:04 -- host/auth.sh@68 -- # keyid=4
00:26:30.575 03:27:04 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:26:30.575 03:27:04 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:30.575 03:27:04 -- common/autotest_common.sh@10 -- # set +x
00:26:30.575 03:27:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:30.575 03:27:04 -- host/auth.sh@70 -- # get_main_ns_ip
00:26:30.575 03:27:04 -- nvmf/common.sh@717 -- # local ip
00:26:30.575 03:27:04 -- nvmf/common.sh@718 -- # ip_candidates=()
00:26:30.575 03:27:04 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:26:30.575 03:27:04 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:30.575 03:27:04 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:30.575 03:27:04 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:26:30.575 03:27:04 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:30.575 03:27:04 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:26:30.575 03:27:04 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:26:30.575 03:27:04 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:26:30.575 03:27:04 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:26:30.575 03:27:04 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:30.575 03:27:04 -- common/autotest_common.sh@10 -- # set +x
00:26:30.833 nvme0n1
00:26:30.833 03:27:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:30.833 03:27:05 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:26:30.833 03:27:05 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:30.833 03:27:05 -- host/auth.sh@73 -- # jq -r '.[].name'
00:26:30.833 03:27:05 -- common/autotest_common.sh@10 -- # set +x
00:26:30.833 03:27:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:30.833 03:27:05 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:30.833 03:27:05 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:30.833 03:27:05 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:30.833 03:27:05 -- common/autotest_common.sh@10 -- # set +x
00:26:30.833 03:27:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:30.833 03:27:05 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}"
00:26:30.833 03:27:05 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:26:30.833 03:27:05 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 0
00:26:30.833 03:27:05 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:26:30.833 03:27:05 -- host/auth.sh@44 -- # digest=sha256
00:26:30.833 03:27:05 -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:26:30.833 03:27:05 -- host/auth.sh@44 -- # keyid=0
00:26:30.833 03:27:05 -- host/auth.sh@45 -- # key=DHHC-1:00:YzNjNWE1ODdhNmEyYzEwN2Q3MTc3MzAzOWQ1OTkyM2H6n/Cu:
00:26:30.833 03:27:05 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:26:30.833 03:27:05 -- host/auth.sh@48 -- # echo ffdhe6144
00:26:30.833 03:27:05 -- host/auth.sh@49 -- # echo DHHC-1:00:YzNjNWE1ODdhNmEyYzEwN2Q3MTc3MzAzOWQ1OTkyM2H6n/Cu:
00:26:30.833 03:27:05 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 0
00:26:30.833 03:27:05 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:26:30.833 03:27:05 -- host/auth.sh@68 -- # digest=sha256
00:26:30.833 03:27:05 -- host/auth.sh@68 -- # dhgroup=ffdhe6144
00:26:30.833 03:27:05 -- host/auth.sh@68 -- # keyid=0
00:26:30.833 03:27:05 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:26:30.833 03:27:05 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:30.833 03:27:05 -- common/autotest_common.sh@10 -- # set +x
00:26:30.833 03:27:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:30.833 03:27:05 -- host/auth.sh@70 -- # get_main_ns_ip
00:26:30.833 03:27:05 -- nvmf/common.sh@717 -- # local ip
00:26:30.833 03:27:05 -- nvmf/common.sh@718 -- # ip_candidates=()
00:26:30.833 03:27:05 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:26:30.833 03:27:05 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:30.833 03:27:05 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:30.833 03:27:05 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:26:30.833 03:27:05 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:30.833 03:27:05 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:26:30.833 03:27:05 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:26:30.833 03:27:05 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:26:30.833 03:27:05 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0
00:26:30.833 03:27:05 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:30.833 03:27:05 -- common/autotest_common.sh@10 -- # set +x
00:26:31.401 nvme0n1
00:26:31.401 03:27:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:31.401 03:27:05 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:26:31.401 03:27:05 -- host/auth.sh@73 -- # jq -r '.[].name'
00:26:31.401 03:27:05 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:31.401 03:27:05 -- common/autotest_common.sh@10 -- # set +x
00:26:31.401 03:27:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:31.401 03:27:05 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:31.401 03:27:05 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:31.401 03:27:05 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:31.401 03:27:05 -- common/autotest_common.sh@10 -- # set +x
00:26:31.401 03:27:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:31.401 03:27:05 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:26:31.401 03:27:05 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 1
00:26:31.401 03:27:05 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:26:31.401 03:27:05 -- host/auth.sh@44 -- # digest=sha256
00:26:31.401 03:27:05 -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:26:31.401 03:27:05 -- host/auth.sh@44 -- # keyid=1
00:26:31.401 03:27:05 -- host/auth.sh@45 -- # key=DHHC-1:00:OGMxMGM0MTFkMWY1NGI5NTc0NDNiODliMThjOWU1N2MzMzViNjJhMTRjNmRjMWFjsMUppw==:
00:26:31.401 03:27:05 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:26:31.401 03:27:05 -- host/auth.sh@48 -- # echo ffdhe6144
00:26:31.401 03:27:05 -- host/auth.sh@49 -- # echo DHHC-1:00:OGMxMGM0MTFkMWY1NGI5NTc0NDNiODliMThjOWU1N2MzMzViNjJhMTRjNmRjMWFjsMUppw==:
00:26:31.401 03:27:05 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 1
00:26:31.401 03:27:05 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:26:31.401 03:27:05 -- host/auth.sh@68 -- # digest=sha256
00:26:31.401 03:27:05 -- host/auth.sh@68 -- # dhgroup=ffdhe6144
00:26:31.401 03:27:05 -- host/auth.sh@68 -- # keyid=1
00:26:31.401 03:27:05 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:26:31.401 03:27:05 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:31.401 03:27:05 -- common/autotest_common.sh@10 -- # set +x
00:26:31.401 03:27:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:31.401 03:27:05 -- host/auth.sh@70 -- # get_main_ns_ip
00:26:31.401 03:27:05 -- nvmf/common.sh@717 -- # local ip
00:26:31.401 03:27:05 -- nvmf/common.sh@718 -- # ip_candidates=()
00:26:31.401 03:27:05 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:26:31.401 03:27:05 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:31.401 03:27:05 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:31.401 03:27:05 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:26:31.401 03:27:05 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:31.401 03:27:05 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:26:31.401 03:27:05 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:26:31.401 03:27:05 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:26:31.401 03:27:05 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1
00:26:31.401 03:27:05 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:31.401 03:27:05 -- common/autotest_common.sh@10 -- # set +x
00:26:31.968 nvme0n1
00:26:31.968 03:27:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:31.968 03:27:06 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:26:31.968 03:27:06 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:31.968 03:27:06 -- host/auth.sh@73 -- # jq -r '.[].name'
00:26:31.968 03:27:06 -- common/autotest_common.sh@10 -- # set +x
00:26:31.968 03:27:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:31.968 03:27:06 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:31.968 03:27:06 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:31.968 03:27:06 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:31.968 03:27:06 -- common/autotest_common.sh@10 -- # set +x
00:26:31.968 03:27:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:31.968 03:27:06 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:26:31.968 03:27:06 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 2
00:26:31.968 03:27:06 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:26:31.968 03:27:06 -- host/auth.sh@44 -- # digest=sha256
00:26:31.968 03:27:06 -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:26:31.968 03:27:06 -- host/auth.sh@44 -- # keyid=2
00:26:31.968 03:27:06 -- host/auth.sh@45 -- # key=DHHC-1:01:Njc1MDkxOTg2Y2YxNzliN2ZlNWI3MmU1NjEyMjQ4OTOIsYpe:
00:26:31.968 03:27:06 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:26:31.968 03:27:06 -- host/auth.sh@48 -- # echo ffdhe6144
00:26:31.968 03:27:06 -- host/auth.sh@49 -- # echo DHHC-1:01:Njc1MDkxOTg2Y2YxNzliN2ZlNWI3MmU1NjEyMjQ4OTOIsYpe:
00:26:31.968 03:27:06 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 2
00:26:31.968 03:27:06 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:26:31.968 03:27:06 -- host/auth.sh@68 -- # digest=sha256
00:26:31.968 03:27:06 -- host/auth.sh@68 -- # dhgroup=ffdhe6144
00:26:31.968 03:27:06 -- host/auth.sh@68 -- # keyid=2
00:26:31.968 03:27:06 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:26:31.968 03:27:06 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:31.968 03:27:06 -- common/autotest_common.sh@10 -- # set +x
00:26:31.968 03:27:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:31.968 03:27:06 -- host/auth.sh@70 -- # get_main_ns_ip
00:26:31.968 03:27:06 -- nvmf/common.sh@717 -- # local ip
00:26:31.968 03:27:06 -- nvmf/common.sh@718 -- # ip_candidates=()
00:26:31.968 03:27:06 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:26:31.968 03:27:06 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:31.968 03:27:06 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:31.968 03:27:06 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:26:31.968 03:27:06 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:31.968 03:27:06 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:26:31.968 03:27:06 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:26:31.968 03:27:06 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:26:31.968 03:27:06 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:26:31.968 03:27:06 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:31.968 03:27:06 -- common/autotest_common.sh@10 -- # set +x
00:26:32.535 nvme0n1
00:26:32.536 03:27:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:32.536 03:27:06 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:26:32.536 03:27:06 -- host/auth.sh@73 -- # jq -r '.[].name'
00:26:32.536 03:27:06 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:32.536 03:27:06 -- common/autotest_common.sh@10 -- # set +x
00:26:32.536 03:27:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:32.536 03:27:06 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:32.536 03:27:06 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:32.536 03:27:06 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:32.536 03:27:06 -- common/autotest_common.sh@10 -- # set +x
00:26:32.536 03:27:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:32.536 03:27:06 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:26:32.536 03:27:06 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 3
00:26:32.536 03:27:06 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:26:32.536 03:27:06 -- host/auth.sh@44 -- # digest=sha256
00:26:32.536 03:27:06 -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:26:32.536 03:27:06 -- host/auth.sh@44 -- # keyid=3
00:26:32.536 03:27:06 -- host/auth.sh@45 -- # key=DHHC-1:02:ODQ3ZDUzYjFiYmEwNmEyOGExYjBkMzIyZTE1ZTdiNDAwOGFhODEwOTQyMDNlY2Ri2DqywQ==:
00:26:32.536 03:27:06 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:26:32.536 03:27:06 -- host/auth.sh@48 -- # echo ffdhe6144
00:26:32.536 03:27:06 -- host/auth.sh@49 -- # echo DHHC-1:02:ODQ3ZDUzYjFiYmEwNmEyOGExYjBkMzIyZTE1ZTdiNDAwOGFhODEwOTQyMDNlY2Ri2DqywQ==:
00:26:32.536 03:27:06 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 3
00:26:32.536 03:27:06 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:26:32.536 03:27:06 -- host/auth.sh@68 -- # digest=sha256
00:26:32.536 03:27:06 -- host/auth.sh@68 -- # dhgroup=ffdhe6144
00:26:32.536 03:27:06 -- host/auth.sh@68 -- # keyid=3
00:26:32.536 03:27:06 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:26:32.536 03:27:06 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:32.536 03:27:06 -- common/autotest_common.sh@10 -- # set +x
00:26:32.536 03:27:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:32.536 03:27:06 -- host/auth.sh@70 -- # get_main_ns_ip
00:26:32.536 03:27:06 -- nvmf/common.sh@717 -- # local ip
00:26:32.536 03:27:06 -- nvmf/common.sh@718 -- # ip_candidates=()
00:26:32.536 03:27:06 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:26:32.536 03:27:06 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:32.536 03:27:06 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:32.536 03:27:06 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:26:32.536 03:27:06 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:32.536 03:27:06 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:26:32.536 03:27:06 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:26:32.536 03:27:06 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:26:32.536 03:27:06 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3
00:26:32.536 03:27:06 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:32.536 03:27:06 -- common/autotest_common.sh@10 -- # set +x
00:26:33.102 nvme0n1
00:26:33.102 03:27:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:33.102 03:27:07 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:26:33.102 03:27:07 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:33.102 03:27:07 -- common/autotest_common.sh@10 -- # set +x
00:26:33.102 03:27:07 -- host/auth.sh@73 -- # jq -r '.[].name'
00:26:33.102 03:27:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:33.102 03:27:07 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:33.102 03:27:07 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:33.102 03:27:07 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:33.102 03:27:07 -- common/autotest_common.sh@10 -- # set +x
00:26:33.102 03:27:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:33.102 03:27:07 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:26:33.102 03:27:07 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 4
00:26:33.102 03:27:07 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:26:33.102 03:27:07 -- host/auth.sh@44 -- # digest=sha256
00:26:33.102 03:27:07 -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:26:33.102 03:27:07 -- host/auth.sh@44 -- # keyid=4
00:26:33.102 03:27:07 -- host/auth.sh@45 -- # key=DHHC-1:03:ODI0M2RhMDM4Y2QxYzRlYzI4NjI4MzVjZDA5OWJlOGFiYTI0OTc3MjlhZmJmNjdhYmM2ZTgzMGY2MmQ2ZDJkMi4LGic=:
00:26:33.102 03:27:07 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:26:33.102 03:27:07 -- host/auth.sh@48 -- # echo ffdhe6144
00:26:33.102 03:27:07 -- host/auth.sh@49 -- # echo DHHC-1:03:ODI0M2RhMDM4Y2QxYzRlYzI4NjI4MzVjZDA5OWJlOGFiYTI0OTc3MjlhZmJmNjdhYmM2ZTgzMGY2MmQ2ZDJkMi4LGic=:
00:26:33.102 03:27:07 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 4
00:26:33.102 03:27:07 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:26:33.102 03:27:07 -- host/auth.sh@68 -- # digest=sha256
00:26:33.102 03:27:07 -- host/auth.sh@68 -- # dhgroup=ffdhe6144
00:26:33.102 03:27:07 -- host/auth.sh@68 -- # keyid=4
00:26:33.102 03:27:07 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:26:33.102 03:27:07 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:33.102 03:27:07 -- common/autotest_common.sh@10 -- # set +x
00:26:33.102 03:27:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:33.102 03:27:07 -- host/auth.sh@70 -- # get_main_ns_ip
00:26:33.102 03:27:07 -- nvmf/common.sh@717 -- # local ip
00:26:33.102 03:27:07 -- nvmf/common.sh@718 -- # ip_candidates=()
00:26:33.102 03:27:07 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:26:33.102 03:27:07 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:33.102 03:27:07 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:33.102 03:27:07 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:26:33.102 03:27:07 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:33.102 03:27:07 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:26:33.102 03:27:07 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:26:33.102 03:27:07 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:26:33.103 03:27:07 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:26:33.103 03:27:07 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:33.103 03:27:07 -- common/autotest_common.sh@10 -- # set +x
00:26:33.669 nvme0n1
00:26:33.669 03:27:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:33.669 03:27:08 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:26:33.669 03:27:08 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:33.669 03:27:08 -- host/auth.sh@73 -- # jq -r '.[].name'
00:26:33.669 03:27:08 -- common/autotest_common.sh@10 -- # set +x
00:26:33.669 03:27:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:33.669 03:27:08 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:33.669 03:27:08 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:33.669 03:27:08 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:33.669 03:27:08 -- common/autotest_common.sh@10 -- # set +x
00:26:33.669 03:27:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:33.669 03:27:08 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}"
00:26:33.669 03:27:08 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:26:33.669 03:27:08 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 0
00:26:33.669 03:27:08 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:26:33.669 03:27:08 -- host/auth.sh@44 -- # digest=sha256
00:26:33.669 03:27:08 -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:26:33.669 03:27:08 -- host/auth.sh@44 -- # keyid=0
00:26:33.669 03:27:08 -- host/auth.sh@45 -- # key=DHHC-1:00:YzNjNWE1ODdhNmEyYzEwN2Q3MTc3MzAzOWQ1OTkyM2H6n/Cu:
00:26:33.669 03:27:08 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:26:33.669 03:27:08 -- host/auth.sh@48 -- # echo ffdhe8192
00:26:33.669 03:27:08 -- host/auth.sh@49 -- # echo DHHC-1:00:YzNjNWE1ODdhNmEyYzEwN2Q3MTc3MzAzOWQ1OTkyM2H6n/Cu:
00:26:33.669 03:27:08 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 0
00:26:33.669 03:27:08 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:26:33.669 03:27:08 -- host/auth.sh@68 -- # digest=sha256
00:26:33.669 03:27:08 -- host/auth.sh@68 -- # dhgroup=ffdhe8192
00:26:33.669 03:27:08 -- host/auth.sh@68 -- # keyid=0
00:26:33.669 03:27:08 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:26:33.669 03:27:08 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:33.669 03:27:08 -- common/autotest_common.sh@10 -- # set +x
00:26:33.669 03:27:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:33.669 03:27:08 -- host/auth.sh@70 -- # get_main_ns_ip
00:26:33.669 03:27:08 -- nvmf/common.sh@717 -- # local ip
00:26:33.669 03:27:08 -- nvmf/common.sh@718 -- # ip_candidates=()
00:26:33.669 03:27:08 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:26:33.669 03:27:08 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:33.669 03:27:08 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:33.669 03:27:08 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:26:33.669 03:27:08 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:33.669 03:27:08 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:26:33.669 03:27:08 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:26:33.669 03:27:08 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:26:33.669 03:27:08 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0
00:26:33.669 03:27:08 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:33.670 03:27:08 -- common/autotest_common.sh@10 -- # set +x
00:26:35.099 nvme0n1
00:26:35.099 03:27:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:35.099 03:27:09 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:26:35.099 03:27:09 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:35.099 03:27:09 -- common/autotest_common.sh@10 -- # set +x
00:26:35.099 03:27:09 -- host/auth.sh@73 -- # jq -r '.[].name'
00:26:35.099 03:27:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:35.099 03:27:09 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:35.099 03:27:09 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:35.099 03:27:09 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:35.099 03:27:09 -- common/autotest_common.sh@10 -- # set +x
00:26:35.099 03:27:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:35.099 03:27:09 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:26:35.099 03:27:09 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 1
00:26:35.099 03:27:09 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:26:35.099 03:27:09 -- host/auth.sh@44 -- # digest=sha256
00:26:35.099 03:27:09 -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:26:35.099 03:27:09 -- host/auth.sh@44 -- # keyid=1
00:26:35.099 03:27:09 -- host/auth.sh@45 -- # key=DHHC-1:00:OGMxMGM0MTFkMWY1NGI5NTc0NDNiODliMThjOWU1N2MzMzViNjJhMTRjNmRjMWFjsMUppw==:
00:26:35.099 03:27:09 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:26:35.099 03:27:09 -- host/auth.sh@48
-- # echo ffdhe8192 00:26:35.099 03:27:09 -- host/auth.sh@49 -- # echo DHHC-1:00:OGMxMGM0MTFkMWY1NGI5NTc0NDNiODliMThjOWU1N2MzMzViNjJhMTRjNmRjMWFjsMUppw==: 00:26:35.099 03:27:09 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 1 00:26:35.099 03:27:09 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:35.099 03:27:09 -- host/auth.sh@68 -- # digest=sha256 00:26:35.099 03:27:09 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:26:35.099 03:27:09 -- host/auth.sh@68 -- # keyid=1 00:26:35.099 03:27:09 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:35.099 03:27:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:35.099 03:27:09 -- common/autotest_common.sh@10 -- # set +x 00:26:35.099 03:27:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:35.099 03:27:09 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:35.099 03:27:09 -- nvmf/common.sh@717 -- # local ip 00:26:35.099 03:27:09 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:35.099 03:27:09 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:35.099 03:27:09 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:35.099 03:27:09 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:35.099 03:27:09 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:35.099 03:27:09 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:35.099 03:27:09 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:35.099 03:27:09 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:35.099 03:27:09 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:35.099 03:27:09 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:26:35.099 03:27:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:35.099 03:27:09 -- common/autotest_common.sh@10 -- # set +x 00:26:35.666 nvme0n1 00:26:35.666 03:27:10 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:35.666 03:27:10 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:35.666 03:27:10 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:35.666 03:27:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:35.666 03:27:10 -- common/autotest_common.sh@10 -- # set +x 00:26:35.666 03:27:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:35.925 03:27:10 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:35.925 03:27:10 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:35.925 03:27:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:35.925 03:27:10 -- common/autotest_common.sh@10 -- # set +x 00:26:35.925 03:27:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:35.925 03:27:10 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:35.925 03:27:10 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:26:35.925 03:27:10 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:35.925 03:27:10 -- host/auth.sh@44 -- # digest=sha256 00:26:35.925 03:27:10 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:35.925 03:27:10 -- host/auth.sh@44 -- # keyid=2 00:26:35.925 03:27:10 -- host/auth.sh@45 -- # key=DHHC-1:01:Njc1MDkxOTg2Y2YxNzliN2ZlNWI3MmU1NjEyMjQ4OTOIsYpe: 00:26:35.925 03:27:10 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:26:35.925 03:27:10 -- host/auth.sh@48 -- # echo ffdhe8192 00:26:35.925 03:27:10 -- host/auth.sh@49 -- # echo DHHC-1:01:Njc1MDkxOTg2Y2YxNzliN2ZlNWI3MmU1NjEyMjQ4OTOIsYpe: 00:26:35.925 03:27:10 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 2 00:26:35.925 03:27:10 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:35.925 03:27:10 -- host/auth.sh@68 -- # digest=sha256 00:26:35.925 03:27:10 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:26:35.925 03:27:10 -- host/auth.sh@68 -- # keyid=2 00:26:35.925 03:27:10 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups 
ffdhe8192 00:26:35.925 03:27:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:35.925 03:27:10 -- common/autotest_common.sh@10 -- # set +x 00:26:35.925 03:27:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:35.925 03:27:10 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:35.925 03:27:10 -- nvmf/common.sh@717 -- # local ip 00:26:35.925 03:27:10 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:35.925 03:27:10 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:35.925 03:27:10 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:35.925 03:27:10 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:35.925 03:27:10 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:35.925 03:27:10 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:35.925 03:27:10 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:35.925 03:27:10 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:35.925 03:27:10 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:35.925 03:27:10 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:35.925 03:27:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:35.925 03:27:10 -- common/autotest_common.sh@10 -- # set +x 00:26:36.861 nvme0n1 00:26:36.861 03:27:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:36.861 03:27:11 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:36.861 03:27:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:36.861 03:27:11 -- common/autotest_common.sh@10 -- # set +x 00:26:36.861 03:27:11 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:36.861 03:27:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:36.861 03:27:11 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:36.861 03:27:11 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:36.861 03:27:11 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:26:36.861 03:27:11 -- common/autotest_common.sh@10 -- # set +x 00:26:36.861 03:27:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:36.861 03:27:11 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:36.861 03:27:11 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:26:36.861 03:27:11 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:36.861 03:27:11 -- host/auth.sh@44 -- # digest=sha256 00:26:36.861 03:27:11 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:36.861 03:27:11 -- host/auth.sh@44 -- # keyid=3 00:26:36.861 03:27:11 -- host/auth.sh@45 -- # key=DHHC-1:02:ODQ3ZDUzYjFiYmEwNmEyOGExYjBkMzIyZTE1ZTdiNDAwOGFhODEwOTQyMDNlY2Ri2DqywQ==: 00:26:36.861 03:27:11 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:26:36.861 03:27:11 -- host/auth.sh@48 -- # echo ffdhe8192 00:26:36.861 03:27:11 -- host/auth.sh@49 -- # echo DHHC-1:02:ODQ3ZDUzYjFiYmEwNmEyOGExYjBkMzIyZTE1ZTdiNDAwOGFhODEwOTQyMDNlY2Ri2DqywQ==: 00:26:36.861 03:27:11 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 3 00:26:36.861 03:27:11 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:36.861 03:27:11 -- host/auth.sh@68 -- # digest=sha256 00:26:36.861 03:27:11 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:26:36.861 03:27:11 -- host/auth.sh@68 -- # keyid=3 00:26:36.861 03:27:11 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:36.861 03:27:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:36.861 03:27:11 -- common/autotest_common.sh@10 -- # set +x 00:26:36.861 03:27:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:36.861 03:27:11 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:36.861 03:27:11 -- nvmf/common.sh@717 -- # local ip 00:26:36.861 03:27:11 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:36.861 03:27:11 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:36.861 03:27:11 -- nvmf/common.sh@720 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:36.861 03:27:11 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:36.861 03:27:11 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:36.861 03:27:11 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:36.861 03:27:11 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:36.861 03:27:11 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:36.861 03:27:11 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:36.861 03:27:11 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:26:36.861 03:27:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:36.861 03:27:11 -- common/autotest_common.sh@10 -- # set +x 00:26:37.798 nvme0n1 00:26:37.798 03:27:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:37.798 03:27:12 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:37.798 03:27:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:37.798 03:27:12 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:37.798 03:27:12 -- common/autotest_common.sh@10 -- # set +x 00:26:37.798 03:27:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:37.798 03:27:12 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:37.798 03:27:12 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:37.798 03:27:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:37.798 03:27:12 -- common/autotest_common.sh@10 -- # set +x 00:26:37.798 03:27:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:37.798 03:27:12 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:37.798 03:27:12 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:26:37.798 03:27:12 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:37.798 03:27:12 -- host/auth.sh@44 -- # digest=sha256 00:26:37.798 03:27:12 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 
00:26:37.798 03:27:12 -- host/auth.sh@44 -- # keyid=4 00:26:37.798 03:27:12 -- host/auth.sh@45 -- # key=DHHC-1:03:ODI0M2RhMDM4Y2QxYzRlYzI4NjI4MzVjZDA5OWJlOGFiYTI0OTc3MjlhZmJmNjdhYmM2ZTgzMGY2MmQ2ZDJkMi4LGic=: 00:26:37.798 03:27:12 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:26:37.798 03:27:12 -- host/auth.sh@48 -- # echo ffdhe8192 00:26:37.798 03:27:12 -- host/auth.sh@49 -- # echo DHHC-1:03:ODI0M2RhMDM4Y2QxYzRlYzI4NjI4MzVjZDA5OWJlOGFiYTI0OTc3MjlhZmJmNjdhYmM2ZTgzMGY2MmQ2ZDJkMi4LGic=: 00:26:37.798 03:27:12 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 4 00:26:37.798 03:27:12 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:37.799 03:27:12 -- host/auth.sh@68 -- # digest=sha256 00:26:37.799 03:27:12 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:26:37.799 03:27:12 -- host/auth.sh@68 -- # keyid=4 00:26:37.799 03:27:12 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:37.799 03:27:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:37.799 03:27:12 -- common/autotest_common.sh@10 -- # set +x 00:26:37.799 03:27:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:37.799 03:27:12 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:37.799 03:27:12 -- nvmf/common.sh@717 -- # local ip 00:26:37.799 03:27:12 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:37.799 03:27:12 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:37.799 03:27:12 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:37.799 03:27:12 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:37.799 03:27:12 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:37.799 03:27:12 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:37.799 03:27:12 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:37.799 03:27:12 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:37.799 03:27:12 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:37.799 03:27:12 -- host/auth.sh@70 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:37.799 03:27:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:37.799 03:27:12 -- common/autotest_common.sh@10 -- # set +x 00:26:38.735 nvme0n1 00:26:38.735 03:27:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:38.735 03:27:13 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:38.735 03:27:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:38.735 03:27:13 -- common/autotest_common.sh@10 -- # set +x 00:26:38.735 03:27:13 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:38.991 03:27:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:38.991 03:27:13 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:38.991 03:27:13 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:38.991 03:27:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:38.991 03:27:13 -- common/autotest_common.sh@10 -- # set +x 00:26:38.991 03:27:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:38.991 03:27:13 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:26:38.991 03:27:13 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:26:38.991 03:27:13 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:38.991 03:27:13 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:26:38.991 03:27:13 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:38.991 03:27:13 -- host/auth.sh@44 -- # digest=sha384 00:26:38.991 03:27:13 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:38.991 03:27:13 -- host/auth.sh@44 -- # keyid=0 00:26:38.991 03:27:13 -- host/auth.sh@45 -- # key=DHHC-1:00:YzNjNWE1ODdhNmEyYzEwN2Q3MTc3MzAzOWQ1OTkyM2H6n/Cu: 00:26:38.991 03:27:13 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:26:38.991 03:27:13 -- host/auth.sh@48 -- # echo ffdhe2048 00:26:38.991 03:27:13 -- host/auth.sh@49 -- # echo 
DHHC-1:00:YzNjNWE1ODdhNmEyYzEwN2Q3MTc3MzAzOWQ1OTkyM2H6n/Cu: 00:26:38.991 03:27:13 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 0 00:26:38.991 03:27:13 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:38.991 03:27:13 -- host/auth.sh@68 -- # digest=sha384 00:26:38.991 03:27:13 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:26:38.991 03:27:13 -- host/auth.sh@68 -- # keyid=0 00:26:38.991 03:27:13 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:38.991 03:27:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:38.991 03:27:13 -- common/autotest_common.sh@10 -- # set +x 00:26:38.991 03:27:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:38.991 03:27:13 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:38.991 03:27:13 -- nvmf/common.sh@717 -- # local ip 00:26:38.991 03:27:13 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:38.991 03:27:13 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:38.991 03:27:13 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:38.991 03:27:13 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:38.991 03:27:13 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:38.991 03:27:13 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:38.991 03:27:13 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:38.991 03:27:13 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:38.991 03:27:13 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:38.991 03:27:13 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:26:38.991 03:27:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:38.991 03:27:13 -- common/autotest_common.sh@10 -- # set +x 00:26:38.991 nvme0n1 00:26:38.991 03:27:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:38.991 03:27:13 -- host/auth.sh@73 -- # 
rpc_cmd bdev_nvme_get_controllers 00:26:38.991 03:27:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:38.991 03:27:13 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:38.991 03:27:13 -- common/autotest_common.sh@10 -- # set +x 00:26:38.992 03:27:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:39.250 03:27:13 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:39.250 03:27:13 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:39.250 03:27:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:39.250 03:27:13 -- common/autotest_common.sh@10 -- # set +x 00:26:39.250 03:27:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:39.250 03:27:13 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:39.250 03:27:13 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:26:39.250 03:27:13 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:39.250 03:27:13 -- host/auth.sh@44 -- # digest=sha384 00:26:39.250 03:27:13 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:39.250 03:27:13 -- host/auth.sh@44 -- # keyid=1 00:26:39.250 03:27:13 -- host/auth.sh@45 -- # key=DHHC-1:00:OGMxMGM0MTFkMWY1NGI5NTc0NDNiODliMThjOWU1N2MzMzViNjJhMTRjNmRjMWFjsMUppw==: 00:26:39.250 03:27:13 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:26:39.250 03:27:13 -- host/auth.sh@48 -- # echo ffdhe2048 00:26:39.250 03:27:13 -- host/auth.sh@49 -- # echo DHHC-1:00:OGMxMGM0MTFkMWY1NGI5NTc0NDNiODliMThjOWU1N2MzMzViNjJhMTRjNmRjMWFjsMUppw==: 00:26:39.250 03:27:13 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 1 00:26:39.250 03:27:13 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:39.250 03:27:13 -- host/auth.sh@68 -- # digest=sha384 00:26:39.250 03:27:13 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:26:39.250 03:27:13 -- host/auth.sh@68 -- # keyid=1 00:26:39.250 03:27:13 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:39.250 03:27:13 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:26:39.250 03:27:13 -- common/autotest_common.sh@10 -- # set +x 00:26:39.250 03:27:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:39.250 03:27:13 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:39.250 03:27:13 -- nvmf/common.sh@717 -- # local ip 00:26:39.250 03:27:13 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:39.250 03:27:13 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:39.250 03:27:13 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:39.250 03:27:13 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:39.250 03:27:13 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:39.250 03:27:13 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:39.250 03:27:13 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:39.250 03:27:13 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:39.250 03:27:13 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:39.250 03:27:13 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:26:39.250 03:27:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:39.250 03:27:13 -- common/autotest_common.sh@10 -- # set +x 00:26:39.250 nvme0n1 00:26:39.250 03:27:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:39.250 03:27:13 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:39.250 03:27:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:39.250 03:27:13 -- common/autotest_common.sh@10 -- # set +x 00:26:39.250 03:27:13 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:39.250 03:27:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:39.250 03:27:13 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:39.250 03:27:13 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:39.250 03:27:13 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:26:39.250 03:27:13 -- common/autotest_common.sh@10 -- # set +x 00:26:39.250 03:27:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:39.250 03:27:13 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:39.250 03:27:13 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:26:39.250 03:27:13 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:39.250 03:27:13 -- host/auth.sh@44 -- # digest=sha384 00:26:39.250 03:27:13 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:39.250 03:27:13 -- host/auth.sh@44 -- # keyid=2 00:26:39.250 03:27:13 -- host/auth.sh@45 -- # key=DHHC-1:01:Njc1MDkxOTg2Y2YxNzliN2ZlNWI3MmU1NjEyMjQ4OTOIsYpe: 00:26:39.250 03:27:13 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:26:39.250 03:27:13 -- host/auth.sh@48 -- # echo ffdhe2048 00:26:39.250 03:27:13 -- host/auth.sh@49 -- # echo DHHC-1:01:Njc1MDkxOTg2Y2YxNzliN2ZlNWI3MmU1NjEyMjQ4OTOIsYpe: 00:26:39.250 03:27:13 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 2 00:26:39.251 03:27:13 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:39.251 03:27:13 -- host/auth.sh@68 -- # digest=sha384 00:26:39.251 03:27:13 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:26:39.251 03:27:13 -- host/auth.sh@68 -- # keyid=2 00:26:39.251 03:27:13 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:39.251 03:27:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:39.251 03:27:13 -- common/autotest_common.sh@10 -- # set +x 00:26:39.509 03:27:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:39.509 03:27:13 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:39.509 03:27:13 -- nvmf/common.sh@717 -- # local ip 00:26:39.509 03:27:13 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:39.509 03:27:13 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:39.509 03:27:13 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:39.509 03:27:13 -- nvmf/common.sh@721 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:39.509 03:27:13 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:39.509 03:27:13 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:39.509 03:27:13 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:39.509 03:27:13 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:39.509 03:27:13 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:39.509 03:27:13 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:39.509 03:27:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:39.509 03:27:13 -- common/autotest_common.sh@10 -- # set +x 00:26:39.509 nvme0n1 00:26:39.509 03:27:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:39.509 03:27:13 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:39.509 03:27:13 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:39.509 03:27:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:39.509 03:27:13 -- common/autotest_common.sh@10 -- # set +x 00:26:39.509 03:27:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:39.509 03:27:13 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:39.509 03:27:13 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:39.509 03:27:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:39.509 03:27:13 -- common/autotest_common.sh@10 -- # set +x 00:26:39.509 03:27:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:39.509 03:27:13 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:39.509 03:27:13 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:26:39.509 03:27:13 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:39.509 03:27:13 -- host/auth.sh@44 -- # digest=sha384 00:26:39.509 03:27:13 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:39.509 03:27:13 -- host/auth.sh@44 -- # keyid=3 00:26:39.509 03:27:13 -- 
host/auth.sh@45 -- # key=DHHC-1:02:ODQ3ZDUzYjFiYmEwNmEyOGExYjBkMzIyZTE1ZTdiNDAwOGFhODEwOTQyMDNlY2Ri2DqywQ==: 00:26:39.509 03:27:13 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:26:39.509 03:27:13 -- host/auth.sh@48 -- # echo ffdhe2048 00:26:39.509 03:27:13 -- host/auth.sh@49 -- # echo DHHC-1:02:ODQ3ZDUzYjFiYmEwNmEyOGExYjBkMzIyZTE1ZTdiNDAwOGFhODEwOTQyMDNlY2Ri2DqywQ==: 00:26:39.509 03:27:13 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 3 00:26:39.509 03:27:13 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:39.509 03:27:13 -- host/auth.sh@68 -- # digest=sha384 00:26:39.509 03:27:13 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:26:39.509 03:27:13 -- host/auth.sh@68 -- # keyid=3 00:26:39.509 03:27:13 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:39.509 03:27:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:39.509 03:27:13 -- common/autotest_common.sh@10 -- # set +x 00:26:39.509 03:27:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:39.509 03:27:13 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:39.509 03:27:13 -- nvmf/common.sh@717 -- # local ip 00:26:39.509 03:27:13 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:39.509 03:27:13 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:39.509 03:27:13 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:39.509 03:27:13 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:39.509 03:27:13 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:39.509 03:27:13 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:39.509 03:27:13 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:39.509 03:27:13 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:39.509 03:27:13 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:39.509 03:27:13 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:26:39.509 03:27:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:39.509 03:27:13 -- common/autotest_common.sh@10 -- # set +x 00:26:39.769 nvme0n1 00:26:39.769 03:27:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:39.769 03:27:14 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:39.769 03:27:14 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:39.769 03:27:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:39.769 03:27:14 -- common/autotest_common.sh@10 -- # set +x 00:26:39.769 03:27:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:39.769 03:27:14 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:39.769 03:27:14 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:39.769 03:27:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:39.769 03:27:14 -- common/autotest_common.sh@10 -- # set +x 00:26:39.769 03:27:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:39.769 03:27:14 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:39.769 03:27:14 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:26:39.769 03:27:14 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:39.769 03:27:14 -- host/auth.sh@44 -- # digest=sha384 00:26:39.769 03:27:14 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:39.769 03:27:14 -- host/auth.sh@44 -- # keyid=4 00:26:39.769 03:27:14 -- host/auth.sh@45 -- # key=DHHC-1:03:ODI0M2RhMDM4Y2QxYzRlYzI4NjI4MzVjZDA5OWJlOGFiYTI0OTc3MjlhZmJmNjdhYmM2ZTgzMGY2MmQ2ZDJkMi4LGic=: 00:26:39.769 03:27:14 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:26:39.769 03:27:14 -- host/auth.sh@48 -- # echo ffdhe2048 00:26:39.769 03:27:14 -- host/auth.sh@49 -- # echo DHHC-1:03:ODI0M2RhMDM4Y2QxYzRlYzI4NjI4MzVjZDA5OWJlOGFiYTI0OTc3MjlhZmJmNjdhYmM2ZTgzMGY2MmQ2ZDJkMi4LGic=: 00:26:39.769 03:27:14 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 4 00:26:39.769 03:27:14 -- host/auth.sh@66 -- # local 
digest dhgroup keyid 00:26:39.769 03:27:14 -- host/auth.sh@68 -- # digest=sha384 00:26:39.769 03:27:14 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:26:39.769 03:27:14 -- host/auth.sh@68 -- # keyid=4 00:26:39.769 03:27:14 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:39.769 03:27:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:39.769 03:27:14 -- common/autotest_common.sh@10 -- # set +x 00:26:39.769 03:27:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:39.769 03:27:14 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:39.769 03:27:14 -- nvmf/common.sh@717 -- # local ip 00:26:39.769 03:27:14 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:39.769 03:27:14 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:39.769 03:27:14 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:39.769 03:27:14 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:39.769 03:27:14 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:39.769 03:27:14 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:39.769 03:27:14 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:39.769 03:27:14 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:39.769 03:27:14 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:39.769 03:27:14 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:39.769 03:27:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:39.769 03:27:14 -- common/autotest_common.sh@10 -- # set +x 00:26:40.027 nvme0n1 00:26:40.028 03:27:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:40.028 03:27:14 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:40.028 03:27:14 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:40.028 03:27:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:40.028 03:27:14 -- 
common/autotest_common.sh@10 -- # set +x 00:26:40.028 03:27:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:40.028 03:27:14 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:40.028 03:27:14 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:40.028 03:27:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:40.028 03:27:14 -- common/autotest_common.sh@10 -- # set +x 00:26:40.028 03:27:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:40.028 03:27:14 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:26:40.028 03:27:14 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:40.028 03:27:14 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:26:40.028 03:27:14 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:40.028 03:27:14 -- host/auth.sh@44 -- # digest=sha384 00:26:40.028 03:27:14 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:40.028 03:27:14 -- host/auth.sh@44 -- # keyid=0 00:26:40.028 03:27:14 -- host/auth.sh@45 -- # key=DHHC-1:00:YzNjNWE1ODdhNmEyYzEwN2Q3MTc3MzAzOWQ1OTkyM2H6n/Cu: 00:26:40.028 03:27:14 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:26:40.028 03:27:14 -- host/auth.sh@48 -- # echo ffdhe3072 00:26:40.028 03:27:14 -- host/auth.sh@49 -- # echo DHHC-1:00:YzNjNWE1ODdhNmEyYzEwN2Q3MTc3MzAzOWQ1OTkyM2H6n/Cu: 00:26:40.028 03:27:14 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 0 00:26:40.028 03:27:14 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:40.028 03:27:14 -- host/auth.sh@68 -- # digest=sha384 00:26:40.028 03:27:14 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:26:40.028 03:27:14 -- host/auth.sh@68 -- # keyid=0 00:26:40.028 03:27:14 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:40.028 03:27:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:40.028 03:27:14 -- common/autotest_common.sh@10 -- # set +x 00:26:40.028 03:27:14 -- common/autotest_common.sh@577 -- # [[ 
0 == 0 ]] 00:26:40.028 03:27:14 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:40.028 03:27:14 -- nvmf/common.sh@717 -- # local ip 00:26:40.028 03:27:14 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:40.028 03:27:14 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:40.028 03:27:14 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:40.028 03:27:14 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:40.028 03:27:14 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:40.028 03:27:14 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:40.028 03:27:14 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:40.028 03:27:14 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:40.028 03:27:14 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:40.028 03:27:14 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:26:40.028 03:27:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:40.028 03:27:14 -- common/autotest_common.sh@10 -- # set +x 00:26:40.287 nvme0n1 00:26:40.287 03:27:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:40.287 03:27:14 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:40.287 03:27:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:40.287 03:27:14 -- common/autotest_common.sh@10 -- # set +x 00:26:40.287 03:27:14 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:40.287 03:27:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:40.287 03:27:14 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:40.287 03:27:14 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:40.287 03:27:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:40.287 03:27:14 -- common/autotest_common.sh@10 -- # set +x 00:26:40.287 03:27:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:40.287 03:27:14 -- host/auth.sh@109 -- 
# for keyid in "${!keys[@]}" 00:26:40.287 03:27:14 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:26:40.287 03:27:14 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:40.287 03:27:14 -- host/auth.sh@44 -- # digest=sha384 00:26:40.287 03:27:14 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:40.287 03:27:14 -- host/auth.sh@44 -- # keyid=1 00:26:40.287 03:27:14 -- host/auth.sh@45 -- # key=DHHC-1:00:OGMxMGM0MTFkMWY1NGI5NTc0NDNiODliMThjOWU1N2MzMzViNjJhMTRjNmRjMWFjsMUppw==: 00:26:40.287 03:27:14 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:26:40.287 03:27:14 -- host/auth.sh@48 -- # echo ffdhe3072 00:26:40.287 03:27:14 -- host/auth.sh@49 -- # echo DHHC-1:00:OGMxMGM0MTFkMWY1NGI5NTc0NDNiODliMThjOWU1N2MzMzViNjJhMTRjNmRjMWFjsMUppw==: 00:26:40.287 03:27:14 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 1 00:26:40.287 03:27:14 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:40.287 03:27:14 -- host/auth.sh@68 -- # digest=sha384 00:26:40.287 03:27:14 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:26:40.287 03:27:14 -- host/auth.sh@68 -- # keyid=1 00:26:40.287 03:27:14 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:40.287 03:27:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:40.287 03:27:14 -- common/autotest_common.sh@10 -- # set +x 00:26:40.287 03:27:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:40.287 03:27:14 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:40.287 03:27:14 -- nvmf/common.sh@717 -- # local ip 00:26:40.287 03:27:14 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:40.287 03:27:14 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:40.287 03:27:14 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:40.287 03:27:14 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:40.287 03:27:14 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:40.287 03:27:14 -- nvmf/common.sh@723 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:26:40.287 03:27:14 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:40.287 03:27:14 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:40.287 03:27:14 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:40.287 03:27:14 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:26:40.287 03:27:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:40.287 03:27:14 -- common/autotest_common.sh@10 -- # set +x 00:26:40.548 nvme0n1 00:26:40.548 03:27:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:40.548 03:27:14 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:40.548 03:27:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:40.548 03:27:14 -- common/autotest_common.sh@10 -- # set +x 00:26:40.548 03:27:14 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:40.548 03:27:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:40.548 03:27:14 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:40.548 03:27:14 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:40.548 03:27:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:40.548 03:27:14 -- common/autotest_common.sh@10 -- # set +x 00:26:40.548 03:27:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:40.548 03:27:14 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:40.548 03:27:14 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:26:40.548 03:27:14 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:40.548 03:27:14 -- host/auth.sh@44 -- # digest=sha384 00:26:40.548 03:27:14 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:40.548 03:27:14 -- host/auth.sh@44 -- # keyid=2 00:26:40.548 03:27:14 -- host/auth.sh@45 -- # key=DHHC-1:01:Njc1MDkxOTg2Y2YxNzliN2ZlNWI3MmU1NjEyMjQ4OTOIsYpe: 00:26:40.548 03:27:14 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:26:40.548 
03:27:14 -- host/auth.sh@48 -- # echo ffdhe3072 00:26:40.548 03:27:14 -- host/auth.sh@49 -- # echo DHHC-1:01:Njc1MDkxOTg2Y2YxNzliN2ZlNWI3MmU1NjEyMjQ4OTOIsYpe: 00:26:40.548 03:27:14 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 2 00:26:40.548 03:27:14 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:40.548 03:27:14 -- host/auth.sh@68 -- # digest=sha384 00:26:40.548 03:27:14 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:26:40.548 03:27:14 -- host/auth.sh@68 -- # keyid=2 00:26:40.548 03:27:14 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:40.548 03:27:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:40.548 03:27:14 -- common/autotest_common.sh@10 -- # set +x 00:26:40.548 03:27:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:40.548 03:27:14 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:40.548 03:27:14 -- nvmf/common.sh@717 -- # local ip 00:26:40.548 03:27:14 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:40.548 03:27:14 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:40.548 03:27:14 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:40.548 03:27:14 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:40.548 03:27:14 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:40.548 03:27:14 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:40.548 03:27:14 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:40.548 03:27:14 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:40.548 03:27:14 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:40.548 03:27:14 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:40.548 03:27:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:40.548 03:27:14 -- common/autotest_common.sh@10 -- # set +x 00:26:40.809 nvme0n1 00:26:40.809 
03:27:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:40.809 03:27:15 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:40.809 03:27:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:40.809 03:27:15 -- common/autotest_common.sh@10 -- # set +x 00:26:40.809 03:27:15 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:40.809 03:27:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:40.809 03:27:15 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:40.809 03:27:15 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:40.809 03:27:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:40.809 03:27:15 -- common/autotest_common.sh@10 -- # set +x 00:26:40.809 03:27:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:40.809 03:27:15 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:40.809 03:27:15 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:26:40.809 03:27:15 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:40.809 03:27:15 -- host/auth.sh@44 -- # digest=sha384 00:26:40.809 03:27:15 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:40.809 03:27:15 -- host/auth.sh@44 -- # keyid=3 00:26:40.809 03:27:15 -- host/auth.sh@45 -- # key=DHHC-1:02:ODQ3ZDUzYjFiYmEwNmEyOGExYjBkMzIyZTE1ZTdiNDAwOGFhODEwOTQyMDNlY2Ri2DqywQ==: 00:26:40.809 03:27:15 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:26:40.809 03:27:15 -- host/auth.sh@48 -- # echo ffdhe3072 00:26:40.809 03:27:15 -- host/auth.sh@49 -- # echo DHHC-1:02:ODQ3ZDUzYjFiYmEwNmEyOGExYjBkMzIyZTE1ZTdiNDAwOGFhODEwOTQyMDNlY2Ri2DqywQ==: 00:26:40.809 03:27:15 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 3 00:26:40.809 03:27:15 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:40.809 03:27:15 -- host/auth.sh@68 -- # digest=sha384 00:26:40.809 03:27:15 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:26:40.809 03:27:15 -- host/auth.sh@68 -- # keyid=3 00:26:40.809 03:27:15 -- host/auth.sh@69 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:40.809 03:27:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:40.809 03:27:15 -- common/autotest_common.sh@10 -- # set +x 00:26:40.809 03:27:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:40.809 03:27:15 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:40.809 03:27:15 -- nvmf/common.sh@717 -- # local ip 00:26:40.809 03:27:15 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:40.809 03:27:15 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:40.809 03:27:15 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:40.809 03:27:15 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:40.809 03:27:15 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:40.809 03:27:15 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:40.809 03:27:15 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:40.809 03:27:15 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:40.809 03:27:15 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:40.809 03:27:15 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:26:40.809 03:27:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:40.809 03:27:15 -- common/autotest_common.sh@10 -- # set +x 00:26:41.068 nvme0n1 00:26:41.068 03:27:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:41.068 03:27:15 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:41.068 03:27:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:41.068 03:27:15 -- common/autotest_common.sh@10 -- # set +x 00:26:41.068 03:27:15 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:41.068 03:27:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:41.068 03:27:15 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:41.068 03:27:15 -- host/auth.sh@74 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:26:41.068 03:27:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:41.068 03:27:15 -- common/autotest_common.sh@10 -- # set +x 00:26:41.068 03:27:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:41.068 03:27:15 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:41.068 03:27:15 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:26:41.068 03:27:15 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:41.068 03:27:15 -- host/auth.sh@44 -- # digest=sha384 00:26:41.068 03:27:15 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:41.068 03:27:15 -- host/auth.sh@44 -- # keyid=4 00:26:41.068 03:27:15 -- host/auth.sh@45 -- # key=DHHC-1:03:ODI0M2RhMDM4Y2QxYzRlYzI4NjI4MzVjZDA5OWJlOGFiYTI0OTc3MjlhZmJmNjdhYmM2ZTgzMGY2MmQ2ZDJkMi4LGic=: 00:26:41.068 03:27:15 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:26:41.068 03:27:15 -- host/auth.sh@48 -- # echo ffdhe3072 00:26:41.068 03:27:15 -- host/auth.sh@49 -- # echo DHHC-1:03:ODI0M2RhMDM4Y2QxYzRlYzI4NjI4MzVjZDA5OWJlOGFiYTI0OTc3MjlhZmJmNjdhYmM2ZTgzMGY2MmQ2ZDJkMi4LGic=: 00:26:41.068 03:27:15 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 4 00:26:41.068 03:27:15 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:41.068 03:27:15 -- host/auth.sh@68 -- # digest=sha384 00:26:41.068 03:27:15 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:26:41.068 03:27:15 -- host/auth.sh@68 -- # keyid=4 00:26:41.068 03:27:15 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:41.068 03:27:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:41.068 03:27:15 -- common/autotest_common.sh@10 -- # set +x 00:26:41.068 03:27:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:41.068 03:27:15 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:41.068 03:27:15 -- nvmf/common.sh@717 -- # local ip 00:26:41.068 03:27:15 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:41.068 03:27:15 -- 
nvmf/common.sh@718 -- # local -A ip_candidates 00:26:41.068 03:27:15 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:41.068 03:27:15 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:41.068 03:27:15 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:41.068 03:27:15 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:41.068 03:27:15 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:41.068 03:27:15 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:41.068 03:27:15 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:41.068 03:27:15 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:41.068 03:27:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:41.068 03:27:15 -- common/autotest_common.sh@10 -- # set +x 00:26:41.329 nvme0n1 00:26:41.329 03:27:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:41.329 03:27:15 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:41.329 03:27:15 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:41.329 03:27:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:41.329 03:27:15 -- common/autotest_common.sh@10 -- # set +x 00:26:41.329 03:27:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:41.329 03:27:15 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:41.329 03:27:15 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:41.329 03:27:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:41.329 03:27:15 -- common/autotest_common.sh@10 -- # set +x 00:26:41.329 03:27:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:41.329 03:27:15 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:26:41.329 03:27:15 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:41.329 03:27:15 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:26:41.329 03:27:15 -- 
host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:41.329 03:27:15 -- host/auth.sh@44 -- # digest=sha384 00:26:41.329 03:27:15 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:41.329 03:27:15 -- host/auth.sh@44 -- # keyid=0 00:26:41.329 03:27:15 -- host/auth.sh@45 -- # key=DHHC-1:00:YzNjNWE1ODdhNmEyYzEwN2Q3MTc3MzAzOWQ1OTkyM2H6n/Cu: 00:26:41.329 03:27:15 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:26:41.329 03:27:15 -- host/auth.sh@48 -- # echo ffdhe4096 00:26:41.329 03:27:15 -- host/auth.sh@49 -- # echo DHHC-1:00:YzNjNWE1ODdhNmEyYzEwN2Q3MTc3MzAzOWQ1OTkyM2H6n/Cu: 00:26:41.329 03:27:15 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 0 00:26:41.329 03:27:15 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:41.329 03:27:15 -- host/auth.sh@68 -- # digest=sha384 00:26:41.329 03:27:15 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:26:41.329 03:27:15 -- host/auth.sh@68 -- # keyid=0 00:26:41.329 03:27:15 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:41.329 03:27:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:41.329 03:27:15 -- common/autotest_common.sh@10 -- # set +x 00:26:41.329 03:27:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:41.329 03:27:15 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:41.329 03:27:15 -- nvmf/common.sh@717 -- # local ip 00:26:41.329 03:27:15 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:41.329 03:27:15 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:41.329 03:27:15 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:41.329 03:27:15 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:41.329 03:27:15 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:41.329 03:27:15 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:41.329 03:27:15 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:41.329 03:27:15 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:41.329 03:27:15 -- 
nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:41.329 03:27:15 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:26:41.329 03:27:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:41.329 03:27:15 -- common/autotest_common.sh@10 -- # set +x 00:26:41.588 nvme0n1 00:26:41.588 03:27:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:41.588 03:27:16 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:41.588 03:27:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:41.588 03:27:16 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:41.588 03:27:16 -- common/autotest_common.sh@10 -- # set +x 00:26:41.588 03:27:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:41.846 03:27:16 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:41.846 03:27:16 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:41.846 03:27:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:41.846 03:27:16 -- common/autotest_common.sh@10 -- # set +x 00:26:41.846 03:27:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:41.846 03:27:16 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:41.846 03:27:16 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:26:41.846 03:27:16 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:41.846 03:27:16 -- host/auth.sh@44 -- # digest=sha384 00:26:41.846 03:27:16 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:41.846 03:27:16 -- host/auth.sh@44 -- # keyid=1 00:26:41.846 03:27:16 -- host/auth.sh@45 -- # key=DHHC-1:00:OGMxMGM0MTFkMWY1NGI5NTc0NDNiODliMThjOWU1N2MzMzViNjJhMTRjNmRjMWFjsMUppw==: 00:26:41.846 03:27:16 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:26:41.846 03:27:16 -- host/auth.sh@48 -- # echo ffdhe4096 00:26:41.846 03:27:16 -- host/auth.sh@49 -- # echo 
DHHC-1:00:OGMxMGM0MTFkMWY1NGI5NTc0NDNiODliMThjOWU1N2MzMzViNjJhMTRjNmRjMWFjsMUppw==: 00:26:41.846 03:27:16 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 1 00:26:41.846 03:27:16 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:41.846 03:27:16 -- host/auth.sh@68 -- # digest=sha384 00:26:41.846 03:27:16 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:26:41.846 03:27:16 -- host/auth.sh@68 -- # keyid=1 00:26:41.846 03:27:16 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:41.846 03:27:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:41.846 03:27:16 -- common/autotest_common.sh@10 -- # set +x 00:26:41.846 03:27:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:41.846 03:27:16 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:41.846 03:27:16 -- nvmf/common.sh@717 -- # local ip 00:26:41.846 03:27:16 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:41.846 03:27:16 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:41.846 03:27:16 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:41.846 03:27:16 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:41.846 03:27:16 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:41.846 03:27:16 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:41.846 03:27:16 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:41.846 03:27:16 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:41.846 03:27:16 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:41.846 03:27:16 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:26:41.846 03:27:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:41.846 03:27:16 -- common/autotest_common.sh@10 -- # set +x 00:26:42.107 nvme0n1 00:26:42.107 03:27:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:42.107 03:27:16 
-- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:42.107 03:27:16 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:42.107 03:27:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:42.107 03:27:16 -- common/autotest_common.sh@10 -- # set +x 00:26:42.107 03:27:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:42.107 03:27:16 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:42.107 03:27:16 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:42.107 03:27:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:42.107 03:27:16 -- common/autotest_common.sh@10 -- # set +x 00:26:42.107 03:27:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:42.107 03:27:16 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:42.107 03:27:16 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:26:42.107 03:27:16 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:42.107 03:27:16 -- host/auth.sh@44 -- # digest=sha384 00:26:42.107 03:27:16 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:42.107 03:27:16 -- host/auth.sh@44 -- # keyid=2 00:26:42.107 03:27:16 -- host/auth.sh@45 -- # key=DHHC-1:01:Njc1MDkxOTg2Y2YxNzliN2ZlNWI3MmU1NjEyMjQ4OTOIsYpe: 00:26:42.107 03:27:16 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:26:42.107 03:27:16 -- host/auth.sh@48 -- # echo ffdhe4096 00:26:42.107 03:27:16 -- host/auth.sh@49 -- # echo DHHC-1:01:Njc1MDkxOTg2Y2YxNzliN2ZlNWI3MmU1NjEyMjQ4OTOIsYpe: 00:26:42.107 03:27:16 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 2 00:26:42.107 03:27:16 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:42.107 03:27:16 -- host/auth.sh@68 -- # digest=sha384 00:26:42.107 03:27:16 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:26:42.107 03:27:16 -- host/auth.sh@68 -- # keyid=2 00:26:42.107 03:27:16 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:42.107 03:27:16 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:26:42.107 03:27:16 -- common/autotest_common.sh@10 -- # set +x 00:26:42.107 03:27:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:42.107 03:27:16 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:42.107 03:27:16 -- nvmf/common.sh@717 -- # local ip 00:26:42.107 03:27:16 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:42.107 03:27:16 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:42.107 03:27:16 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:42.107 03:27:16 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:42.107 03:27:16 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:42.107 03:27:16 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:42.107 03:27:16 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:42.107 03:27:16 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:42.107 03:27:16 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:42.107 03:27:16 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:42.107 03:27:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:42.107 03:27:16 -- common/autotest_common.sh@10 -- # set +x 00:26:42.366 nvme0n1 00:26:42.366 03:27:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:42.366 03:27:16 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:42.366 03:27:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:42.366 03:27:16 -- common/autotest_common.sh@10 -- # set +x 00:26:42.366 03:27:16 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:42.366 03:27:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:42.366 03:27:16 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:42.366 03:27:16 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:42.366 03:27:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:42.366 03:27:16 -- 
common/autotest_common.sh@10 -- # set +x 00:26:42.626 03:27:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:42.626 03:27:16 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:42.626 03:27:16 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:26:42.626 03:27:16 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:42.626 03:27:16 -- host/auth.sh@44 -- # digest=sha384 00:26:42.626 03:27:16 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:42.626 03:27:16 -- host/auth.sh@44 -- # keyid=3 00:26:42.626 03:27:16 -- host/auth.sh@45 -- # key=DHHC-1:02:ODQ3ZDUzYjFiYmEwNmEyOGExYjBkMzIyZTE1ZTdiNDAwOGFhODEwOTQyMDNlY2Ri2DqywQ==: 00:26:42.626 03:27:16 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:26:42.626 03:27:16 -- host/auth.sh@48 -- # echo ffdhe4096 00:26:42.626 03:27:16 -- host/auth.sh@49 -- # echo DHHC-1:02:ODQ3ZDUzYjFiYmEwNmEyOGExYjBkMzIyZTE1ZTdiNDAwOGFhODEwOTQyMDNlY2Ri2DqywQ==: 00:26:42.626 03:27:16 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 3 00:26:42.626 03:27:16 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:42.626 03:27:16 -- host/auth.sh@68 -- # digest=sha384 00:26:42.626 03:27:16 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:26:42.626 03:27:16 -- host/auth.sh@68 -- # keyid=3 00:26:42.626 03:27:16 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:42.626 03:27:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:42.626 03:27:16 -- common/autotest_common.sh@10 -- # set +x 00:26:42.626 03:27:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:42.626 03:27:16 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:42.626 03:27:16 -- nvmf/common.sh@717 -- # local ip 00:26:42.626 03:27:16 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:42.626 03:27:16 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:42.626 03:27:16 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:42.626 03:27:16 -- nvmf/common.sh@721 -- 
# ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:42.626 03:27:16 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:42.626 03:27:16 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:42.626 03:27:16 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:42.626 03:27:16 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:42.626 03:27:16 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:42.626 03:27:16 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:26:42.626 03:27:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:42.626 03:27:16 -- common/autotest_common.sh@10 -- # set +x 00:26:42.887 nvme0n1 00:26:42.887 03:27:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:42.887 03:27:17 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:42.887 03:27:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:42.887 03:27:17 -- common/autotest_common.sh@10 -- # set +x 00:26:42.887 03:27:17 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:42.887 03:27:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:42.887 03:27:17 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:42.887 03:27:17 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:42.887 03:27:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:42.887 03:27:17 -- common/autotest_common.sh@10 -- # set +x 00:26:42.887 03:27:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:42.887 03:27:17 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:42.887 03:27:17 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:26:42.887 03:27:17 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:42.887 03:27:17 -- host/auth.sh@44 -- # digest=sha384 00:26:42.887 03:27:17 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:42.887 03:27:17 -- host/auth.sh@44 -- # keyid=4 00:26:42.887 03:27:17 -- 
host/auth.sh@45 -- # key=DHHC-1:03:ODI0M2RhMDM4Y2QxYzRlYzI4NjI4MzVjZDA5OWJlOGFiYTI0OTc3MjlhZmJmNjdhYmM2ZTgzMGY2MmQ2ZDJkMi4LGic=: 00:26:42.887 03:27:17 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:26:42.887 03:27:17 -- host/auth.sh@48 -- # echo ffdhe4096 00:26:42.887 03:27:17 -- host/auth.sh@49 -- # echo DHHC-1:03:ODI0M2RhMDM4Y2QxYzRlYzI4NjI4MzVjZDA5OWJlOGFiYTI0OTc3MjlhZmJmNjdhYmM2ZTgzMGY2MmQ2ZDJkMi4LGic=: 00:26:42.887 03:27:17 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 4 00:26:42.887 03:27:17 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:42.887 03:27:17 -- host/auth.sh@68 -- # digest=sha384 00:26:42.887 03:27:17 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:26:42.887 03:27:17 -- host/auth.sh@68 -- # keyid=4 00:26:42.887 03:27:17 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:42.887 03:27:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:42.887 03:27:17 -- common/autotest_common.sh@10 -- # set +x 00:26:42.887 03:27:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:42.887 03:27:17 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:42.887 03:27:17 -- nvmf/common.sh@717 -- # local ip 00:26:42.887 03:27:17 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:42.887 03:27:17 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:42.887 03:27:17 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:42.887 03:27:17 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:42.887 03:27:17 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:42.887 03:27:17 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:42.887 03:27:17 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:42.887 03:27:17 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:42.887 03:27:17 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:42.887 03:27:17 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:42.887 03:27:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:42.887 03:27:17 -- common/autotest_common.sh@10 -- # set +x 00:26:43.147 nvme0n1 00:26:43.147 03:27:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:43.147 03:27:17 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:43.147 03:27:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:43.147 03:27:17 -- common/autotest_common.sh@10 -- # set +x 00:26:43.147 03:27:17 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:43.147 03:27:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:43.147 03:27:17 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:43.147 03:27:17 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:43.147 03:27:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:43.147 03:27:17 -- common/autotest_common.sh@10 -- # set +x 00:26:43.147 03:27:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:43.147 03:27:17 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:26:43.147 03:27:17 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:43.147 03:27:17 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:26:43.147 03:27:17 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:43.147 03:27:17 -- host/auth.sh@44 -- # digest=sha384 00:26:43.147 03:27:17 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:43.147 03:27:17 -- host/auth.sh@44 -- # keyid=0 00:26:43.147 03:27:17 -- host/auth.sh@45 -- # key=DHHC-1:00:YzNjNWE1ODdhNmEyYzEwN2Q3MTc3MzAzOWQ1OTkyM2H6n/Cu: 00:26:43.147 03:27:17 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:26:43.147 03:27:17 -- host/auth.sh@48 -- # echo ffdhe6144 00:26:43.147 03:27:17 -- host/auth.sh@49 -- # echo DHHC-1:00:YzNjNWE1ODdhNmEyYzEwN2Q3MTc3MzAzOWQ1OTkyM2H6n/Cu: 00:26:43.147 03:27:17 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 0 00:26:43.147 03:27:17 -- 
host/auth.sh@66 -- # local digest dhgroup keyid 00:26:43.147 03:27:17 -- host/auth.sh@68 -- # digest=sha384 00:26:43.147 03:27:17 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:26:43.147 03:27:17 -- host/auth.sh@68 -- # keyid=0 00:26:43.147 03:27:17 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:43.147 03:27:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:43.147 03:27:17 -- common/autotest_common.sh@10 -- # set +x 00:26:43.147 03:27:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:43.147 03:27:17 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:43.147 03:27:17 -- nvmf/common.sh@717 -- # local ip 00:26:43.147 03:27:17 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:43.147 03:27:17 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:43.147 03:27:17 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:43.147 03:27:17 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:43.147 03:27:17 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:43.147 03:27:17 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:43.147 03:27:17 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:43.147 03:27:17 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:43.147 03:27:17 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:43.147 03:27:17 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:26:43.147 03:27:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:43.147 03:27:17 -- common/autotest_common.sh@10 -- # set +x 00:26:43.715 nvme0n1 00:26:43.715 03:27:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:43.715 03:27:18 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:43.715 03:27:18 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:43.715 03:27:18 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:26:43.715 03:27:18 -- common/autotest_common.sh@10 -- # set +x 00:26:43.715 03:27:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:43.715 03:27:18 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:43.715 03:27:18 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:43.715 03:27:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:43.715 03:27:18 -- common/autotest_common.sh@10 -- # set +x 00:26:43.715 03:27:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:43.715 03:27:18 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:43.715 03:27:18 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:26:43.715 03:27:18 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:43.715 03:27:18 -- host/auth.sh@44 -- # digest=sha384 00:26:43.715 03:27:18 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:43.715 03:27:18 -- host/auth.sh@44 -- # keyid=1 00:26:43.715 03:27:18 -- host/auth.sh@45 -- # key=DHHC-1:00:OGMxMGM0MTFkMWY1NGI5NTc0NDNiODliMThjOWU1N2MzMzViNjJhMTRjNmRjMWFjsMUppw==: 00:26:43.715 03:27:18 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:26:43.715 03:27:18 -- host/auth.sh@48 -- # echo ffdhe6144 00:26:43.715 03:27:18 -- host/auth.sh@49 -- # echo DHHC-1:00:OGMxMGM0MTFkMWY1NGI5NTc0NDNiODliMThjOWU1N2MzMzViNjJhMTRjNmRjMWFjsMUppw==: 00:26:43.715 03:27:18 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 1 00:26:43.715 03:27:18 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:43.715 03:27:18 -- host/auth.sh@68 -- # digest=sha384 00:26:43.715 03:27:18 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:26:43.715 03:27:18 -- host/auth.sh@68 -- # keyid=1 00:26:43.715 03:27:18 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:43.715 03:27:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:43.715 03:27:18 -- common/autotest_common.sh@10 -- # set +x 00:26:43.715 03:27:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 
]] 00:26:43.715 03:27:18 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:43.715 03:27:18 -- nvmf/common.sh@717 -- # local ip 00:26:43.715 03:27:18 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:43.975 03:27:18 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:43.975 03:27:18 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:43.975 03:27:18 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:43.975 03:27:18 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:43.975 03:27:18 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:43.975 03:27:18 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:43.975 03:27:18 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:43.975 03:27:18 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:43.975 03:27:18 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:26:43.975 03:27:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:43.975 03:27:18 -- common/autotest_common.sh@10 -- # set +x 00:26:44.546 nvme0n1 00:26:44.546 03:27:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:44.546 03:27:18 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:44.546 03:27:18 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:44.546 03:27:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:44.546 03:27:18 -- common/autotest_common.sh@10 -- # set +x 00:26:44.546 03:27:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:44.546 03:27:18 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:44.546 03:27:18 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:44.546 03:27:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:44.546 03:27:18 -- common/autotest_common.sh@10 -- # set +x 00:26:44.546 03:27:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:44.546 03:27:18 -- host/auth.sh@109 -- # for 
keyid in "${!keys[@]}" 00:26:44.546 03:27:18 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:26:44.546 03:27:18 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:44.546 03:27:18 -- host/auth.sh@44 -- # digest=sha384 00:26:44.546 03:27:18 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:44.546 03:27:18 -- host/auth.sh@44 -- # keyid=2 00:26:44.546 03:27:18 -- host/auth.sh@45 -- # key=DHHC-1:01:Njc1MDkxOTg2Y2YxNzliN2ZlNWI3MmU1NjEyMjQ4OTOIsYpe: 00:26:44.546 03:27:18 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:26:44.546 03:27:18 -- host/auth.sh@48 -- # echo ffdhe6144 00:26:44.546 03:27:18 -- host/auth.sh@49 -- # echo DHHC-1:01:Njc1MDkxOTg2Y2YxNzliN2ZlNWI3MmU1NjEyMjQ4OTOIsYpe: 00:26:44.546 03:27:18 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 2 00:26:44.546 03:27:18 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:44.546 03:27:18 -- host/auth.sh@68 -- # digest=sha384 00:26:44.546 03:27:18 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:26:44.546 03:27:18 -- host/auth.sh@68 -- # keyid=2 00:26:44.546 03:27:18 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:44.546 03:27:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:44.546 03:27:18 -- common/autotest_common.sh@10 -- # set +x 00:26:44.546 03:27:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:44.546 03:27:18 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:44.546 03:27:18 -- nvmf/common.sh@717 -- # local ip 00:26:44.546 03:27:18 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:44.546 03:27:18 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:44.546 03:27:18 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:44.546 03:27:18 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:44.546 03:27:18 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:44.546 03:27:18 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:44.546 03:27:18 -- 
nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:44.546 03:27:18 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:44.546 03:27:18 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:44.546 03:27:18 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:44.546 03:27:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:44.546 03:27:18 -- common/autotest_common.sh@10 -- # set +x 00:26:45.117 nvme0n1 00:26:45.117 03:27:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:45.117 03:27:19 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:45.117 03:27:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:45.117 03:27:19 -- common/autotest_common.sh@10 -- # set +x 00:26:45.117 03:27:19 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:45.117 03:27:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:45.117 03:27:19 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:45.117 03:27:19 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:45.117 03:27:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:45.117 03:27:19 -- common/autotest_common.sh@10 -- # set +x 00:26:45.117 03:27:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:45.117 03:27:19 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:45.117 03:27:19 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:26:45.117 03:27:19 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:45.117 03:27:19 -- host/auth.sh@44 -- # digest=sha384 00:26:45.117 03:27:19 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:45.117 03:27:19 -- host/auth.sh@44 -- # keyid=3 00:26:45.117 03:27:19 -- host/auth.sh@45 -- # key=DHHC-1:02:ODQ3ZDUzYjFiYmEwNmEyOGExYjBkMzIyZTE1ZTdiNDAwOGFhODEwOTQyMDNlY2Ri2DqywQ==: 00:26:45.117 03:27:19 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:26:45.117 03:27:19 -- host/auth.sh@48 
-- # echo ffdhe6144 00:26:45.117 03:27:19 -- host/auth.sh@49 -- # echo DHHC-1:02:ODQ3ZDUzYjFiYmEwNmEyOGExYjBkMzIyZTE1ZTdiNDAwOGFhODEwOTQyMDNlY2Ri2DqywQ==: 00:26:45.117 03:27:19 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 3 00:26:45.117 03:27:19 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:45.117 03:27:19 -- host/auth.sh@68 -- # digest=sha384 00:26:45.117 03:27:19 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:26:45.117 03:27:19 -- host/auth.sh@68 -- # keyid=3 00:26:45.117 03:27:19 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:45.117 03:27:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:45.117 03:27:19 -- common/autotest_common.sh@10 -- # set +x 00:26:45.117 03:27:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:45.117 03:27:19 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:45.117 03:27:19 -- nvmf/common.sh@717 -- # local ip 00:26:45.117 03:27:19 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:45.117 03:27:19 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:45.117 03:27:19 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:45.117 03:27:19 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:45.117 03:27:19 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:45.117 03:27:19 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:45.117 03:27:19 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:45.117 03:27:19 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:45.117 03:27:19 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:45.117 03:27:19 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:26:45.117 03:27:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:45.117 03:27:19 -- common/autotest_common.sh@10 -- # set +x 00:26:45.684 nvme0n1 00:26:45.684 03:27:19 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:45.684 03:27:19 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:45.684 03:27:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:45.684 03:27:19 -- common/autotest_common.sh@10 -- # set +x 00:26:45.684 03:27:19 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:45.684 03:27:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:45.684 03:27:20 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:45.684 03:27:20 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:45.684 03:27:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:45.684 03:27:20 -- common/autotest_common.sh@10 -- # set +x 00:26:45.684 03:27:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:45.684 03:27:20 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:45.684 03:27:20 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:26:45.684 03:27:20 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:45.684 03:27:20 -- host/auth.sh@44 -- # digest=sha384 00:26:45.684 03:27:20 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:45.684 03:27:20 -- host/auth.sh@44 -- # keyid=4 00:26:45.684 03:27:20 -- host/auth.sh@45 -- # key=DHHC-1:03:ODI0M2RhMDM4Y2QxYzRlYzI4NjI4MzVjZDA5OWJlOGFiYTI0OTc3MjlhZmJmNjdhYmM2ZTgzMGY2MmQ2ZDJkMi4LGic=: 00:26:45.684 03:27:20 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:26:45.684 03:27:20 -- host/auth.sh@48 -- # echo ffdhe6144 00:26:45.684 03:27:20 -- host/auth.sh@49 -- # echo DHHC-1:03:ODI0M2RhMDM4Y2QxYzRlYzI4NjI4MzVjZDA5OWJlOGFiYTI0OTc3MjlhZmJmNjdhYmM2ZTgzMGY2MmQ2ZDJkMi4LGic=: 00:26:45.684 03:27:20 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 4 00:26:45.684 03:27:20 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:45.684 03:27:20 -- host/auth.sh@68 -- # digest=sha384 00:26:45.684 03:27:20 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:26:45.684 03:27:20 -- host/auth.sh@68 -- # keyid=4 00:26:45.684 03:27:20 -- 
host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:45.684 03:27:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:45.684 03:27:20 -- common/autotest_common.sh@10 -- # set +x 00:26:45.684 03:27:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:45.684 03:27:20 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:45.684 03:27:20 -- nvmf/common.sh@717 -- # local ip 00:26:45.684 03:27:20 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:45.684 03:27:20 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:45.684 03:27:20 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:45.684 03:27:20 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:45.684 03:27:20 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:45.684 03:27:20 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:45.684 03:27:20 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:45.684 03:27:20 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:45.684 03:27:20 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:45.684 03:27:20 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:45.684 03:27:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:45.684 03:27:20 -- common/autotest_common.sh@10 -- # set +x 00:26:46.250 nvme0n1 00:26:46.250 03:27:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:46.250 03:27:20 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:46.250 03:27:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:46.250 03:27:20 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:46.250 03:27:20 -- common/autotest_common.sh@10 -- # set +x 00:26:46.250 03:27:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:46.250 03:27:20 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:46.250 03:27:20 -- host/auth.sh@74 
-- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:46.250 03:27:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:46.250 03:27:20 -- common/autotest_common.sh@10 -- # set +x 00:26:46.250 03:27:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:46.250 03:27:20 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:26:46.250 03:27:20 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:46.250 03:27:20 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:26:46.250 03:27:20 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:46.250 03:27:20 -- host/auth.sh@44 -- # digest=sha384 00:26:46.250 03:27:20 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:46.250 03:27:20 -- host/auth.sh@44 -- # keyid=0 00:26:46.250 03:27:20 -- host/auth.sh@45 -- # key=DHHC-1:00:YzNjNWE1ODdhNmEyYzEwN2Q3MTc3MzAzOWQ1OTkyM2H6n/Cu: 00:26:46.250 03:27:20 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:26:46.250 03:27:20 -- host/auth.sh@48 -- # echo ffdhe8192 00:26:46.250 03:27:20 -- host/auth.sh@49 -- # echo DHHC-1:00:YzNjNWE1ODdhNmEyYzEwN2Q3MTc3MzAzOWQ1OTkyM2H6n/Cu: 00:26:46.250 03:27:20 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 0 00:26:46.250 03:27:20 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:46.250 03:27:20 -- host/auth.sh@68 -- # digest=sha384 00:26:46.250 03:27:20 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:26:46.250 03:27:20 -- host/auth.sh@68 -- # keyid=0 00:26:46.250 03:27:20 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:46.250 03:27:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:46.250 03:27:20 -- common/autotest_common.sh@10 -- # set +x 00:26:46.250 03:27:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:46.250 03:27:20 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:46.250 03:27:20 -- nvmf/common.sh@717 -- # local ip 00:26:46.250 03:27:20 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:46.250 03:27:20 -- 
nvmf/common.sh@718 -- # local -A ip_candidates 00:26:46.250 03:27:20 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:46.250 03:27:20 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:46.250 03:27:20 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:46.250 03:27:20 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:46.250 03:27:20 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:46.250 03:27:20 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:46.250 03:27:20 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:46.250 03:27:20 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:26:46.250 03:27:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:46.250 03:27:20 -- common/autotest_common.sh@10 -- # set +x 00:26:47.193 nvme0n1 00:26:47.193 03:27:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:47.193 03:27:21 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:47.193 03:27:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:47.193 03:27:21 -- common/autotest_common.sh@10 -- # set +x 00:26:47.193 03:27:21 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:47.193 03:27:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:47.193 03:27:21 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:47.193 03:27:21 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:47.193 03:27:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:47.193 03:27:21 -- common/autotest_common.sh@10 -- # set +x 00:26:47.193 03:27:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:47.193 03:27:21 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:47.193 03:27:21 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:26:47.193 03:27:21 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:47.193 03:27:21 -- 
host/auth.sh@44 -- # digest=sha384 00:26:47.193 03:27:21 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:47.193 03:27:21 -- host/auth.sh@44 -- # keyid=1 00:26:47.193 03:27:21 -- host/auth.sh@45 -- # key=DHHC-1:00:OGMxMGM0MTFkMWY1NGI5NTc0NDNiODliMThjOWU1N2MzMzViNjJhMTRjNmRjMWFjsMUppw==: 00:26:47.193 03:27:21 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:26:47.193 03:27:21 -- host/auth.sh@48 -- # echo ffdhe8192 00:26:47.193 03:27:21 -- host/auth.sh@49 -- # echo DHHC-1:00:OGMxMGM0MTFkMWY1NGI5NTc0NDNiODliMThjOWU1N2MzMzViNjJhMTRjNmRjMWFjsMUppw==: 00:26:47.193 03:27:21 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 1 00:26:47.193 03:27:21 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:47.193 03:27:21 -- host/auth.sh@68 -- # digest=sha384 00:26:47.193 03:27:21 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:26:47.193 03:27:21 -- host/auth.sh@68 -- # keyid=1 00:26:47.193 03:27:21 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:47.193 03:27:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:47.193 03:27:21 -- common/autotest_common.sh@10 -- # set +x 00:26:47.453 03:27:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:47.453 03:27:21 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:47.453 03:27:21 -- nvmf/common.sh@717 -- # local ip 00:26:47.453 03:27:21 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:47.453 03:27:21 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:47.453 03:27:21 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:47.453 03:27:21 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:47.453 03:27:21 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:47.453 03:27:21 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:47.453 03:27:21 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:47.453 03:27:21 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:47.453 03:27:21 -- nvmf/common.sh@731 -- # echo 
10.0.0.1 00:26:47.453 03:27:21 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:26:47.453 03:27:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:47.453 03:27:21 -- common/autotest_common.sh@10 -- # set +x 00:26:48.391 nvme0n1 00:26:48.391 03:27:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:48.391 03:27:22 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:48.391 03:27:22 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:48.391 03:27:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:48.391 03:27:22 -- common/autotest_common.sh@10 -- # set +x 00:26:48.391 03:27:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:48.391 03:27:22 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:48.391 03:27:22 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:48.391 03:27:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:48.391 03:27:22 -- common/autotest_common.sh@10 -- # set +x 00:26:48.391 03:27:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:48.391 03:27:22 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:48.391 03:27:22 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:26:48.391 03:27:22 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:48.391 03:27:22 -- host/auth.sh@44 -- # digest=sha384 00:26:48.391 03:27:22 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:48.391 03:27:22 -- host/auth.sh@44 -- # keyid=2 00:26:48.391 03:27:22 -- host/auth.sh@45 -- # key=DHHC-1:01:Njc1MDkxOTg2Y2YxNzliN2ZlNWI3MmU1NjEyMjQ4OTOIsYpe: 00:26:48.391 03:27:22 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:26:48.391 03:27:22 -- host/auth.sh@48 -- # echo ffdhe8192 00:26:48.391 03:27:22 -- host/auth.sh@49 -- # echo DHHC-1:01:Njc1MDkxOTg2Y2YxNzliN2ZlNWI3MmU1NjEyMjQ4OTOIsYpe: 00:26:48.391 03:27:22 -- host/auth.sh@111 -- # 
connect_authenticate sha384 ffdhe8192 2 00:26:48.391 03:27:22 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:48.391 03:27:22 -- host/auth.sh@68 -- # digest=sha384 00:26:48.391 03:27:22 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:26:48.391 03:27:22 -- host/auth.sh@68 -- # keyid=2 00:26:48.391 03:27:22 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:48.391 03:27:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:48.391 03:27:22 -- common/autotest_common.sh@10 -- # set +x 00:26:48.391 03:27:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:48.391 03:27:22 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:48.391 03:27:22 -- nvmf/common.sh@717 -- # local ip 00:26:48.391 03:27:22 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:48.391 03:27:22 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:48.391 03:27:22 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:48.391 03:27:22 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:48.391 03:27:22 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:48.391 03:27:22 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:48.391 03:27:22 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:48.391 03:27:22 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:48.391 03:27:22 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:48.391 03:27:22 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:48.391 03:27:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:48.391 03:27:22 -- common/autotest_common.sh@10 -- # set +x 00:26:49.329 nvme0n1 00:26:49.329 03:27:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:49.329 03:27:23 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:49.329 03:27:23 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:26:49.329 03:27:23 -- common/autotest_common.sh@10 -- # set +x 00:26:49.329 03:27:23 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:49.329 03:27:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:49.329 03:27:23 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:49.329 03:27:23 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:49.329 03:27:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:49.329 03:27:23 -- common/autotest_common.sh@10 -- # set +x 00:26:49.329 03:27:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:49.329 03:27:23 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:49.329 03:27:23 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:26:49.329 03:27:23 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:49.329 03:27:23 -- host/auth.sh@44 -- # digest=sha384 00:26:49.329 03:27:23 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:49.329 03:27:23 -- host/auth.sh@44 -- # keyid=3 00:26:49.329 03:27:23 -- host/auth.sh@45 -- # key=DHHC-1:02:ODQ3ZDUzYjFiYmEwNmEyOGExYjBkMzIyZTE1ZTdiNDAwOGFhODEwOTQyMDNlY2Ri2DqywQ==: 00:26:49.329 03:27:23 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:26:49.329 03:27:23 -- host/auth.sh@48 -- # echo ffdhe8192 00:26:49.329 03:27:23 -- host/auth.sh@49 -- # echo DHHC-1:02:ODQ3ZDUzYjFiYmEwNmEyOGExYjBkMzIyZTE1ZTdiNDAwOGFhODEwOTQyMDNlY2Ri2DqywQ==: 00:26:49.329 03:27:23 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 3 00:26:49.329 03:27:23 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:49.329 03:27:23 -- host/auth.sh@68 -- # digest=sha384 00:26:49.329 03:27:23 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:26:49.329 03:27:23 -- host/auth.sh@68 -- # keyid=3 00:26:49.329 03:27:23 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:49.329 03:27:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:49.329 03:27:23 -- common/autotest_common.sh@10 -- 
# set +x 00:26:49.329 03:27:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:49.329 03:27:23 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:49.329 03:27:23 -- nvmf/common.sh@717 -- # local ip 00:26:49.329 03:27:23 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:49.329 03:27:23 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:49.329 03:27:23 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:49.329 03:27:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:49.329 03:27:23 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:49.329 03:27:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:49.329 03:27:23 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:49.329 03:27:23 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:49.329 03:27:23 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:49.329 03:27:23 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:26:49.329 03:27:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:49.329 03:27:23 -- common/autotest_common.sh@10 -- # set +x 00:26:50.267 nvme0n1 00:26:50.267 03:27:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:50.527 03:27:24 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:50.527 03:27:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:50.527 03:27:24 -- common/autotest_common.sh@10 -- # set +x 00:26:50.527 03:27:24 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:50.527 03:27:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:50.527 03:27:24 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:50.527 03:27:24 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:50.527 03:27:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:50.527 03:27:24 -- common/autotest_common.sh@10 -- # set +x 00:26:50.527 03:27:24 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:50.527 03:27:24 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:50.527 03:27:24 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:26:50.527 03:27:24 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:50.527 03:27:24 -- host/auth.sh@44 -- # digest=sha384 00:26:50.527 03:27:24 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:50.527 03:27:24 -- host/auth.sh@44 -- # keyid=4 00:26:50.527 03:27:24 -- host/auth.sh@45 -- # key=DHHC-1:03:ODI0M2RhMDM4Y2QxYzRlYzI4NjI4MzVjZDA5OWJlOGFiYTI0OTc3MjlhZmJmNjdhYmM2ZTgzMGY2MmQ2ZDJkMi4LGic=: 00:26:50.527 03:27:24 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:26:50.527 03:27:24 -- host/auth.sh@48 -- # echo ffdhe8192 00:26:50.527 03:27:24 -- host/auth.sh@49 -- # echo DHHC-1:03:ODI0M2RhMDM4Y2QxYzRlYzI4NjI4MzVjZDA5OWJlOGFiYTI0OTc3MjlhZmJmNjdhYmM2ZTgzMGY2MmQ2ZDJkMi4LGic=: 00:26:50.527 03:27:24 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 4 00:26:50.527 03:27:24 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:50.527 03:27:24 -- host/auth.sh@68 -- # digest=sha384 00:26:50.527 03:27:24 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:26:50.527 03:27:24 -- host/auth.sh@68 -- # keyid=4 00:26:50.527 03:27:24 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:50.527 03:27:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:50.527 03:27:24 -- common/autotest_common.sh@10 -- # set +x 00:26:50.527 03:27:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:50.527 03:27:24 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:50.527 03:27:24 -- nvmf/common.sh@717 -- # local ip 00:26:50.527 03:27:24 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:50.527 03:27:24 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:50.527 03:27:24 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:50.527 03:27:24 -- nvmf/common.sh@721 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:50.527 03:27:24 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:50.527 03:27:24 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:50.527 03:27:24 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:50.527 03:27:24 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:50.527 03:27:24 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:50.527 03:27:24 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:50.527 03:27:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:50.527 03:27:24 -- common/autotest_common.sh@10 -- # set +x 00:26:51.463 nvme0n1 00:26:51.463 03:27:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:51.463 03:27:25 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:51.463 03:27:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:51.463 03:27:25 -- common/autotest_common.sh@10 -- # set +x 00:26:51.463 03:27:25 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:51.463 03:27:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:51.463 03:27:25 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:51.463 03:27:25 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:51.463 03:27:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:51.463 03:27:25 -- common/autotest_common.sh@10 -- # set +x 00:26:51.463 03:27:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:51.463 03:27:25 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:26:51.463 03:27:25 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:26:51.463 03:27:25 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:51.463 03:27:25 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:26:51.463 03:27:25 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:51.463 03:27:25 -- host/auth.sh@44 -- # digest=sha512 
00:26:51.463 03:27:25 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:51.463 03:27:25 -- host/auth.sh@44 -- # keyid=0 00:26:51.463 03:27:25 -- host/auth.sh@45 -- # key=DHHC-1:00:YzNjNWE1ODdhNmEyYzEwN2Q3MTc3MzAzOWQ1OTkyM2H6n/Cu: 00:26:51.463 03:27:25 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:26:51.463 03:27:25 -- host/auth.sh@48 -- # echo ffdhe2048 00:26:51.463 03:27:25 -- host/auth.sh@49 -- # echo DHHC-1:00:YzNjNWE1ODdhNmEyYzEwN2Q3MTc3MzAzOWQ1OTkyM2H6n/Cu: 00:26:51.463 03:27:25 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 0 00:26:51.463 03:27:25 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:51.463 03:27:25 -- host/auth.sh@68 -- # digest=sha512 00:26:51.463 03:27:25 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:26:51.463 03:27:25 -- host/auth.sh@68 -- # keyid=0 00:26:51.463 03:27:25 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:51.463 03:27:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:51.463 03:27:25 -- common/autotest_common.sh@10 -- # set +x 00:26:51.463 03:27:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:51.463 03:27:25 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:51.463 03:27:25 -- nvmf/common.sh@717 -- # local ip 00:26:51.463 03:27:25 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:51.463 03:27:25 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:51.463 03:27:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:51.463 03:27:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:51.463 03:27:25 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:51.463 03:27:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:51.463 03:27:25 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:51.463 03:27:25 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:51.463 03:27:25 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:51.463 03:27:25 -- host/auth.sh@70 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:26:51.463 03:27:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:51.463 03:27:25 -- common/autotest_common.sh@10 -- # set +x 00:26:51.722 nvme0n1 00:26:51.722 03:27:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:51.722 03:27:26 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:51.722 03:27:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:51.722 03:27:26 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:51.722 03:27:26 -- common/autotest_common.sh@10 -- # set +x 00:26:51.722 03:27:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:51.722 03:27:26 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:51.722 03:27:26 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:51.722 03:27:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:51.722 03:27:26 -- common/autotest_common.sh@10 -- # set +x 00:26:51.722 03:27:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:51.722 03:27:26 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:51.722 03:27:26 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:26:51.722 03:27:26 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:51.722 03:27:26 -- host/auth.sh@44 -- # digest=sha512 00:26:51.722 03:27:26 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:51.722 03:27:26 -- host/auth.sh@44 -- # keyid=1 00:26:51.722 03:27:26 -- host/auth.sh@45 -- # key=DHHC-1:00:OGMxMGM0MTFkMWY1NGI5NTc0NDNiODliMThjOWU1N2MzMzViNjJhMTRjNmRjMWFjsMUppw==: 00:26:51.722 03:27:26 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:26:51.722 03:27:26 -- host/auth.sh@48 -- # echo ffdhe2048 00:26:51.722 03:27:26 -- host/auth.sh@49 -- # echo DHHC-1:00:OGMxMGM0MTFkMWY1NGI5NTc0NDNiODliMThjOWU1N2MzMzViNjJhMTRjNmRjMWFjsMUppw==: 00:26:51.722 03:27:26 -- host/auth.sh@111 -- # connect_authenticate sha512 
ffdhe2048 1 00:26:51.722 03:27:26 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:51.722 03:27:26 -- host/auth.sh@68 -- # digest=sha512 00:26:51.722 03:27:26 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:26:51.722 03:27:26 -- host/auth.sh@68 -- # keyid=1 00:26:51.722 03:27:26 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:51.722 03:27:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:51.722 03:27:26 -- common/autotest_common.sh@10 -- # set +x 00:26:51.722 03:27:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:51.722 03:27:26 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:51.722 03:27:26 -- nvmf/common.sh@717 -- # local ip 00:26:51.722 03:27:26 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:51.722 03:27:26 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:51.722 03:27:26 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:51.722 03:27:26 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:51.722 03:27:26 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:51.722 03:27:26 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:51.722 03:27:26 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:51.722 03:27:26 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:51.722 03:27:26 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:51.722 03:27:26 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:26:51.722 03:27:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:51.722 03:27:26 -- common/autotest_common.sh@10 -- # set +x 00:26:51.980 nvme0n1 00:26:51.980 03:27:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:51.980 03:27:26 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:51.980 03:27:26 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:51.980 03:27:26 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:26:51.980 03:27:26 -- common/autotest_common.sh@10 -- # set +x 00:26:51.980 03:27:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:51.980 03:27:26 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:51.980 03:27:26 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:51.980 03:27:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:51.980 03:27:26 -- common/autotest_common.sh@10 -- # set +x 00:26:51.980 03:27:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:51.980 03:27:26 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:51.980 03:27:26 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:26:51.980 03:27:26 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:51.980 03:27:26 -- host/auth.sh@44 -- # digest=sha512 00:26:51.980 03:27:26 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:51.980 03:27:26 -- host/auth.sh@44 -- # keyid=2 00:26:51.980 03:27:26 -- host/auth.sh@45 -- # key=DHHC-1:01:Njc1MDkxOTg2Y2YxNzliN2ZlNWI3MmU1NjEyMjQ4OTOIsYpe: 00:26:51.980 03:27:26 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:26:51.980 03:27:26 -- host/auth.sh@48 -- # echo ffdhe2048 00:26:51.980 03:27:26 -- host/auth.sh@49 -- # echo DHHC-1:01:Njc1MDkxOTg2Y2YxNzliN2ZlNWI3MmU1NjEyMjQ4OTOIsYpe: 00:26:51.980 03:27:26 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 2 00:26:51.980 03:27:26 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:51.980 03:27:26 -- host/auth.sh@68 -- # digest=sha512 00:26:51.980 03:27:26 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:26:51.980 03:27:26 -- host/auth.sh@68 -- # keyid=2 00:26:51.980 03:27:26 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:51.980 03:27:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:51.980 03:27:26 -- common/autotest_common.sh@10 -- # set +x 00:26:51.980 03:27:26 -- common/autotest_common.sh@577 -- # [[ 0 == 
0 ]] 00:26:51.980 03:27:26 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:51.980 03:27:26 -- nvmf/common.sh@717 -- # local ip 00:26:51.980 03:27:26 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:51.980 03:27:26 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:51.980 03:27:26 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:51.980 03:27:26 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:51.980 03:27:26 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:51.980 03:27:26 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:51.980 03:27:26 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:51.980 03:27:26 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:51.980 03:27:26 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:51.980 03:27:26 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:51.980 03:27:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:51.980 03:27:26 -- common/autotest_common.sh@10 -- # set +x 00:26:51.980 nvme0n1 00:26:51.980 03:27:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:51.980 03:27:26 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:51.980 03:27:26 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:51.980 03:27:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:51.980 03:27:26 -- common/autotest_common.sh@10 -- # set +x 00:26:51.980 03:27:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:52.239 03:27:26 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:52.239 03:27:26 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:52.239 03:27:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:52.239 03:27:26 -- common/autotest_common.sh@10 -- # set +x 00:26:52.239 03:27:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:52.239 03:27:26 -- host/auth.sh@109 -- # for 
keyid in "${!keys[@]}" 00:26:52.239 03:27:26 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:26:52.239 03:27:26 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:52.239 03:27:26 -- host/auth.sh@44 -- # digest=sha512 00:26:52.239 03:27:26 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:52.239 03:27:26 -- host/auth.sh@44 -- # keyid=3 00:26:52.239 03:27:26 -- host/auth.sh@45 -- # key=DHHC-1:02:ODQ3ZDUzYjFiYmEwNmEyOGExYjBkMzIyZTE1ZTdiNDAwOGFhODEwOTQyMDNlY2Ri2DqywQ==: 00:26:52.239 03:27:26 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:26:52.239 03:27:26 -- host/auth.sh@48 -- # echo ffdhe2048 00:26:52.239 03:27:26 -- host/auth.sh@49 -- # echo DHHC-1:02:ODQ3ZDUzYjFiYmEwNmEyOGExYjBkMzIyZTE1ZTdiNDAwOGFhODEwOTQyMDNlY2Ri2DqywQ==: 00:26:52.239 03:27:26 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 3 00:26:52.239 03:27:26 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:52.239 03:27:26 -- host/auth.sh@68 -- # digest=sha512 00:26:52.239 03:27:26 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:26:52.239 03:27:26 -- host/auth.sh@68 -- # keyid=3 00:26:52.239 03:27:26 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:52.239 03:27:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:52.239 03:27:26 -- common/autotest_common.sh@10 -- # set +x 00:26:52.239 03:27:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:52.239 03:27:26 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:52.239 03:27:26 -- nvmf/common.sh@717 -- # local ip 00:26:52.239 03:27:26 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:52.239 03:27:26 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:52.239 03:27:26 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:52.239 03:27:26 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:52.239 03:27:26 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:52.239 03:27:26 -- nvmf/common.sh@723 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:26:52.239 03:27:26 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:52.239 03:27:26 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:52.239 03:27:26 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:52.239 03:27:26 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:26:52.239 03:27:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:52.239 03:27:26 -- common/autotest_common.sh@10 -- # set +x 00:26:52.239 nvme0n1 00:26:52.239 03:27:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:52.239 03:27:26 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:52.239 03:27:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:52.239 03:27:26 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:52.239 03:27:26 -- common/autotest_common.sh@10 -- # set +x 00:26:52.239 03:27:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:52.239 03:27:26 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:52.239 03:27:26 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:52.239 03:27:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:52.239 03:27:26 -- common/autotest_common.sh@10 -- # set +x 00:26:52.239 03:27:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:52.239 03:27:26 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:52.239 03:27:26 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:26:52.239 03:27:26 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:52.239 03:27:26 -- host/auth.sh@44 -- # digest=sha512 00:26:52.239 03:27:26 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:52.239 03:27:26 -- host/auth.sh@44 -- # keyid=4 00:26:52.239 03:27:26 -- host/auth.sh@45 -- # key=DHHC-1:03:ODI0M2RhMDM4Y2QxYzRlYzI4NjI4MzVjZDA5OWJlOGFiYTI0OTc3MjlhZmJmNjdhYmM2ZTgzMGY2MmQ2ZDJkMi4LGic=: 00:26:52.239 03:27:26 -- host/auth.sh@47 
-- # echo 'hmac(sha512)' 00:26:52.239 03:27:26 -- host/auth.sh@48 -- # echo ffdhe2048 00:26:52.239 03:27:26 -- host/auth.sh@49 -- # echo DHHC-1:03:ODI0M2RhMDM4Y2QxYzRlYzI4NjI4MzVjZDA5OWJlOGFiYTI0OTc3MjlhZmJmNjdhYmM2ZTgzMGY2MmQ2ZDJkMi4LGic=: 00:26:52.239 03:27:26 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 4 00:26:52.239 03:27:26 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:52.239 03:27:26 -- host/auth.sh@68 -- # digest=sha512 00:26:52.239 03:27:26 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:26:52.239 03:27:26 -- host/auth.sh@68 -- # keyid=4 00:26:52.239 03:27:26 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:52.239 03:27:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:52.239 03:27:26 -- common/autotest_common.sh@10 -- # set +x 00:26:52.500 03:27:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:52.500 03:27:26 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:52.500 03:27:26 -- nvmf/common.sh@717 -- # local ip 00:26:52.500 03:27:26 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:52.500 03:27:26 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:52.500 03:27:26 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:52.500 03:27:26 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:52.500 03:27:26 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:52.500 03:27:26 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:52.500 03:27:26 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:52.500 03:27:26 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:52.500 03:27:26 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:52.500 03:27:26 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:52.500 03:27:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:52.500 03:27:26 -- 
common/autotest_common.sh@10 -- # set +x 00:26:52.500 nvme0n1 00:26:52.500 03:27:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:52.500 03:27:26 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:52.500 03:27:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:52.500 03:27:26 -- common/autotest_common.sh@10 -- # set +x 00:26:52.500 03:27:26 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:52.500 03:27:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:52.500 03:27:26 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:52.500 03:27:26 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:52.500 03:27:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:52.500 03:27:26 -- common/autotest_common.sh@10 -- # set +x 00:26:52.500 03:27:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:52.500 03:27:26 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:26:52.500 03:27:26 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:52.500 03:27:26 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:26:52.500 03:27:26 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:52.500 03:27:26 -- host/auth.sh@44 -- # digest=sha512 00:26:52.500 03:27:26 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:52.500 03:27:26 -- host/auth.sh@44 -- # keyid=0 00:26:52.500 03:27:26 -- host/auth.sh@45 -- # key=DHHC-1:00:YzNjNWE1ODdhNmEyYzEwN2Q3MTc3MzAzOWQ1OTkyM2H6n/Cu: 00:26:52.500 03:27:26 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:26:52.500 03:27:26 -- host/auth.sh@48 -- # echo ffdhe3072 00:26:52.500 03:27:26 -- host/auth.sh@49 -- # echo DHHC-1:00:YzNjNWE1ODdhNmEyYzEwN2Q3MTc3MzAzOWQ1OTkyM2H6n/Cu: 00:26:52.500 03:27:26 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 0 00:26:52.500 03:27:26 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:52.500 03:27:26 -- host/auth.sh@68 -- # digest=sha512 00:26:52.500 03:27:26 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 
00:26:52.500 03:27:26 -- host/auth.sh@68 -- # keyid=0 00:26:52.500 03:27:26 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:52.500 03:27:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:52.500 03:27:26 -- common/autotest_common.sh@10 -- # set +x 00:26:52.500 03:27:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:52.500 03:27:26 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:52.500 03:27:26 -- nvmf/common.sh@717 -- # local ip 00:26:52.500 03:27:26 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:52.500 03:27:26 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:52.500 03:27:26 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:52.500 03:27:26 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:52.500 03:27:26 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:52.500 03:27:26 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:52.500 03:27:26 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:52.500 03:27:26 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:52.500 03:27:26 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:52.500 03:27:26 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:26:52.500 03:27:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:52.500 03:27:26 -- common/autotest_common.sh@10 -- # set +x 00:26:52.759 nvme0n1 00:26:52.759 03:27:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:52.759 03:27:27 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:52.759 03:27:27 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:52.759 03:27:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:52.759 03:27:27 -- common/autotest_common.sh@10 -- # set +x 00:26:52.759 03:27:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:52.759 03:27:27 -- 
host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:52.759 03:27:27 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:52.759 03:27:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:52.759 03:27:27 -- common/autotest_common.sh@10 -- # set +x 00:26:52.759 03:27:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:52.759 03:27:27 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:52.759 03:27:27 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:26:52.759 03:27:27 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:52.759 03:27:27 -- host/auth.sh@44 -- # digest=sha512 00:26:52.759 03:27:27 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:52.759 03:27:27 -- host/auth.sh@44 -- # keyid=1 00:26:52.759 03:27:27 -- host/auth.sh@45 -- # key=DHHC-1:00:OGMxMGM0MTFkMWY1NGI5NTc0NDNiODliMThjOWU1N2MzMzViNjJhMTRjNmRjMWFjsMUppw==: 00:26:52.759 03:27:27 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:26:52.759 03:27:27 -- host/auth.sh@48 -- # echo ffdhe3072 00:26:52.759 03:27:27 -- host/auth.sh@49 -- # echo DHHC-1:00:OGMxMGM0MTFkMWY1NGI5NTc0NDNiODliMThjOWU1N2MzMzViNjJhMTRjNmRjMWFjsMUppw==: 00:26:52.759 03:27:27 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 1 00:26:52.759 03:27:27 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:52.759 03:27:27 -- host/auth.sh@68 -- # digest=sha512 00:26:52.759 03:27:27 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:26:52.759 03:27:27 -- host/auth.sh@68 -- # keyid=1 00:26:52.759 03:27:27 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:52.759 03:27:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:52.759 03:27:27 -- common/autotest_common.sh@10 -- # set +x 00:26:52.759 03:27:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:52.759 03:27:27 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:52.759 03:27:27 -- nvmf/common.sh@717 -- # local ip 00:26:52.759 03:27:27 -- 
nvmf/common.sh@718 -- # ip_candidates=() 00:26:52.759 03:27:27 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:52.759 03:27:27 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:52.759 03:27:27 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:52.759 03:27:27 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:52.759 03:27:27 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:52.759 03:27:27 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:52.759 03:27:27 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:52.759 03:27:27 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:52.759 03:27:27 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:26:52.759 03:27:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:52.759 03:27:27 -- common/autotest_common.sh@10 -- # set +x 00:26:53.019 nvme0n1 00:26:53.019 03:27:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:53.019 03:27:27 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:53.019 03:27:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:53.019 03:27:27 -- common/autotest_common.sh@10 -- # set +x 00:26:53.019 03:27:27 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:53.019 03:27:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:53.019 03:27:27 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:53.020 03:27:27 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:53.020 03:27:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:53.020 03:27:27 -- common/autotest_common.sh@10 -- # set +x 00:26:53.020 03:27:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:53.020 03:27:27 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:53.020 03:27:27 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:26:53.020 03:27:27 -- host/auth.sh@42 
-- # local digest dhgroup keyid key 00:26:53.020 03:27:27 -- host/auth.sh@44 -- # digest=sha512 00:26:53.020 03:27:27 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:53.020 03:27:27 -- host/auth.sh@44 -- # keyid=2 00:26:53.020 03:27:27 -- host/auth.sh@45 -- # key=DHHC-1:01:Njc1MDkxOTg2Y2YxNzliN2ZlNWI3MmU1NjEyMjQ4OTOIsYpe: 00:26:53.020 03:27:27 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:26:53.020 03:27:27 -- host/auth.sh@48 -- # echo ffdhe3072 00:26:53.020 03:27:27 -- host/auth.sh@49 -- # echo DHHC-1:01:Njc1MDkxOTg2Y2YxNzliN2ZlNWI3MmU1NjEyMjQ4OTOIsYpe: 00:26:53.020 03:27:27 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 2 00:26:53.020 03:27:27 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:53.020 03:27:27 -- host/auth.sh@68 -- # digest=sha512 00:26:53.020 03:27:27 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:26:53.020 03:27:27 -- host/auth.sh@68 -- # keyid=2 00:26:53.020 03:27:27 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:53.020 03:27:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:53.020 03:27:27 -- common/autotest_common.sh@10 -- # set +x 00:26:53.020 03:27:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:53.020 03:27:27 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:53.020 03:27:27 -- nvmf/common.sh@717 -- # local ip 00:26:53.020 03:27:27 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:53.020 03:27:27 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:53.020 03:27:27 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:53.020 03:27:27 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:53.020 03:27:27 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:53.020 03:27:27 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:53.020 03:27:27 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:53.020 03:27:27 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:53.020 03:27:27 -- nvmf/common.sh@731 
-- # echo 10.0.0.1 00:26:53.020 03:27:27 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:53.020 03:27:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:53.020 03:27:27 -- common/autotest_common.sh@10 -- # set +x 00:26:53.279 nvme0n1 00:26:53.279 03:27:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:53.279 03:27:27 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:53.279 03:27:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:53.279 03:27:27 -- common/autotest_common.sh@10 -- # set +x 00:26:53.279 03:27:27 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:53.279 03:27:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:53.279 03:27:27 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:53.279 03:27:27 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:53.279 03:27:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:53.279 03:27:27 -- common/autotest_common.sh@10 -- # set +x 00:26:53.279 03:27:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:53.279 03:27:27 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:53.279 03:27:27 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:26:53.279 03:27:27 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:53.279 03:27:27 -- host/auth.sh@44 -- # digest=sha512 00:26:53.279 03:27:27 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:53.279 03:27:27 -- host/auth.sh@44 -- # keyid=3 00:26:53.279 03:27:27 -- host/auth.sh@45 -- # key=DHHC-1:02:ODQ3ZDUzYjFiYmEwNmEyOGExYjBkMzIyZTE1ZTdiNDAwOGFhODEwOTQyMDNlY2Ri2DqywQ==: 00:26:53.279 03:27:27 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:26:53.279 03:27:27 -- host/auth.sh@48 -- # echo ffdhe3072 00:26:53.279 03:27:27 -- host/auth.sh@49 -- # echo DHHC-1:02:ODQ3ZDUzYjFiYmEwNmEyOGExYjBkMzIyZTE1ZTdiNDAwOGFhODEwOTQyMDNlY2Ri2DqywQ==: 
00:26:53.279 03:27:27 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 3 00:26:53.279 03:27:27 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:53.279 03:27:27 -- host/auth.sh@68 -- # digest=sha512 00:26:53.279 03:27:27 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:26:53.279 03:27:27 -- host/auth.sh@68 -- # keyid=3 00:26:53.279 03:27:27 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:53.279 03:27:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:53.279 03:27:27 -- common/autotest_common.sh@10 -- # set +x 00:26:53.279 03:27:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:53.279 03:27:27 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:53.279 03:27:27 -- nvmf/common.sh@717 -- # local ip 00:26:53.279 03:27:27 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:53.279 03:27:27 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:53.279 03:27:27 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:53.279 03:27:27 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:53.279 03:27:27 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:53.279 03:27:27 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:53.279 03:27:27 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:53.279 03:27:27 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:53.279 03:27:27 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:53.279 03:27:27 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:26:53.280 03:27:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:53.280 03:27:27 -- common/autotest_common.sh@10 -- # set +x 00:26:53.542 nvme0n1 00:26:53.542 03:27:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:53.542 03:27:27 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:53.542 03:27:27 -- 
host/auth.sh@73 -- # jq -r '.[].name' 00:26:53.542 03:27:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:53.542 03:27:27 -- common/autotest_common.sh@10 -- # set +x 00:26:53.542 03:27:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:53.542 03:27:27 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:53.542 03:27:27 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:53.542 03:27:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:53.542 03:27:27 -- common/autotest_common.sh@10 -- # set +x 00:26:53.542 03:27:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:53.542 03:27:28 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:53.542 03:27:28 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:26:53.542 03:27:28 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:53.542 03:27:28 -- host/auth.sh@44 -- # digest=sha512 00:26:53.542 03:27:28 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:53.542 03:27:28 -- host/auth.sh@44 -- # keyid=4 00:26:53.542 03:27:28 -- host/auth.sh@45 -- # key=DHHC-1:03:ODI0M2RhMDM4Y2QxYzRlYzI4NjI4MzVjZDA5OWJlOGFiYTI0OTc3MjlhZmJmNjdhYmM2ZTgzMGY2MmQ2ZDJkMi4LGic=: 00:26:53.542 03:27:28 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:26:53.542 03:27:28 -- host/auth.sh@48 -- # echo ffdhe3072 00:26:53.542 03:27:28 -- host/auth.sh@49 -- # echo DHHC-1:03:ODI0M2RhMDM4Y2QxYzRlYzI4NjI4MzVjZDA5OWJlOGFiYTI0OTc3MjlhZmJmNjdhYmM2ZTgzMGY2MmQ2ZDJkMi4LGic=: 00:26:53.542 03:27:28 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 4 00:26:53.542 03:27:28 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:53.542 03:27:28 -- host/auth.sh@68 -- # digest=sha512 00:26:53.542 03:27:28 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:26:53.542 03:27:28 -- host/auth.sh@68 -- # keyid=4 00:26:53.542 03:27:28 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:53.542 03:27:28 -- common/autotest_common.sh@549 -- 
# xtrace_disable 00:26:53.542 03:27:28 -- common/autotest_common.sh@10 -- # set +x 00:26:53.542 03:27:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:53.542 03:27:28 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:53.542 03:27:28 -- nvmf/common.sh@717 -- # local ip 00:26:53.542 03:27:28 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:53.542 03:27:28 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:53.542 03:27:28 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:53.542 03:27:28 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:53.542 03:27:28 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:53.542 03:27:28 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:53.542 03:27:28 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:53.542 03:27:28 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:53.542 03:27:28 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:53.542 03:27:28 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:53.542 03:27:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:53.542 03:27:28 -- common/autotest_common.sh@10 -- # set +x 00:26:53.801 nvme0n1 00:26:53.801 03:27:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:53.801 03:27:28 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:53.801 03:27:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:53.801 03:27:28 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:53.801 03:27:28 -- common/autotest_common.sh@10 -- # set +x 00:26:53.801 03:27:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:53.801 03:27:28 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:53.801 03:27:28 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:53.801 03:27:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:53.801 03:27:28 -- 
common/autotest_common.sh@10 -- # set +x 00:26:53.801 03:27:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:53.801 03:27:28 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:26:53.801 03:27:28 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:53.801 03:27:28 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:26:53.801 03:27:28 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:53.801 03:27:28 -- host/auth.sh@44 -- # digest=sha512 00:26:53.801 03:27:28 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:53.801 03:27:28 -- host/auth.sh@44 -- # keyid=0 00:26:53.801 03:27:28 -- host/auth.sh@45 -- # key=DHHC-1:00:YzNjNWE1ODdhNmEyYzEwN2Q3MTc3MzAzOWQ1OTkyM2H6n/Cu: 00:26:53.801 03:27:28 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:26:53.801 03:27:28 -- host/auth.sh@48 -- # echo ffdhe4096 00:26:53.801 03:27:28 -- host/auth.sh@49 -- # echo DHHC-1:00:YzNjNWE1ODdhNmEyYzEwN2Q3MTc3MzAzOWQ1OTkyM2H6n/Cu: 00:26:53.801 03:27:28 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 0 00:26:53.801 03:27:28 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:53.801 03:27:28 -- host/auth.sh@68 -- # digest=sha512 00:26:53.801 03:27:28 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:26:53.801 03:27:28 -- host/auth.sh@68 -- # keyid=0 00:26:53.801 03:27:28 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:53.801 03:27:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:53.801 03:27:28 -- common/autotest_common.sh@10 -- # set +x 00:26:53.801 03:27:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:53.801 03:27:28 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:53.801 03:27:28 -- nvmf/common.sh@717 -- # local ip 00:26:53.801 03:27:28 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:53.801 03:27:28 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:53.801 03:27:28 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:53.801 
03:27:28 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:53.801 03:27:28 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:53.801 03:27:28 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:53.801 03:27:28 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:53.801 03:27:28 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:53.801 03:27:28 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:53.801 03:27:28 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:26:53.801 03:27:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:53.801 03:27:28 -- common/autotest_common.sh@10 -- # set +x 00:26:54.369 nvme0n1 00:26:54.369 03:27:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:54.369 03:27:28 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:54.369 03:27:28 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:54.369 03:27:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:54.369 03:27:28 -- common/autotest_common.sh@10 -- # set +x 00:26:54.369 03:27:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:54.369 03:27:28 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:54.369 03:27:28 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:54.369 03:27:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:54.369 03:27:28 -- common/autotest_common.sh@10 -- # set +x 00:26:54.369 03:27:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:54.369 03:27:28 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:54.369 03:27:28 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:26:54.369 03:27:28 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:54.369 03:27:28 -- host/auth.sh@44 -- # digest=sha512 00:26:54.369 03:27:28 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:54.369 03:27:28 -- host/auth.sh@44 -- # keyid=1 
00:26:54.369 03:27:28 -- host/auth.sh@45 -- # key=DHHC-1:00:OGMxMGM0MTFkMWY1NGI5NTc0NDNiODliMThjOWU1N2MzMzViNjJhMTRjNmRjMWFjsMUppw==: 00:26:54.369 03:27:28 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:26:54.369 03:27:28 -- host/auth.sh@48 -- # echo ffdhe4096 00:26:54.369 03:27:28 -- host/auth.sh@49 -- # echo DHHC-1:00:OGMxMGM0MTFkMWY1NGI5NTc0NDNiODliMThjOWU1N2MzMzViNjJhMTRjNmRjMWFjsMUppw==: 00:26:54.369 03:27:28 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 1 00:26:54.369 03:27:28 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:54.369 03:27:28 -- host/auth.sh@68 -- # digest=sha512 00:26:54.369 03:27:28 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:26:54.369 03:27:28 -- host/auth.sh@68 -- # keyid=1 00:26:54.369 03:27:28 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:54.369 03:27:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:54.369 03:27:28 -- common/autotest_common.sh@10 -- # set +x 00:26:54.369 03:27:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:54.369 03:27:28 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:54.369 03:27:28 -- nvmf/common.sh@717 -- # local ip 00:26:54.369 03:27:28 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:54.369 03:27:28 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:54.369 03:27:28 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:54.369 03:27:28 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:54.369 03:27:28 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:54.369 03:27:28 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:54.369 03:27:28 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:54.369 03:27:28 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:54.369 03:27:28 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:54.369 03:27:28 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:26:54.369 03:27:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:54.369 03:27:28 -- common/autotest_common.sh@10 -- # set +x 00:26:54.629 nvme0n1 00:26:54.629 03:27:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:54.629 03:27:28 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:54.629 03:27:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:54.629 03:27:28 -- common/autotest_common.sh@10 -- # set +x 00:26:54.629 03:27:28 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:54.629 03:27:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:54.629 03:27:28 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:54.629 03:27:28 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:54.629 03:27:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:54.629 03:27:28 -- common/autotest_common.sh@10 -- # set +x 00:26:54.629 03:27:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:54.629 03:27:29 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:54.629 03:27:29 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:26:54.629 03:27:29 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:54.629 03:27:29 -- host/auth.sh@44 -- # digest=sha512 00:26:54.629 03:27:29 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:54.629 03:27:29 -- host/auth.sh@44 -- # keyid=2 00:26:54.629 03:27:29 -- host/auth.sh@45 -- # key=DHHC-1:01:Njc1MDkxOTg2Y2YxNzliN2ZlNWI3MmU1NjEyMjQ4OTOIsYpe: 00:26:54.629 03:27:29 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:26:54.629 03:27:29 -- host/auth.sh@48 -- # echo ffdhe4096 00:26:54.629 03:27:29 -- host/auth.sh@49 -- # echo DHHC-1:01:Njc1MDkxOTg2Y2YxNzliN2ZlNWI3MmU1NjEyMjQ4OTOIsYpe: 00:26:54.629 03:27:29 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 2 00:26:54.629 03:27:29 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:54.629 03:27:29 -- 
host/auth.sh@68 -- # digest=sha512 00:26:54.629 03:27:29 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:26:54.629 03:27:29 -- host/auth.sh@68 -- # keyid=2 00:26:54.629 03:27:29 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:54.629 03:27:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:54.629 03:27:29 -- common/autotest_common.sh@10 -- # set +x 00:26:54.629 03:27:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:54.629 03:27:29 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:54.629 03:27:29 -- nvmf/common.sh@717 -- # local ip 00:26:54.629 03:27:29 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:54.629 03:27:29 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:54.629 03:27:29 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:54.629 03:27:29 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:54.629 03:27:29 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:54.630 03:27:29 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:54.630 03:27:29 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:54.630 03:27:29 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:54.630 03:27:29 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:54.630 03:27:29 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:54.630 03:27:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:54.630 03:27:29 -- common/autotest_common.sh@10 -- # set +x 00:26:54.888 nvme0n1 00:26:54.888 03:27:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:54.888 03:27:29 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:54.888 03:27:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:54.888 03:27:29 -- common/autotest_common.sh@10 -- # set +x 00:26:54.888 03:27:29 -- host/auth.sh@73 -- # jq -r '.[].name' 
00:26:54.888 03:27:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:54.888 03:27:29 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:54.888 03:27:29 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:54.888 03:27:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:54.888 03:27:29 -- common/autotest_common.sh@10 -- # set +x 00:26:54.888 03:27:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:54.888 03:27:29 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:54.888 03:27:29 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:26:54.888 03:27:29 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:54.889 03:27:29 -- host/auth.sh@44 -- # digest=sha512 00:26:54.889 03:27:29 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:54.889 03:27:29 -- host/auth.sh@44 -- # keyid=3 00:26:54.889 03:27:29 -- host/auth.sh@45 -- # key=DHHC-1:02:ODQ3ZDUzYjFiYmEwNmEyOGExYjBkMzIyZTE1ZTdiNDAwOGFhODEwOTQyMDNlY2Ri2DqywQ==: 00:26:54.889 03:27:29 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:26:54.889 03:27:29 -- host/auth.sh@48 -- # echo ffdhe4096 00:26:54.889 03:27:29 -- host/auth.sh@49 -- # echo DHHC-1:02:ODQ3ZDUzYjFiYmEwNmEyOGExYjBkMzIyZTE1ZTdiNDAwOGFhODEwOTQyMDNlY2Ri2DqywQ==: 00:26:54.889 03:27:29 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 3 00:26:54.889 03:27:29 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:54.889 03:27:29 -- host/auth.sh@68 -- # digest=sha512 00:26:54.889 03:27:29 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:26:54.889 03:27:29 -- host/auth.sh@68 -- # keyid=3 00:26:54.889 03:27:29 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:54.889 03:27:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:54.889 03:27:29 -- common/autotest_common.sh@10 -- # set +x 00:26:55.149 03:27:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:55.149 03:27:29 -- host/auth.sh@70 -- # get_main_ns_ip 
00:26:55.149 03:27:29 -- nvmf/common.sh@717 -- # local ip 00:26:55.149 03:27:29 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:55.149 03:27:29 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:55.149 03:27:29 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:55.149 03:27:29 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:55.149 03:27:29 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:55.149 03:27:29 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:55.149 03:27:29 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:55.149 03:27:29 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:55.149 03:27:29 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:55.149 03:27:29 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:26:55.149 03:27:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:55.149 03:27:29 -- common/autotest_common.sh@10 -- # set +x 00:26:55.409 nvme0n1 00:26:55.409 03:27:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:55.409 03:27:29 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:55.409 03:27:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:55.409 03:27:29 -- common/autotest_common.sh@10 -- # set +x 00:26:55.409 03:27:29 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:55.409 03:27:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:55.409 03:27:29 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:55.409 03:27:29 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:55.409 03:27:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:55.409 03:27:29 -- common/autotest_common.sh@10 -- # set +x 00:26:55.409 03:27:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:55.409 03:27:29 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:55.409 03:27:29 -- host/auth.sh@110 
-- # nvmet_auth_set_key sha512 ffdhe4096 4 00:26:55.409 03:27:29 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:55.409 03:27:29 -- host/auth.sh@44 -- # digest=sha512 00:26:55.409 03:27:29 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:55.409 03:27:29 -- host/auth.sh@44 -- # keyid=4 00:26:55.409 03:27:29 -- host/auth.sh@45 -- # key=DHHC-1:03:ODI0M2RhMDM4Y2QxYzRlYzI4NjI4MzVjZDA5OWJlOGFiYTI0OTc3MjlhZmJmNjdhYmM2ZTgzMGY2MmQ2ZDJkMi4LGic=: 00:26:55.409 03:27:29 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:26:55.409 03:27:29 -- host/auth.sh@48 -- # echo ffdhe4096 00:26:55.409 03:27:29 -- host/auth.sh@49 -- # echo DHHC-1:03:ODI0M2RhMDM4Y2QxYzRlYzI4NjI4MzVjZDA5OWJlOGFiYTI0OTc3MjlhZmJmNjdhYmM2ZTgzMGY2MmQ2ZDJkMi4LGic=: 00:26:55.409 03:27:29 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 4 00:26:55.409 03:27:29 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:55.409 03:27:29 -- host/auth.sh@68 -- # digest=sha512 00:26:55.409 03:27:29 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:26:55.409 03:27:29 -- host/auth.sh@68 -- # keyid=4 00:26:55.409 03:27:29 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:55.409 03:27:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:55.409 03:27:29 -- common/autotest_common.sh@10 -- # set +x 00:26:55.409 03:27:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:55.409 03:27:29 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:55.409 03:27:29 -- nvmf/common.sh@717 -- # local ip 00:26:55.409 03:27:29 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:55.409 03:27:29 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:55.409 03:27:29 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:55.409 03:27:29 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:55.409 03:27:29 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:55.409 03:27:29 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 
00:26:55.409 03:27:29 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:55.409 03:27:29 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:55.409 03:27:29 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:55.409 03:27:29 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:55.409 03:27:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:55.409 03:27:29 -- common/autotest_common.sh@10 -- # set +x 00:26:55.668 nvme0n1 00:26:55.668 03:27:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:55.668 03:27:30 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:55.668 03:27:30 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:55.668 03:27:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:55.668 03:27:30 -- common/autotest_common.sh@10 -- # set +x 00:26:55.668 03:27:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:55.668 03:27:30 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:55.668 03:27:30 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:55.668 03:27:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:55.668 03:27:30 -- common/autotest_common.sh@10 -- # set +x 00:26:55.668 03:27:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:55.668 03:27:30 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:26:55.668 03:27:30 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:55.668 03:27:30 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:26:55.668 03:27:30 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:55.668 03:27:30 -- host/auth.sh@44 -- # digest=sha512 00:26:55.668 03:27:30 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:55.668 03:27:30 -- host/auth.sh@44 -- # keyid=0 00:26:55.668 03:27:30 -- host/auth.sh@45 -- # key=DHHC-1:00:YzNjNWE1ODdhNmEyYzEwN2Q3MTc3MzAzOWQ1OTkyM2H6n/Cu: 00:26:55.668 03:27:30 -- 
host/auth.sh@47 -- # echo 'hmac(sha512)' 00:26:55.668 03:27:30 -- host/auth.sh@48 -- # echo ffdhe6144 00:26:55.668 03:27:30 -- host/auth.sh@49 -- # echo DHHC-1:00:YzNjNWE1ODdhNmEyYzEwN2Q3MTc3MzAzOWQ1OTkyM2H6n/Cu: 00:26:55.668 03:27:30 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 0 00:26:55.668 03:27:30 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:55.668 03:27:30 -- host/auth.sh@68 -- # digest=sha512 00:26:55.668 03:27:30 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:26:55.668 03:27:30 -- host/auth.sh@68 -- # keyid=0 00:26:55.668 03:27:30 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:55.668 03:27:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:55.668 03:27:30 -- common/autotest_common.sh@10 -- # set +x 00:26:55.668 03:27:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:55.668 03:27:30 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:55.668 03:27:30 -- nvmf/common.sh@717 -- # local ip 00:26:55.668 03:27:30 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:55.668 03:27:30 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:55.668 03:27:30 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:55.668 03:27:30 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:55.668 03:27:30 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:55.668 03:27:30 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:55.668 03:27:30 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:55.668 03:27:30 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:55.668 03:27:30 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:55.668 03:27:30 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:26:55.668 03:27:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:55.668 03:27:30 -- common/autotest_common.sh@10 
-- # set +x 00:26:56.235 nvme0n1 00:26:56.235 03:27:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:56.235 03:27:30 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:56.235 03:27:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:56.235 03:27:30 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:56.235 03:27:30 -- common/autotest_common.sh@10 -- # set +x 00:26:56.235 03:27:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:56.235 03:27:30 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:56.235 03:27:30 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:56.235 03:27:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:56.235 03:27:30 -- common/autotest_common.sh@10 -- # set +x 00:26:56.235 03:27:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:56.235 03:27:30 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:56.235 03:27:30 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:26:56.235 03:27:30 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:56.235 03:27:30 -- host/auth.sh@44 -- # digest=sha512 00:26:56.235 03:27:30 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:56.235 03:27:30 -- host/auth.sh@44 -- # keyid=1 00:26:56.235 03:27:30 -- host/auth.sh@45 -- # key=DHHC-1:00:OGMxMGM0MTFkMWY1NGI5NTc0NDNiODliMThjOWU1N2MzMzViNjJhMTRjNmRjMWFjsMUppw==: 00:26:56.235 03:27:30 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:26:56.235 03:27:30 -- host/auth.sh@48 -- # echo ffdhe6144 00:26:56.235 03:27:30 -- host/auth.sh@49 -- # echo DHHC-1:00:OGMxMGM0MTFkMWY1NGI5NTc0NDNiODliMThjOWU1N2MzMzViNjJhMTRjNmRjMWFjsMUppw==: 00:26:56.235 03:27:30 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 1 00:26:56.235 03:27:30 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:56.235 03:27:30 -- host/auth.sh@68 -- # digest=sha512 00:26:56.235 03:27:30 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:26:56.235 03:27:30 -- host/auth.sh@68 -- # keyid=1 00:26:56.235 
03:27:30 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:56.235 03:27:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:56.235 03:27:30 -- common/autotest_common.sh@10 -- # set +x 00:26:56.235 03:27:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:56.235 03:27:30 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:56.235 03:27:30 -- nvmf/common.sh@717 -- # local ip 00:26:56.235 03:27:30 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:56.235 03:27:30 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:56.235 03:27:30 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:56.235 03:27:30 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:56.235 03:27:30 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:56.235 03:27:30 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:56.235 03:27:30 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:56.235 03:27:30 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:56.235 03:27:30 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:56.235 03:27:30 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:26:56.235 03:27:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:56.235 03:27:30 -- common/autotest_common.sh@10 -- # set +x 00:26:56.803 nvme0n1 00:26:56.803 03:27:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:56.804 03:27:31 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:56.804 03:27:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:56.804 03:27:31 -- common/autotest_common.sh@10 -- # set +x 00:26:56.804 03:27:31 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:56.804 03:27:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:57.063 03:27:31 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.063 03:27:31 -- 
host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:57.063 03:27:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:57.063 03:27:31 -- common/autotest_common.sh@10 -- # set +x 00:26:57.063 03:27:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:57.063 03:27:31 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:57.063 03:27:31 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:26:57.063 03:27:31 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:57.063 03:27:31 -- host/auth.sh@44 -- # digest=sha512 00:26:57.063 03:27:31 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:57.063 03:27:31 -- host/auth.sh@44 -- # keyid=2 00:26:57.063 03:27:31 -- host/auth.sh@45 -- # key=DHHC-1:01:Njc1MDkxOTg2Y2YxNzliN2ZlNWI3MmU1NjEyMjQ4OTOIsYpe: 00:26:57.063 03:27:31 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:26:57.063 03:27:31 -- host/auth.sh@48 -- # echo ffdhe6144 00:26:57.063 03:27:31 -- host/auth.sh@49 -- # echo DHHC-1:01:Njc1MDkxOTg2Y2YxNzliN2ZlNWI3MmU1NjEyMjQ4OTOIsYpe: 00:26:57.063 03:27:31 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 2 00:26:57.063 03:27:31 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:57.063 03:27:31 -- host/auth.sh@68 -- # digest=sha512 00:26:57.063 03:27:31 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:26:57.063 03:27:31 -- host/auth.sh@68 -- # keyid=2 00:26:57.063 03:27:31 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:57.063 03:27:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:57.063 03:27:31 -- common/autotest_common.sh@10 -- # set +x 00:26:57.063 03:27:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:57.063 03:27:31 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:57.063 03:27:31 -- nvmf/common.sh@717 -- # local ip 00:26:57.063 03:27:31 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:57.063 03:27:31 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:57.063 03:27:31 
-- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.063 03:27:31 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.063 03:27:31 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:57.063 03:27:31 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.063 03:27:31 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:57.063 03:27:31 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:57.063 03:27:31 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:57.063 03:27:31 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:57.063 03:27:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:57.063 03:27:31 -- common/autotest_common.sh@10 -- # set +x 00:26:57.632 nvme0n1 00:26:57.632 03:27:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:57.632 03:27:31 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:57.632 03:27:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:57.632 03:27:31 -- common/autotest_common.sh@10 -- # set +x 00:26:57.632 03:27:31 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:57.632 03:27:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:57.632 03:27:31 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.632 03:27:31 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:57.632 03:27:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:57.632 03:27:31 -- common/autotest_common.sh@10 -- # set +x 00:26:57.632 03:27:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:57.632 03:27:31 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:57.632 03:27:31 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:26:57.632 03:27:31 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:57.632 03:27:31 -- host/auth.sh@44 -- # digest=sha512 00:26:57.632 03:27:31 -- 
host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:57.632 03:27:31 -- host/auth.sh@44 -- # keyid=3 00:26:57.632 03:27:31 -- host/auth.sh@45 -- # key=DHHC-1:02:ODQ3ZDUzYjFiYmEwNmEyOGExYjBkMzIyZTE1ZTdiNDAwOGFhODEwOTQyMDNlY2Ri2DqywQ==: 00:26:57.632 03:27:31 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:26:57.632 03:27:31 -- host/auth.sh@48 -- # echo ffdhe6144 00:26:57.632 03:27:31 -- host/auth.sh@49 -- # echo DHHC-1:02:ODQ3ZDUzYjFiYmEwNmEyOGExYjBkMzIyZTE1ZTdiNDAwOGFhODEwOTQyMDNlY2Ri2DqywQ==: 00:26:57.632 03:27:31 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 3 00:26:57.632 03:27:31 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:57.632 03:27:31 -- host/auth.sh@68 -- # digest=sha512 00:26:57.632 03:27:31 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:26:57.632 03:27:31 -- host/auth.sh@68 -- # keyid=3 00:26:57.632 03:27:31 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:57.632 03:27:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:57.632 03:27:31 -- common/autotest_common.sh@10 -- # set +x 00:26:57.632 03:27:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:57.632 03:27:31 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:57.632 03:27:31 -- nvmf/common.sh@717 -- # local ip 00:26:57.632 03:27:31 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:57.632 03:27:31 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:57.632 03:27:31 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.632 03:27:31 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.632 03:27:31 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:57.632 03:27:31 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.632 03:27:31 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:57.632 03:27:31 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:57.632 03:27:31 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:57.632 03:27:31 -- host/auth.sh@70 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:26:57.632 03:27:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:57.632 03:27:31 -- common/autotest_common.sh@10 -- # set +x 00:26:58.201 nvme0n1 00:26:58.201 03:27:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:58.201 03:27:32 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:58.201 03:27:32 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:58.201 03:27:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:58.201 03:27:32 -- common/autotest_common.sh@10 -- # set +x 00:26:58.201 03:27:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:58.201 03:27:32 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:58.201 03:27:32 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:58.201 03:27:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:58.201 03:27:32 -- common/autotest_common.sh@10 -- # set +x 00:26:58.201 03:27:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:58.201 03:27:32 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:58.201 03:27:32 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:26:58.201 03:27:32 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:58.201 03:27:32 -- host/auth.sh@44 -- # digest=sha512 00:26:58.201 03:27:32 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:58.201 03:27:32 -- host/auth.sh@44 -- # keyid=4 00:26:58.201 03:27:32 -- host/auth.sh@45 -- # key=DHHC-1:03:ODI0M2RhMDM4Y2QxYzRlYzI4NjI4MzVjZDA5OWJlOGFiYTI0OTc3MjlhZmJmNjdhYmM2ZTgzMGY2MmQ2ZDJkMi4LGic=: 00:26:58.201 03:27:32 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:26:58.201 03:27:32 -- host/auth.sh@48 -- # echo ffdhe6144 00:26:58.201 03:27:32 -- host/auth.sh@49 -- # echo DHHC-1:03:ODI0M2RhMDM4Y2QxYzRlYzI4NjI4MzVjZDA5OWJlOGFiYTI0OTc3MjlhZmJmNjdhYmM2ZTgzMGY2MmQ2ZDJkMi4LGic=: 00:26:58.201 03:27:32 -- 
host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 4 00:26:58.201 03:27:32 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:58.201 03:27:32 -- host/auth.sh@68 -- # digest=sha512 00:26:58.201 03:27:32 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:26:58.201 03:27:32 -- host/auth.sh@68 -- # keyid=4 00:26:58.201 03:27:32 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:58.201 03:27:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:58.201 03:27:32 -- common/autotest_common.sh@10 -- # set +x 00:26:58.201 03:27:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:58.201 03:27:32 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:58.201 03:27:32 -- nvmf/common.sh@717 -- # local ip 00:26:58.201 03:27:32 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:58.201 03:27:32 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:58.201 03:27:32 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:58.201 03:27:32 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:58.201 03:27:32 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:58.201 03:27:32 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:58.201 03:27:32 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:58.201 03:27:32 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:58.201 03:27:32 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:58.201 03:27:32 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:58.201 03:27:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:58.201 03:27:32 -- common/autotest_common.sh@10 -- # set +x 00:26:58.770 nvme0n1 00:26:58.770 03:27:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:58.770 03:27:33 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:58.770 03:27:33 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:26:58.770 03:27:33 -- common/autotest_common.sh@10 -- # set +x 00:26:58.770 03:27:33 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:58.770 03:27:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:58.770 03:27:33 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:58.770 03:27:33 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:58.770 03:27:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:58.770 03:27:33 -- common/autotest_common.sh@10 -- # set +x 00:26:58.770 03:27:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:58.770 03:27:33 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:26:58.770 03:27:33 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:58.770 03:27:33 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:26:58.770 03:27:33 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:58.770 03:27:33 -- host/auth.sh@44 -- # digest=sha512 00:26:58.770 03:27:33 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:58.770 03:27:33 -- host/auth.sh@44 -- # keyid=0 00:26:58.770 03:27:33 -- host/auth.sh@45 -- # key=DHHC-1:00:YzNjNWE1ODdhNmEyYzEwN2Q3MTc3MzAzOWQ1OTkyM2H6n/Cu: 00:26:58.770 03:27:33 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:26:58.770 03:27:33 -- host/auth.sh@48 -- # echo ffdhe8192 00:26:58.770 03:27:33 -- host/auth.sh@49 -- # echo DHHC-1:00:YzNjNWE1ODdhNmEyYzEwN2Q3MTc3MzAzOWQ1OTkyM2H6n/Cu: 00:26:58.770 03:27:33 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 0 00:26:58.770 03:27:33 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:58.770 03:27:33 -- host/auth.sh@68 -- # digest=sha512 00:26:58.770 03:27:33 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:26:58.770 03:27:33 -- host/auth.sh@68 -- # keyid=0 00:26:58.770 03:27:33 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:58.770 03:27:33 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:26:58.770 03:27:33 -- common/autotest_common.sh@10 -- # set +x 00:26:58.770 03:27:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:58.770 03:27:33 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:58.770 03:27:33 -- nvmf/common.sh@717 -- # local ip 00:26:58.770 03:27:33 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:58.770 03:27:33 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:58.770 03:27:33 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:58.770 03:27:33 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:58.770 03:27:33 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:58.770 03:27:33 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:58.770 03:27:33 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:58.770 03:27:33 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:58.770 03:27:33 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:58.770 03:27:33 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:26:58.770 03:27:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:58.770 03:27:33 -- common/autotest_common.sh@10 -- # set +x 00:26:59.725 nvme0n1 00:26:59.725 03:27:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:59.725 03:27:34 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:59.725 03:27:34 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:59.725 03:27:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:59.725 03:27:34 -- common/autotest_common.sh@10 -- # set +x 00:26:59.725 03:27:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:59.725 03:27:34 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:59.725 03:27:34 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:59.725 03:27:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:59.725 03:27:34 -- 
common/autotest_common.sh@10 -- # set +x 00:26:59.725 03:27:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:59.725 03:27:34 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:59.725 03:27:34 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:26:59.725 03:27:34 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:59.725 03:27:34 -- host/auth.sh@44 -- # digest=sha512 00:26:59.725 03:27:34 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:59.725 03:27:34 -- host/auth.sh@44 -- # keyid=1 00:26:59.725 03:27:34 -- host/auth.sh@45 -- # key=DHHC-1:00:OGMxMGM0MTFkMWY1NGI5NTc0NDNiODliMThjOWU1N2MzMzViNjJhMTRjNmRjMWFjsMUppw==: 00:26:59.725 03:27:34 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:26:59.725 03:27:34 -- host/auth.sh@48 -- # echo ffdhe8192 00:26:59.725 03:27:34 -- host/auth.sh@49 -- # echo DHHC-1:00:OGMxMGM0MTFkMWY1NGI5NTc0NDNiODliMThjOWU1N2MzMzViNjJhMTRjNmRjMWFjsMUppw==: 00:26:59.725 03:27:34 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 1 00:26:59.725 03:27:34 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:59.725 03:27:34 -- host/auth.sh@68 -- # digest=sha512 00:26:59.725 03:27:34 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:26:59.725 03:27:34 -- host/auth.sh@68 -- # keyid=1 00:26:59.725 03:27:34 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:59.725 03:27:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:59.725 03:27:34 -- common/autotest_common.sh@10 -- # set +x 00:26:59.725 03:27:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:59.725 03:27:34 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:59.725 03:27:34 -- nvmf/common.sh@717 -- # local ip 00:26:59.725 03:27:34 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:59.725 03:27:34 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:59.725 03:27:34 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:59.725 03:27:34 -- nvmf/common.sh@721 -- 
# ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:59.725 03:27:34 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:59.725 03:27:34 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:59.725 03:27:34 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:59.725 03:27:34 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:59.725 03:27:34 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:59.725 03:27:34 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:26:59.725 03:27:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:59.725 03:27:34 -- common/autotest_common.sh@10 -- # set +x 00:27:00.659 nvme0n1 00:27:00.659 03:27:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:00.659 03:27:35 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:00.659 03:27:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:00.659 03:27:35 -- common/autotest_common.sh@10 -- # set +x 00:27:00.659 03:27:35 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:00.659 03:27:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:00.919 03:27:35 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:00.919 03:27:35 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:00.919 03:27:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:00.919 03:27:35 -- common/autotest_common.sh@10 -- # set +x 00:27:00.919 03:27:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:00.919 03:27:35 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:00.919 03:27:35 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:27:00.919 03:27:35 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:00.919 03:27:35 -- host/auth.sh@44 -- # digest=sha512 00:27:00.919 03:27:35 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:00.919 03:27:35 -- host/auth.sh@44 -- # keyid=2 00:27:00.919 03:27:35 -- 
host/auth.sh@45 -- # key=DHHC-1:01:Njc1MDkxOTg2Y2YxNzliN2ZlNWI3MmU1NjEyMjQ4OTOIsYpe: 00:27:00.919 03:27:35 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:27:00.919 03:27:35 -- host/auth.sh@48 -- # echo ffdhe8192 00:27:00.919 03:27:35 -- host/auth.sh@49 -- # echo DHHC-1:01:Njc1MDkxOTg2Y2YxNzliN2ZlNWI3MmU1NjEyMjQ4OTOIsYpe: 00:27:00.919 03:27:35 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 2 00:27:00.919 03:27:35 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:00.919 03:27:35 -- host/auth.sh@68 -- # digest=sha512 00:27:00.919 03:27:35 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:27:00.919 03:27:35 -- host/auth.sh@68 -- # keyid=2 00:27:00.919 03:27:35 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:00.919 03:27:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:00.919 03:27:35 -- common/autotest_common.sh@10 -- # set +x 00:27:00.919 03:27:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:00.919 03:27:35 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:00.919 03:27:35 -- nvmf/common.sh@717 -- # local ip 00:27:00.919 03:27:35 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:00.919 03:27:35 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:00.919 03:27:35 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:00.919 03:27:35 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:00.919 03:27:35 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:00.919 03:27:35 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:00.919 03:27:35 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:00.919 03:27:35 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:00.919 03:27:35 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:00.919 03:27:35 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:00.919 
03:27:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:00.919 03:27:35 -- common/autotest_common.sh@10 -- # set +x 00:27:01.857 nvme0n1 00:27:01.857 03:27:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:01.857 03:27:36 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:01.857 03:27:36 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:01.857 03:27:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:01.857 03:27:36 -- common/autotest_common.sh@10 -- # set +x 00:27:01.857 03:27:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:01.857 03:27:36 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:01.857 03:27:36 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:01.857 03:27:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:01.857 03:27:36 -- common/autotest_common.sh@10 -- # set +x 00:27:01.857 03:27:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:01.857 03:27:36 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:01.857 03:27:36 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:27:01.857 03:27:36 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:01.857 03:27:36 -- host/auth.sh@44 -- # digest=sha512 00:27:01.857 03:27:36 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:01.857 03:27:36 -- host/auth.sh@44 -- # keyid=3 00:27:01.857 03:27:36 -- host/auth.sh@45 -- # key=DHHC-1:02:ODQ3ZDUzYjFiYmEwNmEyOGExYjBkMzIyZTE1ZTdiNDAwOGFhODEwOTQyMDNlY2Ri2DqywQ==: 00:27:01.857 03:27:36 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:27:01.857 03:27:36 -- host/auth.sh@48 -- # echo ffdhe8192 00:27:01.857 03:27:36 -- host/auth.sh@49 -- # echo DHHC-1:02:ODQ3ZDUzYjFiYmEwNmEyOGExYjBkMzIyZTE1ZTdiNDAwOGFhODEwOTQyMDNlY2Ri2DqywQ==: 00:27:01.857 03:27:36 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 3 00:27:01.857 03:27:36 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:01.857 03:27:36 -- host/auth.sh@68 -- # digest=sha512 00:27:01.857 
03:27:36 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:27:01.857 03:27:36 -- host/auth.sh@68 -- # keyid=3 00:27:01.857 03:27:36 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:01.857 03:27:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:01.857 03:27:36 -- common/autotest_common.sh@10 -- # set +x 00:27:01.857 03:27:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:01.857 03:27:36 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:01.857 03:27:36 -- nvmf/common.sh@717 -- # local ip 00:27:01.857 03:27:36 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:01.857 03:27:36 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:01.857 03:27:36 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:01.857 03:27:36 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:01.857 03:27:36 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:01.857 03:27:36 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:01.857 03:27:36 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:01.857 03:27:36 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:01.857 03:27:36 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:01.857 03:27:36 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:27:01.857 03:27:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:01.857 03:27:36 -- common/autotest_common.sh@10 -- # set +x 00:27:02.796 nvme0n1 00:27:02.796 03:27:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:02.796 03:27:37 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:02.796 03:27:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:02.796 03:27:37 -- common/autotest_common.sh@10 -- # set +x 00:27:02.796 03:27:37 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:02.796 03:27:37 -- common/autotest_common.sh@577 
-- # [[ 0 == 0 ]] 00:27:02.796 03:27:37 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:02.796 03:27:37 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:02.796 03:27:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:02.796 03:27:37 -- common/autotest_common.sh@10 -- # set +x 00:27:02.796 03:27:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:02.796 03:27:37 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:02.796 03:27:37 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:27:02.796 03:27:37 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:02.796 03:27:37 -- host/auth.sh@44 -- # digest=sha512 00:27:02.796 03:27:37 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:02.796 03:27:37 -- host/auth.sh@44 -- # keyid=4 00:27:02.796 03:27:37 -- host/auth.sh@45 -- # key=DHHC-1:03:ODI0M2RhMDM4Y2QxYzRlYzI4NjI4MzVjZDA5OWJlOGFiYTI0OTc3MjlhZmJmNjdhYmM2ZTgzMGY2MmQ2ZDJkMi4LGic=: 00:27:02.796 03:27:37 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:27:02.796 03:27:37 -- host/auth.sh@48 -- # echo ffdhe8192 00:27:02.796 03:27:37 -- host/auth.sh@49 -- # echo DHHC-1:03:ODI0M2RhMDM4Y2QxYzRlYzI4NjI4MzVjZDA5OWJlOGFiYTI0OTc3MjlhZmJmNjdhYmM2ZTgzMGY2MmQ2ZDJkMi4LGic=: 00:27:02.796 03:27:37 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 4 00:27:02.796 03:27:37 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:02.796 03:27:37 -- host/auth.sh@68 -- # digest=sha512 00:27:02.796 03:27:37 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:27:02.796 03:27:37 -- host/auth.sh@68 -- # keyid=4 00:27:02.796 03:27:37 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:02.796 03:27:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:02.796 03:27:37 -- common/autotest_common.sh@10 -- # set +x 00:27:02.796 03:27:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:02.796 03:27:37 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:02.796 
03:27:37 -- nvmf/common.sh@717 -- # local ip 00:27:02.797 03:27:37 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:02.797 03:27:37 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:02.797 03:27:37 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:02.797 03:27:37 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:02.797 03:27:37 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:02.797 03:27:37 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:02.797 03:27:37 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:02.797 03:27:37 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:02.797 03:27:37 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:02.797 03:27:37 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:02.797 03:27:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:02.797 03:27:37 -- common/autotest_common.sh@10 -- # set +x 00:27:03.733 nvme0n1 00:27:03.733 03:27:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:03.733 03:27:38 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:03.733 03:27:38 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:03.733 03:27:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:03.733 03:27:38 -- common/autotest_common.sh@10 -- # set +x 00:27:03.733 03:27:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:03.991 03:27:38 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:03.991 03:27:38 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:03.991 03:27:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:03.991 03:27:38 -- common/autotest_common.sh@10 -- # set +x 00:27:03.991 03:27:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:03.991 03:27:38 -- host/auth.sh@117 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:03.991 03:27:38 -- host/auth.sh@42 -- 
# local digest dhgroup keyid key 00:27:03.991 03:27:38 -- host/auth.sh@44 -- # digest=sha256 00:27:03.991 03:27:38 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:03.991 03:27:38 -- host/auth.sh@44 -- # keyid=1 00:27:03.991 03:27:38 -- host/auth.sh@45 -- # key=DHHC-1:00:OGMxMGM0MTFkMWY1NGI5NTc0NDNiODliMThjOWU1N2MzMzViNjJhMTRjNmRjMWFjsMUppw==: 00:27:03.991 03:27:38 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:03.991 03:27:38 -- host/auth.sh@48 -- # echo ffdhe2048 00:27:03.991 03:27:38 -- host/auth.sh@49 -- # echo DHHC-1:00:OGMxMGM0MTFkMWY1NGI5NTc0NDNiODliMThjOWU1N2MzMzViNjJhMTRjNmRjMWFjsMUppw==: 00:27:03.991 03:27:38 -- host/auth.sh@118 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:03.991 03:27:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:03.991 03:27:38 -- common/autotest_common.sh@10 -- # set +x 00:27:03.991 03:27:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:03.991 03:27:38 -- host/auth.sh@119 -- # get_main_ns_ip 00:27:03.991 03:27:38 -- nvmf/common.sh@717 -- # local ip 00:27:03.991 03:27:38 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:03.991 03:27:38 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:03.991 03:27:38 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:03.991 03:27:38 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:03.991 03:27:38 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:03.991 03:27:38 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:03.991 03:27:38 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:03.991 03:27:38 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:03.991 03:27:38 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:03.991 03:27:38 -- host/auth.sh@119 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:03.991 03:27:38 -- common/autotest_common.sh@638 -- # local es=0 00:27:03.991 
03:27:38 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:03.991 03:27:38 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:27:03.991 03:27:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:03.991 03:27:38 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:27:03.991 03:27:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:03.991 03:27:38 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:03.991 03:27:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:03.992 03:27:38 -- common/autotest_common.sh@10 -- # set +x 00:27:03.992 request: 00:27:03.992 { 00:27:03.992 "name": "nvme0", 00:27:03.992 "trtype": "tcp", 00:27:03.992 "traddr": "10.0.0.1", 00:27:03.992 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:03.992 "adrfam": "ipv4", 00:27:03.992 "trsvcid": "4420", 00:27:03.992 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:03.992 "method": "bdev_nvme_attach_controller", 00:27:03.992 "req_id": 1 00:27:03.992 } 00:27:03.992 Got JSON-RPC error response 00:27:03.992 response: 00:27:03.992 { 00:27:03.992 "code": -32602, 00:27:03.992 "message": "Invalid parameters" 00:27:03.992 } 00:27:03.992 03:27:38 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:27:03.992 03:27:38 -- common/autotest_common.sh@641 -- # es=1 00:27:03.992 03:27:38 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:27:03.992 03:27:38 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:27:03.992 03:27:38 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:27:03.992 03:27:38 -- host/auth.sh@121 -- # rpc_cmd bdev_nvme_get_controllers 00:27:03.992 03:27:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:03.992 03:27:38 -- host/auth.sh@121 -- # jq 
length 00:27:03.992 03:27:38 -- common/autotest_common.sh@10 -- # set +x 00:27:03.992 03:27:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:03.992 03:27:38 -- host/auth.sh@121 -- # (( 0 == 0 )) 00:27:03.992 03:27:38 -- host/auth.sh@124 -- # get_main_ns_ip 00:27:03.992 03:27:38 -- nvmf/common.sh@717 -- # local ip 00:27:03.992 03:27:38 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:03.992 03:27:38 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:03.992 03:27:38 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:03.992 03:27:38 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:03.992 03:27:38 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:03.992 03:27:38 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:03.992 03:27:38 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:03.992 03:27:38 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:03.992 03:27:38 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:03.992 03:27:38 -- host/auth.sh@124 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:03.992 03:27:38 -- common/autotest_common.sh@638 -- # local es=0 00:27:03.992 03:27:38 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:03.992 03:27:38 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:27:03.992 03:27:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:03.992 03:27:38 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:27:03.992 03:27:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:03.992 03:27:38 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:03.992 03:27:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:03.992 03:27:38 -- common/autotest_common.sh@10 -- # set +x 00:27:03.992 request: 00:27:03.992 { 00:27:03.992 "name": "nvme0", 00:27:03.992 "trtype": "tcp", 00:27:03.992 "traddr": "10.0.0.1", 00:27:03.992 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:03.992 "adrfam": "ipv4", 00:27:03.992 "trsvcid": "4420", 00:27:03.992 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:03.992 "dhchap_key": "key2", 00:27:03.992 "method": "bdev_nvme_attach_controller", 00:27:03.992 "req_id": 1 00:27:03.992 } 00:27:03.992 Got JSON-RPC error response 00:27:03.992 response: 00:27:03.992 { 00:27:03.992 "code": -32602, 00:27:03.992 "message": "Invalid parameters" 00:27:03.992 } 00:27:03.992 03:27:38 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:27:03.992 03:27:38 -- common/autotest_common.sh@641 -- # es=1 00:27:03.992 03:27:38 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:27:03.992 03:27:38 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:27:03.992 03:27:38 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:27:03.992 03:27:38 -- host/auth.sh@127 -- # rpc_cmd bdev_nvme_get_controllers 00:27:03.992 03:27:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:03.992 03:27:38 -- host/auth.sh@127 -- # jq length 00:27:03.992 03:27:38 -- common/autotest_common.sh@10 -- # set +x 00:27:03.992 03:27:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:04.252 03:27:38 -- host/auth.sh@127 -- # (( 0 == 0 )) 00:27:04.252 03:27:38 -- host/auth.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:27:04.252 03:27:38 -- host/auth.sh@130 -- # cleanup 00:27:04.252 03:27:38 -- host/auth.sh@24 -- # nvmftestfini 00:27:04.252 03:27:38 -- nvmf/common.sh@477 -- # nvmfcleanup 00:27:04.252 03:27:38 -- nvmf/common.sh@117 -- # sync 00:27:04.252 03:27:38 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:04.252 03:27:38 -- nvmf/common.sh@120 -- # set +e 00:27:04.252 
03:27:38 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:04.252 03:27:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:04.252 rmmod nvme_tcp 00:27:04.252 rmmod nvme_fabrics 00:27:04.252 03:27:38 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:04.252 03:27:38 -- nvmf/common.sh@124 -- # set -e 00:27:04.252 03:27:38 -- nvmf/common.sh@125 -- # return 0 00:27:04.252 03:27:38 -- nvmf/common.sh@478 -- # '[' -n 1603055 ']' 00:27:04.252 03:27:38 -- nvmf/common.sh@479 -- # killprocess 1603055 00:27:04.252 03:27:38 -- common/autotest_common.sh@936 -- # '[' -z 1603055 ']' 00:27:04.252 03:27:38 -- common/autotest_common.sh@940 -- # kill -0 1603055 00:27:04.252 03:27:38 -- common/autotest_common.sh@941 -- # uname 00:27:04.252 03:27:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:04.252 03:27:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1603055 00:27:04.252 03:27:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:04.252 03:27:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:04.252 03:27:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1603055' 00:27:04.252 killing process with pid 1603055 00:27:04.252 03:27:38 -- common/autotest_common.sh@955 -- # kill 1603055 00:27:04.252 03:27:38 -- common/autotest_common.sh@960 -- # wait 1603055 00:27:04.511 03:27:38 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:27:04.511 03:27:38 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:27:04.511 03:27:38 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:27:04.511 03:27:38 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:04.511 03:27:38 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:04.511 03:27:38 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:04.511 03:27:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:04.511 03:27:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
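The `get_main_ns_ip` helper is traced repeatedly above (nvmf/common.sh lines 717-731): it maps the active transport to the name of the environment variable that holds the test IP, then expands that variable indirectly. A minimal reconstruction from the xtrace — the `TEST_TRANSPORT` variable name is an assumption, since xtrace only shows the expanded value `tcp`:

```shell
get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
        ["rdma"]=NVMF_FIRST_TARGET_IP
        ["tcp"]=NVMF_INITIATOR_IP
    )

    [[ -z $TEST_TRANSPORT ]] && return 1                # no transport selected
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}                # e.g. NVMF_INITIATOR_IP
    [[ -z ${!ip} ]] && return 1                         # variable must be set
    echo "${!ip}"                                       # e.g. 10.0.0.1
}
```

This explains why every `bdev_nvme_attach_controller` call above targets `-a 10.0.0.1`: with `tcp` selected, the helper resolves `NVMF_INITIATOR_IP` indirectly and echoes its value.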
00:27:06.432 03:27:40 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:06.432 03:27:40 -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:06.432 03:27:40 -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:06.432 03:27:40 -- host/auth.sh@27 -- # clean_kernel_target 00:27:06.432 03:27:40 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:27:06.432 03:27:40 -- nvmf/common.sh@675 -- # echo 0 00:27:06.432 03:27:40 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:06.432 03:27:40 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:06.432 03:27:40 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:06.432 03:27:40 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:06.432 03:27:40 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:27:06.432 03:27:40 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:27:06.432 03:27:40 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:07.807 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:07.807 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:07.807 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:07.807 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:07.807 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:07.807 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:27:07.807 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:07.807 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:07.807 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:07.807 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:07.807 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:07.807 0000:80:04.4 
(8086 0e24): ioatdma -> vfio-pci 00:27:07.807 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:07.807 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:27:07.807 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:07.807 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:08.744 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:27:08.744 03:27:43 -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.6Zf /tmp/spdk.key-null.dT4 /tmp/spdk.key-sha256.Yej /tmp/spdk.key-sha384.zpA /tmp/spdk.key-sha512.ajr /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:27:08.744 03:27:43 -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:10.120 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:27:10.120 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:27:10.120 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:27:10.120 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:27:10.120 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:27:10.120 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:27:10.120 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:27:10.120 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:27:10.120 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:27:10.120 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:27:10.120 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:27:10.120 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:27:10.120 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:27:10.120 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:27:10.120 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:27:10.120 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:27:10.120 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:27:10.120 00:27:10.120 real 0m49.558s 00:27:10.120 user 0m47.158s 
00:27:10.120 sys 0m5.597s 00:27:10.120 03:27:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:10.120 03:27:44 -- common/autotest_common.sh@10 -- # set +x 00:27:10.120 ************************************ 00:27:10.120 END TEST nvmf_auth 00:27:10.120 ************************************ 00:27:10.120 03:27:44 -- nvmf/nvmf.sh@104 -- # [[ tcp == \t\c\p ]] 00:27:10.120 03:27:44 -- nvmf/nvmf.sh@105 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:10.120 03:27:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:10.120 03:27:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:10.120 03:27:44 -- common/autotest_common.sh@10 -- # set +x 00:27:10.120 ************************************ 00:27:10.120 START TEST nvmf_digest 00:27:10.120 ************************************ 00:27:10.120 03:27:44 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:10.379 * Looking for test storage... 
00:27:10.379 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:10.379 03:27:44 -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:10.379 03:27:44 -- nvmf/common.sh@7 -- # uname -s 00:27:10.379 03:27:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:10.379 03:27:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:10.379 03:27:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:10.379 03:27:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:10.379 03:27:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:10.379 03:27:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:10.379 03:27:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:10.379 03:27:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:10.379 03:27:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:10.379 03:27:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:10.379 03:27:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:10.379 03:27:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:10.379 03:27:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:10.379 03:27:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:10.379 03:27:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:10.379 03:27:44 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:10.379 03:27:44 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:10.379 03:27:44 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:10.379 03:27:44 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:10.379 03:27:44 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:10.379 03:27:44 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.379 03:27:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.379 03:27:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.379 03:27:44 -- paths/export.sh@5 -- # export PATH 00:27:10.379 03:27:44 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.379 03:27:44 -- nvmf/common.sh@47 -- # : 0 00:27:10.379 03:27:44 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:10.379 03:27:44 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:10.379 03:27:44 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:10.379 03:27:44 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:10.379 03:27:44 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:10.379 03:27:44 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:10.379 03:27:44 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:10.379 03:27:44 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:10.379 03:27:44 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:27:10.379 03:27:44 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:27:10.379 03:27:44 -- host/digest.sh@16 -- # runtime=2 00:27:10.379 03:27:44 -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:27:10.379 03:27:44 -- host/digest.sh@138 -- # nvmftestinit 00:27:10.379 03:27:44 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:27:10.379 03:27:44 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:10.379 03:27:44 -- nvmf/common.sh@437 -- # prepare_net_devs 00:27:10.379 03:27:44 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:27:10.379 03:27:44 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:27:10.379 03:27:44 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:10.379 03:27:44 -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:27:10.379 03:27:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:10.379 03:27:44 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:27:10.379 03:27:44 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:27:10.379 03:27:44 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:10.379 03:27:44 -- common/autotest_common.sh@10 -- # set +x 00:27:12.284 03:27:46 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:12.284 03:27:46 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:12.284 03:27:46 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:12.284 03:27:46 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:12.284 03:27:46 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:12.284 03:27:46 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:12.284 03:27:46 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:12.284 03:27:46 -- nvmf/common.sh@295 -- # net_devs=() 00:27:12.284 03:27:46 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:12.284 03:27:46 -- nvmf/common.sh@296 -- # e810=() 00:27:12.284 03:27:46 -- nvmf/common.sh@296 -- # local -ga e810 00:27:12.284 03:27:46 -- nvmf/common.sh@297 -- # x722=() 00:27:12.284 03:27:46 -- nvmf/common.sh@297 -- # local -ga x722 00:27:12.284 03:27:46 -- nvmf/common.sh@298 -- # mlx=() 00:27:12.284 03:27:46 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:12.284 03:27:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:12.284 03:27:46 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:12.284 03:27:46 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:12.284 03:27:46 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:12.284 03:27:46 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:12.284 03:27:46 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:12.284 03:27:46 -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:12.284 03:27:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:12.284 03:27:46 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:12.284 03:27:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:12.284 03:27:46 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:12.284 03:27:46 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:12.284 03:27:46 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:12.284 03:27:46 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:12.284 03:27:46 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:12.284 03:27:46 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:12.284 03:27:46 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:12.284 03:27:46 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:12.284 03:27:46 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:12.284 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:12.284 03:27:46 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:12.284 03:27:46 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:12.284 03:27:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:12.284 03:27:46 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:12.284 03:27:46 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:12.284 03:27:46 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:12.284 03:27:46 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:12.284 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:12.284 03:27:46 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:12.284 03:27:46 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:12.284 03:27:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:12.284 03:27:46 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:12.284 03:27:46 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
00:27:12.284 03:27:46 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:12.284 03:27:46 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:12.284 03:27:46 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:12.284 03:27:46 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:12.284 03:27:46 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:12.284 03:27:46 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:27:12.284 03:27:46 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:12.284 03:27:46 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:12.284 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:12.284 03:27:46 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:27:12.284 03:27:46 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:12.284 03:27:46 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:12.284 03:27:46 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:27:12.284 03:27:46 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:12.284 03:27:46 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:12.284 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:12.284 03:27:46 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:27:12.284 03:27:46 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:27:12.284 03:27:46 -- nvmf/common.sh@403 -- # is_hw=yes 00:27:12.284 03:27:46 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:27:12.284 03:27:46 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:27:12.284 03:27:46 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:27:12.284 03:27:46 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:12.284 03:27:46 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:12.284 03:27:46 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:12.284 03:27:46 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:12.284 03:27:46 -- nvmf/common.sh@236 -- 
# NVMF_TARGET_INTERFACE=cvl_0_0 00:27:12.284 03:27:46 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:12.284 03:27:46 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:12.284 03:27:46 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:12.284 03:27:46 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:12.284 03:27:46 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:12.284 03:27:46 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:12.284 03:27:46 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:12.284 03:27:46 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:12.284 03:27:46 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:12.284 03:27:46 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:12.284 03:27:46 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:12.284 03:27:46 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:12.284 03:27:46 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:12.284 03:27:46 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:12.284 03:27:46 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:12.284 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:12.284 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:27:12.284 00:27:12.284 --- 10.0.0.2 ping statistics --- 00:27:12.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:12.284 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:27:12.284 03:27:46 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:12.543 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:12.543 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:27:12.543 00:27:12.543 --- 10.0.0.1 ping statistics --- 00:27:12.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:12.543 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:27:12.543 03:27:46 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:12.543 03:27:46 -- nvmf/common.sh@411 -- # return 0 00:27:12.543 03:27:46 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:27:12.543 03:27:46 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:12.543 03:27:46 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:27:12.543 03:27:46 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:27:12.543 03:27:46 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:12.543 03:27:46 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:27:12.543 03:27:46 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:27:12.543 03:27:46 -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:12.543 03:27:46 -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:27:12.543 03:27:46 -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:27:12.543 03:27:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:12.543 03:27:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:12.543 03:27:46 -- common/autotest_common.sh@10 -- # set +x 00:27:12.543 ************************************ 00:27:12.543 START TEST nvmf_digest_clean 00:27:12.543 ************************************ 00:27:12.543 03:27:46 -- common/autotest_common.sh@1111 -- # run_digest 00:27:12.543 03:27:46 -- host/digest.sh@120 -- # local dsa_initiator 00:27:12.543 03:27:46 -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:27:12.543 03:27:46 -- host/digest.sh@121 -- # dsa_initiator=false 00:27:12.543 03:27:46 -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:27:12.543 03:27:46 -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:27:12.543 03:27:46 -- 
nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:27:12.543 03:27:46 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:12.543 03:27:46 -- common/autotest_common.sh@10 -- # set +x 00:27:12.543 03:27:46 -- nvmf/common.sh@470 -- # nvmfpid=1613104 00:27:12.543 03:27:46 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:12.543 03:27:46 -- nvmf/common.sh@471 -- # waitforlisten 1613104 00:27:12.543 03:27:46 -- common/autotest_common.sh@817 -- # '[' -z 1613104 ']' 00:27:12.543 03:27:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:12.543 03:27:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:12.543 03:27:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:12.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:12.543 03:27:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:12.543 03:27:46 -- common/autotest_common.sh@10 -- # set +x 00:27:12.543 [2024-04-25 03:27:46.942365] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:27:12.543 [2024-04-25 03:27:46.942468] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:12.543 EAL: No free 2048 kB hugepages reported on node 1 00:27:12.543 [2024-04-25 03:27:47.012266] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:12.803 [2024-04-25 03:27:47.130454] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:12.803 [2024-04-25 03:27:47.130523] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:12.803 [2024-04-25 03:27:47.130547] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:12.803 [2024-04-25 03:27:47.130561] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:12.803 [2024-04-25 03:27:47.130572] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:12.803 [2024-04-25 03:27:47.130605] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:13.765 03:27:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:13.765 03:27:47 -- common/autotest_common.sh@850 -- # return 0 00:27:13.765 03:27:47 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:27:13.765 03:27:47 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:13.765 03:27:47 -- common/autotest_common.sh@10 -- # set +x 00:27:13.765 03:27:47 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:13.765 03:27:47 -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:27:13.765 03:27:47 -- host/digest.sh@126 -- # common_target_config 00:27:13.765 03:27:47 -- host/digest.sh@43 -- # rpc_cmd 00:27:13.765 03:27:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:13.765 03:27:47 -- common/autotest_common.sh@10 -- # set +x 00:27:13.765 null0 00:27:13.765 [2024-04-25 03:27:48.035982] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:13.765 [2024-04-25 03:27:48.060199] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:13.765 03:27:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:13.765 03:27:48 -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:27:13.765 03:27:48 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:13.765 03:27:48 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:13.765 03:27:48 -- host/digest.sh@80 -- # rw=randread 00:27:13.765 
03:27:48 -- host/digest.sh@80 -- # bs=4096 00:27:13.765 03:27:48 -- host/digest.sh@80 -- # qd=128 00:27:13.765 03:27:48 -- host/digest.sh@80 -- # scan_dsa=false 00:27:13.765 03:27:48 -- host/digest.sh@83 -- # bperfpid=1613256 00:27:13.765 03:27:48 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:13.765 03:27:48 -- host/digest.sh@84 -- # waitforlisten 1613256 /var/tmp/bperf.sock 00:27:13.765 03:27:48 -- common/autotest_common.sh@817 -- # '[' -z 1613256 ']' 00:27:13.765 03:27:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:13.765 03:27:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:13.765 03:27:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:13.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:13.765 03:27:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:13.765 03:27:48 -- common/autotest_common.sh@10 -- # set +x 00:27:13.766 [2024-04-25 03:27:48.105262] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 
00:27:13.766 [2024-04-25 03:27:48.105336] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1613256 ] 00:27:13.766 EAL: No free 2048 kB hugepages reported on node 1 00:27:13.766 [2024-04-25 03:27:48.165772] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:14.031 [2024-04-25 03:27:48.283222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:14.598 03:27:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:14.598 03:27:49 -- common/autotest_common.sh@850 -- # return 0 00:27:14.598 03:27:49 -- host/digest.sh@86 -- # false 00:27:14.598 03:27:49 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:14.598 03:27:49 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:15.169 03:27:49 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:15.169 03:27:49 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:15.428 nvme0n1 00:27:15.429 03:27:49 -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:15.429 03:27:49 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:15.429 Running I/O for 2 seconds... 
00:27:17.331 00:27:17.331 Latency(us) 00:27:17.331 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:17.331 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:17.331 nvme0n1 : 2.00 18647.58 72.84 0.00 0.00 6856.55 3082.62 11845.03 00:27:17.331 =================================================================================================================== 00:27:17.331 Total : 18647.58 72.84 0.00 0.00 6856.55 3082.62 11845.03 00:27:17.331 0 00:27:17.331 03:27:51 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:17.331 03:27:51 -- host/digest.sh@93 -- # get_accel_stats 00:27:17.331 03:27:51 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:17.331 03:27:51 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:17.331 03:27:51 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:17.331 | select(.opcode=="crc32c") 00:27:17.331 | "\(.module_name) \(.executed)"' 00:27:17.590 03:27:52 -- host/digest.sh@94 -- # false 00:27:17.590 03:27:52 -- host/digest.sh@94 -- # exp_module=software 00:27:17.590 03:27:52 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:17.590 03:27:52 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:17.590 03:27:52 -- host/digest.sh@98 -- # killprocess 1613256 00:27:17.590 03:27:52 -- common/autotest_common.sh@936 -- # '[' -z 1613256 ']' 00:27:17.590 03:27:52 -- common/autotest_common.sh@940 -- # kill -0 1613256 00:27:17.590 03:27:52 -- common/autotest_common.sh@941 -- # uname 00:27:17.590 03:27:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:17.590 03:27:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1613256 00:27:17.849 03:27:52 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:27:17.849 03:27:52 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:27:17.849 03:27:52 -- common/autotest_common.sh@954 -- # 
echo 'killing process with pid 1613256' 00:27:17.849 killing process with pid 1613256 00:27:17.849 03:27:52 -- common/autotest_common.sh@955 -- # kill 1613256 00:27:17.849 Received shutdown signal, test time was about 2.000000 seconds 00:27:17.849 00:27:17.849 Latency(us) 00:27:17.849 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:17.849 =================================================================================================================== 00:27:17.849 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:17.849 03:27:52 -- common/autotest_common.sh@960 -- # wait 1613256 00:27:18.110 03:27:52 -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:27:18.110 03:27:52 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:18.110 03:27:52 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:18.110 03:27:52 -- host/digest.sh@80 -- # rw=randread 00:27:18.110 03:27:52 -- host/digest.sh@80 -- # bs=131072 00:27:18.110 03:27:52 -- host/digest.sh@80 -- # qd=16 00:27:18.110 03:27:52 -- host/digest.sh@80 -- # scan_dsa=false 00:27:18.110 03:27:52 -- host/digest.sh@83 -- # bperfpid=1613792 00:27:18.110 03:27:52 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:27:18.110 03:27:52 -- host/digest.sh@84 -- # waitforlisten 1613792 /var/tmp/bperf.sock 00:27:18.110 03:27:52 -- common/autotest_common.sh@817 -- # '[' -z 1613792 ']' 00:27:18.110 03:27:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:18.110 03:27:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:18.110 03:27:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:18.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:27:18.110 03:27:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:18.110 03:27:52 -- common/autotest_common.sh@10 -- # set +x 00:27:18.110 [2024-04-25 03:27:52.397601] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:27:18.110 [2024-04-25 03:27:52.397719] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1613792 ] 00:27:18.110 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:18.110 Zero copy mechanism will not be used. 00:27:18.110 EAL: No free 2048 kB hugepages reported on node 1 00:27:18.110 [2024-04-25 03:27:52.457459] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:18.110 [2024-04-25 03:27:52.574524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:18.369 03:27:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:18.369 03:27:52 -- common/autotest_common.sh@850 -- # return 0 00:27:18.369 03:27:52 -- host/digest.sh@86 -- # false 00:27:18.369 03:27:52 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:18.369 03:27:52 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:18.628 03:27:53 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:18.628 03:27:53 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:19.193 nvme0n1 00:27:19.193 03:27:53 -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:19.193 03:27:53 -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:19.193 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:19.193 Zero copy mechanism will not be used. 00:27:19.193 Running I/O for 2 seconds... 00:27:21.095 00:27:21.095 Latency(us) 00:27:21.095 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:21.095 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:27:21.095 nvme0n1 : 2.01 2082.25 260.28 0.00 0.00 7678.90 7136.14 17767.54 00:27:21.095 =================================================================================================================== 00:27:21.095 Total : 2082.25 260.28 0.00 0.00 7678.90 7136.14 17767.54 00:27:21.095 0 00:27:21.353 03:27:55 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:21.353 03:27:55 -- host/digest.sh@93 -- # get_accel_stats 00:27:21.353 03:27:55 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:21.353 03:27:55 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:21.353 03:27:55 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:21.353 | select(.opcode=="crc32c") 00:27:21.353 | "\(.module_name) \(.executed)"' 00:27:21.611 03:27:55 -- host/digest.sh@94 -- # false 00:27:21.611 03:27:55 -- host/digest.sh@94 -- # exp_module=software 00:27:21.611 03:27:55 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:21.611 03:27:55 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:21.611 03:27:55 -- host/digest.sh@98 -- # killprocess 1613792 00:27:21.611 03:27:55 -- common/autotest_common.sh@936 -- # '[' -z 1613792 ']' 00:27:21.611 03:27:55 -- common/autotest_common.sh@940 -- # kill -0 1613792 00:27:21.611 03:27:55 -- common/autotest_common.sh@941 -- # uname 00:27:21.611 03:27:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:21.611 03:27:55 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1613792 00:27:21.611 03:27:55 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:27:21.611 03:27:55 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:27:21.611 03:27:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1613792' 00:27:21.611 killing process with pid 1613792 00:27:21.611 03:27:55 -- common/autotest_common.sh@955 -- # kill 1613792 00:27:21.611 Received shutdown signal, test time was about 2.000000 seconds 00:27:21.611 00:27:21.611 Latency(us) 00:27:21.611 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:21.611 =================================================================================================================== 00:27:21.611 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:21.611 03:27:55 -- common/autotest_common.sh@960 -- # wait 1613792 00:27:21.870 03:27:56 -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:27:21.870 03:27:56 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:21.870 03:27:56 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:21.870 03:27:56 -- host/digest.sh@80 -- # rw=randwrite 00:27:21.870 03:27:56 -- host/digest.sh@80 -- # bs=4096 00:27:21.870 03:27:56 -- host/digest.sh@80 -- # qd=128 00:27:21.870 03:27:56 -- host/digest.sh@80 -- # scan_dsa=false 00:27:21.870 03:27:56 -- host/digest.sh@83 -- # bperfpid=1614201 00:27:21.870 03:27:56 -- host/digest.sh@84 -- # waitforlisten 1614201 /var/tmp/bperf.sock 00:27:21.870 03:27:56 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:21.870 03:27:56 -- common/autotest_common.sh@817 -- # '[' -z 1614201 ']' 00:27:21.870 03:27:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:21.870 03:27:56 -- common/autotest_common.sh@822 -- # local 
max_retries=100 00:27:21.870 03:27:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:21.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:21.870 03:27:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:21.870 03:27:56 -- common/autotest_common.sh@10 -- # set +x 00:27:21.870 [2024-04-25 03:27:56.197532] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:27:21.870 [2024-04-25 03:27:56.197601] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1614201 ] 00:27:21.870 EAL: No free 2048 kB hugepages reported on node 1 00:27:21.870 [2024-04-25 03:27:56.257907] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:22.128 [2024-04-25 03:27:56.371892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:22.128 03:27:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:22.128 03:27:56 -- common/autotest_common.sh@850 -- # return 0 00:27:22.128 03:27:56 -- host/digest.sh@86 -- # false 00:27:22.128 03:27:56 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:22.128 03:27:56 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:22.387 03:27:56 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:22.387 03:27:56 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:22.957 nvme0n1 00:27:22.957 03:27:57 -- host/digest.sh@92 -- # 
bperf_py perform_tests 00:27:22.957 03:27:57 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:22.957 Running I/O for 2 seconds... 00:27:24.866 00:27:24.866 Latency(us) 00:27:24.866 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:24.866 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:24.866 nvme0n1 : 2.01 18155.63 70.92 0.00 0.00 7033.78 6262.33 15922.82 00:27:24.866 =================================================================================================================== 00:27:24.866 Total : 18155.63 70.92 0.00 0.00 7033.78 6262.33 15922.82 00:27:24.866 0 00:27:24.866 03:27:59 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:24.866 03:27:59 -- host/digest.sh@93 -- # get_accel_stats 00:27:24.866 03:27:59 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:24.866 03:27:59 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:24.866 03:27:59 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:24.866 | select(.opcode=="crc32c") 00:27:24.866 | "\(.module_name) \(.executed)"' 00:27:25.125 03:27:59 -- host/digest.sh@94 -- # false 00:27:25.125 03:27:59 -- host/digest.sh@94 -- # exp_module=software 00:27:25.125 03:27:59 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:25.125 03:27:59 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:25.125 03:27:59 -- host/digest.sh@98 -- # killprocess 1614201 00:27:25.125 03:27:59 -- common/autotest_common.sh@936 -- # '[' -z 1614201 ']' 00:27:25.125 03:27:59 -- common/autotest_common.sh@940 -- # kill -0 1614201 00:27:25.125 03:27:59 -- common/autotest_common.sh@941 -- # uname 00:27:25.125 03:27:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:25.125 03:27:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 
1614201 00:27:25.125 03:27:59 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:27:25.125 03:27:59 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:27:25.125 03:27:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1614201' 00:27:25.125 killing process with pid 1614201 00:27:25.125 03:27:59 -- common/autotest_common.sh@955 -- # kill 1614201 00:27:25.125 Received shutdown signal, test time was about 2.000000 seconds 00:27:25.125 00:27:25.126 Latency(us) 00:27:25.126 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:25.126 =================================================================================================================== 00:27:25.126 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:25.126 03:27:59 -- common/autotest_common.sh@960 -- # wait 1614201 00:27:25.384 03:27:59 -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:27:25.384 03:27:59 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:25.384 03:27:59 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:25.384 03:27:59 -- host/digest.sh@80 -- # rw=randwrite 00:27:25.384 03:27:59 -- host/digest.sh@80 -- # bs=131072 00:27:25.384 03:27:59 -- host/digest.sh@80 -- # qd=16 00:27:25.384 03:27:59 -- host/digest.sh@80 -- # scan_dsa=false 00:27:25.384 03:27:59 -- host/digest.sh@83 -- # bperfpid=1614611 00:27:25.384 03:27:59 -- host/digest.sh@84 -- # waitforlisten 1614611 /var/tmp/bperf.sock 00:27:25.384 03:27:59 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:27:25.384 03:27:59 -- common/autotest_common.sh@817 -- # '[' -z 1614611 ']' 00:27:25.384 03:27:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:25.384 03:27:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:25.384 03:27:59 -- 
common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:25.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:25.384 03:27:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:25.384 03:27:59 -- common/autotest_common.sh@10 -- # set +x 00:27:25.641 [2024-04-25 03:27:59.914267] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:27:25.641 [2024-04-25 03:27:59.914346] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1614611 ] 00:27:25.641 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:25.641 Zero copy mechanism will not be used. 00:27:25.641 EAL: No free 2048 kB hugepages reported on node 1 00:27:25.641 [2024-04-25 03:27:59.972316] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:25.641 [2024-04-25 03:28:00.084333] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:25.641 03:28:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:25.641 03:28:00 -- common/autotest_common.sh@850 -- # return 0 00:27:25.641 03:28:00 -- host/digest.sh@86 -- # false 00:27:25.641 03:28:00 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:25.641 03:28:00 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:26.207 03:28:00 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:26.207 03:28:00 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:26.467 nvme0n1 00:27:26.467 03:28:00 -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:26.467 03:28:00 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:26.467 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:26.467 Zero copy mechanism will not be used. 00:27:26.467 Running I/O for 2 seconds... 00:27:29.051 00:27:29.051 Latency(us) 00:27:29.051 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:29.051 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:29.051 nvme0n1 : 2.01 1342.94 167.87 0.00 0.00 11874.10 8738.13 20097.71 00:27:29.051 =================================================================================================================== 00:27:29.051 Total : 1342.94 167.87 0.00 0.00 11874.10 8738.13 20097.71 00:27:29.051 0 00:27:29.051 03:28:02 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:29.051 03:28:02 -- host/digest.sh@93 -- # get_accel_stats 00:27:29.051 03:28:02 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:29.051 03:28:02 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:29.051 03:28:02 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:29.051 | select(.opcode=="crc32c") 00:27:29.051 | "\(.module_name) \(.executed)"' 00:27:29.051 03:28:03 -- host/digest.sh@94 -- # false 00:27:29.051 03:28:03 -- host/digest.sh@94 -- # exp_module=software 00:27:29.051 03:28:03 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:29.051 03:28:03 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:29.051 03:28:03 -- host/digest.sh@98 -- # killprocess 1614611 00:27:29.051 03:28:03 -- common/autotest_common.sh@936 -- # '[' -z 1614611 ']' 00:27:29.051 03:28:03 -- common/autotest_common.sh@940 -- # kill -0 
1614611 00:27:29.051 03:28:03 -- common/autotest_common.sh@941 -- # uname 00:27:29.051 03:28:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:29.051 03:28:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1614611 00:27:29.051 03:28:03 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:27:29.051 03:28:03 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:27:29.051 03:28:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1614611' 00:27:29.051 killing process with pid 1614611 00:27:29.051 03:28:03 -- common/autotest_common.sh@955 -- # kill 1614611 00:27:29.051 Received shutdown signal, test time was about 2.000000 seconds 00:27:29.051 00:27:29.051 Latency(us) 00:27:29.051 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:29.051 =================================================================================================================== 00:27:29.051 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:29.051 03:28:03 -- common/autotest_common.sh@960 -- # wait 1614611 00:27:29.051 03:28:03 -- host/digest.sh@132 -- # killprocess 1613104 00:27:29.051 03:28:03 -- common/autotest_common.sh@936 -- # '[' -z 1613104 ']' 00:27:29.051 03:28:03 -- common/autotest_common.sh@940 -- # kill -0 1613104 00:27:29.051 03:28:03 -- common/autotest_common.sh@941 -- # uname 00:27:29.051 03:28:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:29.051 03:28:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1613104 00:27:29.051 03:28:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:29.051 03:28:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:29.051 03:28:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1613104' 00:27:29.051 killing process with pid 1613104 00:27:29.051 03:28:03 -- common/autotest_common.sh@955 -- # kill 1613104 00:27:29.051 03:28:03 -- common/autotest_common.sh@960 
-- # wait 1613104 00:27:29.619 00:27:29.619 real 0m16.925s 00:27:29.619 user 0m33.762s 00:27:29.619 sys 0m3.814s 00:27:29.619 03:28:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:29.619 03:28:03 -- common/autotest_common.sh@10 -- # set +x 00:27:29.619 ************************************ 00:27:29.619 END TEST nvmf_digest_clean 00:27:29.619 ************************************ 00:27:29.619 03:28:03 -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:27:29.619 03:28:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:29.619 03:28:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:29.619 03:28:03 -- common/autotest_common.sh@10 -- # set +x 00:27:29.619 ************************************ 00:27:29.619 START TEST nvmf_digest_error 00:27:29.619 ************************************ 00:27:29.619 03:28:03 -- common/autotest_common.sh@1111 -- # run_digest_error 00:27:29.619 03:28:03 -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:27:29.619 03:28:03 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:27:29.619 03:28:03 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:29.619 03:28:03 -- common/autotest_common.sh@10 -- # set +x 00:27:29.619 03:28:03 -- nvmf/common.sh@470 -- # nvmfpid=1615173 00:27:29.619 03:28:03 -- nvmf/common.sh@471 -- # waitforlisten 1615173 00:27:29.619 03:28:03 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:29.619 03:28:03 -- common/autotest_common.sh@817 -- # '[' -z 1615173 ']' 00:27:29.619 03:28:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:29.619 03:28:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:29.619 03:28:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:27:29.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:29.619 03:28:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:29.619 03:28:03 -- common/autotest_common.sh@10 -- # set +x 00:27:29.619 [2024-04-25 03:28:03.990969] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:27:29.619 [2024-04-25 03:28:03.991056] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:29.619 EAL: No free 2048 kB hugepages reported on node 1 00:27:29.619 [2024-04-25 03:28:04.060056] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:29.878 [2024-04-25 03:28:04.177381] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:29.878 [2024-04-25 03:28:04.177446] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:29.878 [2024-04-25 03:28:04.177472] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:29.878 [2024-04-25 03:28:04.177485] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:29.878 [2024-04-25 03:28:04.177497] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:29.878 [2024-04-25 03:28:04.177529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:30.448 03:28:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:30.448 03:28:04 -- common/autotest_common.sh@850 -- # return 0 00:27:30.448 03:28:04 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:27:30.448 03:28:04 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:30.448 03:28:04 -- common/autotest_common.sh@10 -- # set +x 00:27:30.706 03:28:04 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:30.706 03:28:04 -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:27:30.706 03:28:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:30.706 03:28:04 -- common/autotest_common.sh@10 -- # set +x 00:27:30.706 [2024-04-25 03:28:04.968049] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:27:30.706 03:28:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:30.706 03:28:04 -- host/digest.sh@105 -- # common_target_config 00:27:30.706 03:28:04 -- host/digest.sh@43 -- # rpc_cmd 00:27:30.706 03:28:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:30.706 03:28:04 -- common/autotest_common.sh@10 -- # set +x 00:27:30.706 null0 00:27:30.706 [2024-04-25 03:28:05.091399] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:30.706 [2024-04-25 03:28:05.115621] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:30.706 03:28:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:30.706 03:28:05 -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:27:30.706 03:28:05 -- host/digest.sh@54 -- # local rw bs qd 00:27:30.706 03:28:05 -- host/digest.sh@56 -- # rw=randread 00:27:30.706 03:28:05 -- host/digest.sh@56 -- # bs=4096 00:27:30.706 03:28:05 -- host/digest.sh@56 -- # qd=128 00:27:30.706 03:28:05 -- 
host/digest.sh@58 -- # bperfpid=1615324 00:27:30.706 03:28:05 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:27:30.706 03:28:05 -- host/digest.sh@60 -- # waitforlisten 1615324 /var/tmp/bperf.sock 00:27:30.706 03:28:05 -- common/autotest_common.sh@817 -- # '[' -z 1615324 ']' 00:27:30.706 03:28:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:30.706 03:28:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:30.706 03:28:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:30.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:30.706 03:28:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:30.706 03:28:05 -- common/autotest_common.sh@10 -- # set +x 00:27:30.706 [2024-04-25 03:28:05.157871] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 
00:27:30.706 [2024-04-25 03:28:05.157951] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1615324 ] 00:27:30.706 EAL: No free 2048 kB hugepages reported on node 1 00:27:30.964 [2024-04-25 03:28:05.218775] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:30.964 [2024-04-25 03:28:05.334573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:30.964 03:28:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:30.964 03:28:05 -- common/autotest_common.sh@850 -- # return 0 00:27:30.964 03:28:05 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:30.964 03:28:05 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:31.531 03:28:05 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:31.531 03:28:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:31.531 03:28:05 -- common/autotest_common.sh@10 -- # set +x 00:27:31.531 03:28:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:31.531 03:28:05 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:31.531 03:28:05 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:31.790 nvme0n1 00:27:31.790 03:28:06 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:27:31.790 03:28:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:31.790 03:28:06 -- common/autotest_common.sh@10 -- # 
set +x 00:27:31.790 03:28:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:31.790 03:28:06 -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:31.790 03:28:06 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:31.790 Running I/O for 2 seconds... 00:27:32.048 [2024-04-25 03:28:06.305949] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:32.048 [2024-04-25 03:28:06.306021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.048 [2024-04-25 03:28:06.306066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.048 [2024-04-25 03:28:06.321703] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:32.048 [2024-04-25 03:28:06.321736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:1077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.048 [2024-04-25 03:28:06.321754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.048 [2024-04-25 03:28:06.336124] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:32.048 [2024-04-25 03:28:06.336160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.048 [2024-04-25 03:28:06.336184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.048 [2024-04-25 03:28:06.349028] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:32.048 [2024-04-25 03:28:06.349063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:10236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.048 [2024-04-25 03:28:06.349082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.048 [2024-04-25 03:28:06.364497] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:32.048 [2024-04-25 03:28:06.364532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:23567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.048 [2024-04-25 03:28:06.364558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.048 [2024-04-25 03:28:06.379890] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:32.048 [2024-04-25 03:28:06.379920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:6913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.048 [2024-04-25 03:28:06.379954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.048 [2024-04-25 03:28:06.392228] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:32.049 [2024-04-25 03:28:06.392261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:17188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.049 [2024-04-25 03:28:06.392281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:27:32.049 [2024-04-25 03:28:06.406566] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:32.049 [2024-04-25 03:28:06.406600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:13254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.049 [2024-04-25 03:28:06.406634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.049 [2024-04-25 03:28:06.421331] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:32.049 [2024-04-25 03:28:06.421365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:18452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.049 [2024-04-25 03:28:06.421391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.049 [2024-04-25 03:28:06.433324] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:32.049 [2024-04-25 03:28:06.433358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:1709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.049 [2024-04-25 03:28:06.433376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.049 [2024-04-25 03:28:06.447727] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:32.049 [2024-04-25 03:28:06.447757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:6868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.049 [2024-04-25 03:28:06.447785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.049 [2024-04-25 03:28:06.462456] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:32.049 [2024-04-25 03:28:06.462490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:24355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.049 [2024-04-25 03:28:06.462510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.049 [2024-04-25 03:28:06.475912] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:32.049 [2024-04-25 03:28:06.475942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.049 [2024-04-25 03:28:06.475976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.049 [2024-04-25 03:28:06.490816] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:32.049 [2024-04-25 03:28:06.490846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:17369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.049 [2024-04-25 03:28:06.490866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.049 [2024-04-25 03:28:06.502168] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:32.049 [2024-04-25 03:28:06.502202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:5525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.049 [2024-04-25 
03:28:06.502220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.049 [2024-04-25 03:28:06.517673] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:32.049 [2024-04-25 03:28:06.517703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:16875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.049 [2024-04-25 03:28:06.517724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.049 [2024-04-25 03:28:06.531225] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:32.049 [2024-04-25 03:28:06.531258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:12703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.049 [2024-04-25 03:28:06.531278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.049 [2024-04-25 03:28:06.546084] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:32.049 [2024-04-25 03:28:06.546124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:18199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.049 [2024-04-25 03:28:06.546143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.307 [2024-04-25 03:28:06.559948] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:32.307 [2024-04-25 03:28:06.559982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:22503 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.307 [2024-04-25 03:28:06.560004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.307 [2024-04-25 03:28:06.574148] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:32.307 [2024-04-25 03:28:06.574183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:8935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.307 [2024-04-25 03:28:06.574202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.307 [2024-04-25 03:28:06.588311] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:32.307 [2024-04-25 03:28:06.588344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:17823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.307 [2024-04-25 03:28:06.588363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.308 [2024-04-25 03:28:06.601477] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:32.308 [2024-04-25 03:28:06.601510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:12008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.308 [2024-04-25 03:28:06.601533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.308 [2024-04-25 03:28:06.615483] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:32.308 [2024-04-25 03:28:06.615516] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:1973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.308 [2024-04-25 03:28:06.615535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.308 [2024-04-25 03:28:06.631101] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:32.308 [2024-04-25 03:28:06.631134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:2514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.308 [2024-04-25 03:28:06.631152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.308 [2024-04-25 03:28:06.643170] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:32.308 [2024-04-25 03:28:06.643205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:10108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.308 [2024-04-25 03:28:06.643224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.308 [2024-04-25 03:28:06.658270] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:32.308 [2024-04-25 03:28:06.658303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:25022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.308 [2024-04-25 03:28:06.658321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.308 [2024-04-25 03:28:06.672443] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x24c14a0) 00:27:32.308 [2024-04-25 03:28:06.672476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:7774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.308 [2024-04-25 03:28:06.672496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.308 [2024-04-25 03:28:06.685573] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:32.308 [2024-04-25 03:28:06.685607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:2388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.308 [2024-04-25 03:28:06.685640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.308 [2024-04-25 03:28:06.700708] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:32.308 [2024-04-25 03:28:06.700739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:19597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.308 [2024-04-25 03:28:06.700756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.308 [2024-04-25 03:28:06.713207] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:32.308 [2024-04-25 03:28:06.713241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:3066 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.308 [2024-04-25 03:28:06.713260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.308 [2024-04-25 03:28:06.728245] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:32.308 [2024-04-25 03:28:06.728279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:25264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.308 [2024-04-25 03:28:06.728298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.308 [2024-04-25 03:28:06.741255] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:32.308 [2024-04-25 03:28:06.741288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:15624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.308 [2024-04-25 03:28:06.741309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.308 [2024-04-25 03:28:06.756226] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:32.308 [2024-04-25 03:28:06.756259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:20119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.308 [2024-04-25 03:28:06.756277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.308 [2024-04-25 03:28:06.770036] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:32.308 [2024-04-25 03:28:06.770069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:23023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.308 [2024-04-25 03:28:06.770088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:27:32.308 [2024-04-25 03:28:06.782201] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:32.308 [2024-04-25 03:28:06.782234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:18152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.308 [2024-04-25 03:28:06.782252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.308 [2024-04-25 03:28:06.797279] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:32.308 [2024-04-25 03:28:06.797314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:11125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.308 [2024-04-25 03:28:06.797333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.567 [2024-04-25 03:28:06.811938] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:32.567 [2024-04-25 03:28:06.811992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:22991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.567 [2024-04-25 03:28:06.812012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.567 [2024-04-25 03:28:06.825336] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:32.567 [2024-04-25 03:28:06.825370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:11740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.567 [2024-04-25 03:28:06.825389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.567 [2024-04-25 03:28:06.838193] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:32.567 [2024-04-25 03:28:06.838227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:1714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.567 [2024-04-25 03:28:06.838247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.567 [2024-04-25 03:28:06.852919] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:32.567 [2024-04-25 03:28:06.852967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:15383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.567 [2024-04-25 03:28:06.852987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.567 [2024-04-25 03:28:06.866348] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:32.567 [2024-04-25 03:28:06.866382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.567 [2024-04-25 03:28:06.866401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.567 [2024-04-25 03:28:06.880202] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:32.567 [2024-04-25 03:28:06.880235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:22005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.567 [2024-04-25 
03:28:06.880254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.567 [2024-04-25 03:28:06.894322] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:32.567 [2024-04-25 03:28:06.894356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.567 [2024-04-25 03:28:06.894374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.567 [2024-04-25 03:28:06.906954] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:32.567 [2024-04-25 03:28:06.906988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:17750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.567 [2024-04-25 03:28:06.907007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.567 [2024-04-25 03:28:06.922882] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:32.568 [2024-04-25 03:28:06.922912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:6353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.568 [2024-04-25 03:28:06.922929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.568 [2024-04-25 03:28:06.935072] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:32.568 [2024-04-25 03:28:06.935107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:16986 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.568 [2024-04-25 03:28:06.935126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.568 [2024-04-25 03:28:06.950459] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:32.568 [2024-04-25 03:28:06.950494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:25320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.568 [2024-04-25 03:28:06.950513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.568 [2024-04-25 03:28:06.964828] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:32.568 [2024-04-25 03:28:06.964859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:12942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.568 [2024-04-25 03:28:06.964876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.568 [2024-04-25 03:28:06.976998] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:32.568 [2024-04-25 03:28:06.977032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:23350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.568 [2024-04-25 03:28:06.977051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.568 [2024-04-25 03:28:06.992897] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:32.568 [2024-04-25 03:28:06.992927] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:12060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.568 [2024-04-25 03:28:06.992944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.568 [2024-04-25 03:28:07.004566] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:32.568 [2024-04-25 03:28:07.004599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.568 [2024-04-25 03:28:07.004618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.568 [2024-04-25 03:28:07.018985] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:32.568 [2024-04-25 03:28:07.019020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:12805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.568 [2024-04-25 03:28:07.019039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.568 [2024-04-25 03:28:07.034040] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:32.568 [2024-04-25 03:28:07.034075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:6143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.568 [2024-04-25 03:28:07.034093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.568 [2024-04-25 03:28:07.048477] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x24c14a0) 00:27:32.568 [2024-04-25 03:28:07.048511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.568 [2024-04-25 03:28:07.048535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.568 [2024-04-25 03:28:07.063668] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:32.568 [2024-04-25 03:28:07.063714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.568 [2024-04-25 03:28:07.063732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.827 [2024-04-25 03:28:07.075268] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:32.827 [2024-04-25 03:28:07.075305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:20214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.827 [2024-04-25 03:28:07.075324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.827 [2024-04-25 03:28:07.091417] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:32.827 [2024-04-25 03:28:07.091451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.827 [2024-04-25 03:28:07.091470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.827 [2024-04-25 03:28:07.104375] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:32.827 [2024-04-25 03:28:07.104409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15794 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.827 [2024-04-25 03:28:07.104427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.827 [2024-04-25 03:28:07.117711] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:32.827 [2024-04-25 03:28:07.117741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.827 [2024-04-25 03:28:07.117773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.827 [2024-04-25 03:28:07.133110] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:32.827 [2024-04-25 03:28:07.133145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:21387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.827 [2024-04-25 03:28:07.133163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.827 [2024-04-25 03:28:07.148306] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:32.827 [2024-04-25 03:28:07.148340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:9413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.827 [2024-04-25 03:28:07.148359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:27:32.827 [2024-04-25 03:28:07.160198] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:32.827 [2024-04-25 03:28:07.160231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:17751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.827 [2024-04-25 03:28:07.160250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.827 [2024-04-25 03:28:07.174885] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:32.827 [2024-04-25 03:28:07.174919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:18329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.827 [2024-04-25 03:28:07.174937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.827 [2024-04-25 03:28:07.188686] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:32.827 [2024-04-25 03:28:07.188716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:25308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.827 [2024-04-25 03:28:07.188733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.827 [2024-04-25 03:28:07.201877] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:32.827 [2024-04-25 03:28:07.201906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:18141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.827 [2024-04-25 03:28:07.201923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.827 [2024-04-25 03:28:07.217215] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:32.827 [2024-04-25 03:28:07.217249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:19713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.827 [2024-04-25 03:28:07.217268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.827 [2024-04-25 03:28:07.229264] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:32.827 [2024-04-25 03:28:07.229298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:8229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.827 [2024-04-25 03:28:07.229316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.827 [2024-04-25 03:28:07.244487] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:32.827 [2024-04-25 03:28:07.244521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:10997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.827 [2024-04-25 03:28:07.244540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.827 [2024-04-25 03:28:07.258175] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:32.827 [2024-04-25 03:28:07.258209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:3984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.827 [2024-04-25 03:28:07.258228] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.827 [2024-04-25 03:28:07.270992] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:32.828 [2024-04-25 03:28:07.271026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:7790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.828 [2024-04-25 03:28:07.271045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.828 [2024-04-25 03:28:07.286167] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:32.828 [2024-04-25 03:28:07.286201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:19824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.828 [2024-04-25 03:28:07.286225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.828 [2024-04-25 03:28:07.301233] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:32.828 [2024-04-25 03:28:07.301267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:4815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.828 [2024-04-25 03:28:07.301287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.828 [2024-04-25 03:28:07.313042] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:32.828 [2024-04-25 03:28:07.313076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:16130 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:32.828 [2024-04-25 03:28:07.313095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.088 [2024-04-25 03:28:07.328359] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:33.088 [2024-04-25 03:28:07.328394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:15188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.088 [2024-04-25 03:28:07.328413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.088 [2024-04-25 03:28:07.342009] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:33.088 [2024-04-25 03:28:07.342061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:25111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.088 [2024-04-25 03:28:07.342086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.088 [2024-04-25 03:28:07.354828] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:33.088 [2024-04-25 03:28:07.354857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:12368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.088 [2024-04-25 03:28:07.354874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.088 [2024-04-25 03:28:07.369953] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:33.088 [2024-04-25 03:28:07.370001] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:124 nsid:1 lba:1994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.088 [2024-04-25 03:28:07.370021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.088 [2024-04-25 03:28:07.384231] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:33.088 [2024-04-25 03:28:07.384265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:3922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.088 [2024-04-25 03:28:07.384285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.088 [2024-04-25 03:28:07.397282] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:33.088 [2024-04-25 03:28:07.397316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:2601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.088 [2024-04-25 03:28:07.397335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.088 [2024-04-25 03:28:07.412403] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:33.088 [2024-04-25 03:28:07.412443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:12886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.088 [2024-04-25 03:28:07.412463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.088 [2024-04-25 03:28:07.426294] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:33.088 [2024-04-25 
03:28:07.426330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.089 [2024-04-25 03:28:07.426349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.089 [2024-04-25 03:28:07.440718] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:33.089 [2024-04-25 03:28:07.440751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:12794 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.089 [2024-04-25 03:28:07.440769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.089 [2024-04-25 03:28:07.453519] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:33.089 [2024-04-25 03:28:07.453554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:10530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.089 [2024-04-25 03:28:07.453573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.089 [2024-04-25 03:28:07.467233] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:33.089 [2024-04-25 03:28:07.467267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:9897 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.089 [2024-04-25 03:28:07.467286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.089 [2024-04-25 03:28:07.482331] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x24c14a0) 00:27:33.089 [2024-04-25 03:28:07.482366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.089 [2024-04-25 03:28:07.482384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.089 [2024-04-25 03:28:07.496100] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:33.089 [2024-04-25 03:28:07.496133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:6348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.089 [2024-04-25 03:28:07.496152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.089 [2024-04-25 03:28:07.510182] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:33.089 [2024-04-25 03:28:07.510216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.089 [2024-04-25 03:28:07.510235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.089 [2024-04-25 03:28:07.523988] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:33.089 [2024-04-25 03:28:07.524022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:1024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.089 [2024-04-25 03:28:07.524041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.089 [2024-04-25 03:28:07.537566] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:33.089 [2024-04-25 03:28:07.537600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:6792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.089 [2024-04-25 03:28:07.537619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.089 [2024-04-25 03:28:07.551954] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:33.089 [2024-04-25 03:28:07.551986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.089 [2024-04-25 03:28:07.552005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.089 [2024-04-25 03:28:07.566301] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:33.089 [2024-04-25 03:28:07.566335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:17137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.089 [2024-04-25 03:28:07.566354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.089 [2024-04-25 03:28:07.580431] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:33.089 [2024-04-25 03:28:07.580464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:14344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.089 [2024-04-25 03:28:07.580483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:27:33.349 [2024-04-25 03:28:07.591905] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:33.349 [2024-04-25 03:28:07.591936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:2364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.349 [2024-04-25 03:28:07.591969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.349 [2024-04-25 03:28:07.606415] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:33.350 [2024-04-25 03:28:07.606449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.350 [2024-04-25 03:28:07.606469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.350 [2024-04-25 03:28:07.621501] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:33.350 [2024-04-25 03:28:07.621535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:17394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.350 [2024-04-25 03:28:07.621554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.350 [2024-04-25 03:28:07.635401] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:33.350 [2024-04-25 03:28:07.635436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:10586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.350 [2024-04-25 03:28:07.635456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.350 [2024-04-25 03:28:07.647718] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:33.350 [2024-04-25 03:28:07.647749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:13506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.350 [2024-04-25 03:28:07.647786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.350 [2024-04-25 03:28:07.663178] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:33.350 [2024-04-25 03:28:07.663213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:18536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.350 [2024-04-25 03:28:07.663232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.350 [2024-04-25 03:28:07.676600] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:33.350 [2024-04-25 03:28:07.676641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:22578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.350 [2024-04-25 03:28:07.676661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.350 [2024-04-25 03:28:07.690102] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:33.350 [2024-04-25 03:28:07.690135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:17789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.350 [2024-04-25 
03:28:07.690154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.350 [2024-04-25 03:28:07.705111] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:33.350 [2024-04-25 03:28:07.705145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:4174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.350 [2024-04-25 03:28:07.705164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.350 [2024-04-25 03:28:07.718835] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:33.350 [2024-04-25 03:28:07.718866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:10168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.350 [2024-04-25 03:28:07.718882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.350 [2024-04-25 03:28:07.730879] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:33.350 [2024-04-25 03:28:07.730924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.350 [2024-04-25 03:28:07.730944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.350 [2024-04-25 03:28:07.746576] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:33.350 [2024-04-25 03:28:07.746609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:14061 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.350 [2024-04-25 03:28:07.746635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.350 [2024-04-25 03:28:07.760260] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:33.350 [2024-04-25 03:28:07.760294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:17287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.350 [2024-04-25 03:28:07.760313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.350 [2024-04-25 03:28:07.774048] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:33.350 [2024-04-25 03:28:07.774093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:19915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.350 [2024-04-25 03:28:07.774112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.350 [2024-04-25 03:28:07.788428] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:33.350 [2024-04-25 03:28:07.788462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:1167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.350 [2024-04-25 03:28:07.788480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.350 [2024-04-25 03:28:07.802155] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:33.350 [2024-04-25 03:28:07.802189] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:17170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.350 [2024-04-25 03:28:07.802208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.350 [2024-04-25 03:28:07.816529] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:33.350 [2024-04-25 03:28:07.816562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:7296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.350 [2024-04-25 03:28:07.816581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.350 [2024-04-25 03:28:07.830245] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:33.350 [2024-04-25 03:28:07.830279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:10689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.350 [2024-04-25 03:28:07.830297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.350 [2024-04-25 03:28:07.844360] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:33.350 [2024-04-25 03:28:07.844394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:7769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.350 [2024-04-25 03:28:07.844413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.610 [2024-04-25 03:28:07.859189] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x24c14a0) 00:27:33.610 [2024-04-25 03:28:07.859224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:23622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.610 [2024-04-25 03:28:07.859243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.610 [2024-04-25 03:28:07.871989] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:33.610 [2024-04-25 03:28:07.872021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.610 [2024-04-25 03:28:07.872038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.610 [2024-04-25 03:28:07.884657] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:33.610 [2024-04-25 03:28:07.884695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.610 [2024-04-25 03:28:07.884717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.610 [2024-04-25 03:28:07.898363] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:33.610 [2024-04-25 03:28:07.898393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:9910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.610 [2024-04-25 03:28:07.898424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.610 [2024-04-25 03:28:07.909698] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:33.610 [2024-04-25 03:28:07.909728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:3027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.610 [2024-04-25 03:28:07.909745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.610 [2024-04-25 03:28:07.924095] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:33.610 [2024-04-25 03:28:07.924125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:22589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.610 [2024-04-25 03:28:07.924141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.610 [2024-04-25 03:28:07.937757] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:33.610 [2024-04-25 03:28:07.937787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:22930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.610 [2024-04-25 03:28:07.937804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.610 [2024-04-25 03:28:07.948334] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:33.610 [2024-04-25 03:28:07.948362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:12820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.610 [2024-04-25 03:28:07.948378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:27:33.610 [2024-04-25 03:28:07.962882] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:33.610 [2024-04-25 03:28:07.962913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:23538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.610 [2024-04-25 03:28:07.962930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.610 [2024-04-25 03:28:07.977003] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:33.610 [2024-04-25 03:28:07.977033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:2397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.610 [2024-04-25 03:28:07.977050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.610 [2024-04-25 03:28:07.989339] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:33.610 [2024-04-25 03:28:07.989369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:12126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.610 [2024-04-25 03:28:07.989386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.610 [2024-04-25 03:28:08.001361] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:33.610 [2024-04-25 03:28:08.001397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:18513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.610 [2024-04-25 03:28:08.001415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.610 [2024-04-25 03:28:08.015597] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:33.610 [2024-04-25 03:28:08.015634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:21965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.610 [2024-04-25 03:28:08.015653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.610 [2024-04-25 03:28:08.028388] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:33.610 [2024-04-25 03:28:08.028434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:19698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.610 [2024-04-25 03:28:08.028452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.610 [2024-04-25 03:28:08.040693] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:33.610 [2024-04-25 03:28:08.040723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.610 [2024-04-25 03:28:08.040740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.610 [2024-04-25 03:28:08.053856] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:33.610 [2024-04-25 03:28:08.053886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:20154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.610 [2024-04-25 
03:28:08.053904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.610 [2024-04-25 03:28:08.066277] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:33.610 [2024-04-25 03:28:08.066307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:22397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.610 [2024-04-25 03:28:08.066324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.610 [2024-04-25 03:28:08.079427] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:33.610 [2024-04-25 03:28:08.079456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.610 [2024-04-25 03:28:08.079471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.610 [2024-04-25 03:28:08.092101] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:33.610 [2024-04-25 03:28:08.092131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:4904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.610 [2024-04-25 03:28:08.092148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.610 [2024-04-25 03:28:08.104309] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:33.610 [2024-04-25 03:28:08.104341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:12813 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.610 [2024-04-25 03:28:08.104357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.869 [2024-04-25 03:28:08.117151] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:33.869 [2024-04-25 03:28:08.117184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:3595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.869 [2024-04-25 03:28:08.117200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.869 [2024-04-25 03:28:08.130928] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:33.869 [2024-04-25 03:28:08.130959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:19212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.869 [2024-04-25 03:28:08.130975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.869 [2024-04-25 03:28:08.143911] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:33.869 [2024-04-25 03:28:08.143941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:14520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.869 [2024-04-25 03:28:08.143957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.869 [2024-04-25 03:28:08.157022] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:33.869 [2024-04-25 03:28:08.157067] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:4215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.869 [2024-04-25 03:28:08.157084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.869 [2024-04-25 03:28:08.170766] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:33.869 [2024-04-25 03:28:08.170798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:13170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.869 [2024-04-25 03:28:08.170815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.869 [2024-04-25 03:28:08.184186] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:33.869 [2024-04-25 03:28:08.184219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.869 [2024-04-25 03:28:08.184235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.869 [2024-04-25 03:28:08.196742] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:33.869 [2024-04-25 03:28:08.196772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:2136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.869 [2024-04-25 03:28:08.196789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.869 [2024-04-25 03:28:08.210566] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x24c14a0) 00:27:33.869 [2024-04-25 03:28:08.210611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:11159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.869 [2024-04-25 03:28:08.210637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.869 [2024-04-25 03:28:08.222198] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:33.869 [2024-04-25 03:28:08.222229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:8346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.869 [2024-04-25 03:28:08.222253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.869 [2024-04-25 03:28:08.235478] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:33.869 [2024-04-25 03:28:08.235508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:10741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.869 [2024-04-25 03:28:08.235524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.869 [2024-04-25 03:28:08.249339] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:33.870 [2024-04-25 03:28:08.249369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:4263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.870 [2024-04-25 03:28:08.249385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.870 [2024-04-25 03:28:08.261386] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:33.870 [2024-04-25 03:28:08.261415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:8824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.870 [2024-04-25 03:28:08.261432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.870 [2024-04-25 03:28:08.274678] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c14a0) 00:27:33.870 [2024-04-25 03:28:08.274708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:7005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.870 [2024-04-25 03:28:08.274725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.870 00:27:33.870 Latency(us) 00:27:33.870 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:33.870 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:33.870 nvme0n1 : 2.00 18458.67 72.10 0.00 0.00 6926.23 2924.85 19612.25 00:27:33.870 =================================================================================================================== 00:27:33.870 Total : 18458.67 72.10 0.00 0.00 6926.23 2924.85 19612.25 00:27:33.870 0 00:27:33.870 03:28:08 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:33.870 03:28:08 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:33.870 03:28:08 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:33.870 03:28:08 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:33.870 | .driver_specific 00:27:33.870 | .nvme_error 00:27:33.870 | .status_code 00:27:33.870 | 
.command_transient_transport_error' 00:27:34.129 03:28:08 -- host/digest.sh@71 -- # (( 144 > 0 )) 00:27:34.129 03:28:08 -- host/digest.sh@73 -- # killprocess 1615324 00:27:34.129 03:28:08 -- common/autotest_common.sh@936 -- # '[' -z 1615324 ']' 00:27:34.129 03:28:08 -- common/autotest_common.sh@940 -- # kill -0 1615324 00:27:34.129 03:28:08 -- common/autotest_common.sh@941 -- # uname 00:27:34.129 03:28:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:34.129 03:28:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1615324 00:27:34.129 03:28:08 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:27:34.129 03:28:08 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:27:34.129 03:28:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1615324' 00:27:34.129 killing process with pid 1615324 00:27:34.129 03:28:08 -- common/autotest_common.sh@955 -- # kill 1615324 00:27:34.129 Received shutdown signal, test time was about 2.000000 seconds 00:27:34.129 00:27:34.129 Latency(us) 00:27:34.129 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:34.129 =================================================================================================================== 00:27:34.129 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:34.129 03:28:08 -- common/autotest_common.sh@960 -- # wait 1615324 00:27:34.388 03:28:08 -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:27:34.388 03:28:08 -- host/digest.sh@54 -- # local rw bs qd 00:27:34.388 03:28:08 -- host/digest.sh@56 -- # rw=randread 00:27:34.388 03:28:08 -- host/digest.sh@56 -- # bs=131072 00:27:34.388 03:28:08 -- host/digest.sh@56 -- # qd=16 00:27:34.388 03:28:08 -- host/digest.sh@58 -- # bperfpid=1615734 00:27:34.388 03:28:08 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:27:34.388 03:28:08 -- 
host/digest.sh@60 -- # waitforlisten 1615734 /var/tmp/bperf.sock 00:27:34.388 03:28:08 -- common/autotest_common.sh@817 -- # '[' -z 1615734 ']' 00:27:34.388 03:28:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:34.388 03:28:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:34.388 03:28:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:34.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:34.388 03:28:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:34.388 03:28:08 -- common/autotest_common.sh@10 -- # set +x 00:27:34.648 [2024-04-25 03:28:08.913026] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:27:34.648 [2024-04-25 03:28:08.913100] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1615734 ] 00:27:34.648 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:34.648 Zero copy mechanism will not be used. 
00:27:34.648 EAL: No free 2048 kB hugepages reported on node 1 00:27:34.648 [2024-04-25 03:28:08.974938] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:34.648 [2024-04-25 03:28:09.088307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:35.585 03:28:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:35.585 03:28:09 -- common/autotest_common.sh@850 -- # return 0 00:27:35.585 03:28:09 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:35.585 03:28:09 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:35.844 03:28:10 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:35.844 03:28:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:35.844 03:28:10 -- common/autotest_common.sh@10 -- # set +x 00:27:35.844 03:28:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:35.844 03:28:10 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:35.844 03:28:10 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:36.102 nvme0n1 00:27:36.102 03:28:10 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:27:36.102 03:28:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:36.102 03:28:10 -- common/autotest_common.sh@10 -- # set +x 00:27:36.102 03:28:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:36.102 03:28:10 -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:36.102 03:28:10 -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:36.361 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:36.361 Zero copy mechanism will not be used. 00:27:36.361 Running I/O for 2 seconds... 00:27:36.361 [2024-04-25 03:28:10.687577] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:36.361 [2024-04-25 03:28:10.687671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.361 [2024-04-25 03:28:10.687696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:36.361 [2024-04-25 03:28:10.701902] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:36.361 [2024-04-25 03:28:10.701935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.361 [2024-04-25 03:28:10.701953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:36.361 [2024-04-25 03:28:10.715860] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:36.361 [2024-04-25 03:28:10.715892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.361 [2024-04-25 03:28:10.715909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:36.361 [2024-04-25 03:28:10.729753] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 
00:27:36.361 [2024-04-25 03:28:10.729784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.361 [2024-04-25 03:28:10.729801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.361 [2024-04-25 03:28:10.743961] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:36.361 [2024-04-25 03:28:10.744006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.361 [2024-04-25 03:28:10.744022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:36.361 [2024-04-25 03:28:10.757868] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:36.361 [2024-04-25 03:28:10.757899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.361 [2024-04-25 03:28:10.757916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:36.361 [2024-04-25 03:28:10.771881] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:36.361 [2024-04-25 03:28:10.771927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.361 [2024-04-25 03:28:10.771944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:36.361 [2024-04-25 03:28:10.785836] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:36.361 [2024-04-25 03:28:10.785866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.361 [2024-04-25 03:28:10.785882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.361 [2024-04-25 03:28:10.799697] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:36.361 [2024-04-25 03:28:10.799727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.361 [2024-04-25 03:28:10.799744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:36.361 [2024-04-25 03:28:10.813587] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:36.361 [2024-04-25 03:28:10.813644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.361 [2024-04-25 03:28:10.813680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:36.361 [2024-04-25 03:28:10.827606] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:36.361 [2024-04-25 03:28:10.827678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.361 [2024-04-25 03:28:10.827699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:27:36.361 [2024-04-25 03:28:10.841770] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:36.361 [2024-04-25 03:28:10.841817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.361 [2024-04-25 03:28:10.841836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.361 [2024-04-25 03:28:10.855936] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:36.361 [2024-04-25 03:28:10.855966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.361 [2024-04-25 03:28:10.855983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:36.621 [2024-04-25 03:28:10.869659] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:36.621 [2024-04-25 03:28:10.869706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.621 [2024-04-25 03:28:10.869724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:36.621 [2024-04-25 03:28:10.883655] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:36.621 [2024-04-25 03:28:10.883688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.621 [2024-04-25 03:28:10.883706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:36.621 [2024-04-25 03:28:10.897923] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:36.621 [2024-04-25 03:28:10.897955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.621 [2024-04-25 03:28:10.897972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.621 [2024-04-25 03:28:10.912369] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:36.621 [2024-04-25 03:28:10.912400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.621 [2024-04-25 03:28:10.912419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:36.621 [2024-04-25 03:28:10.926579] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:36.622 [2024-04-25 03:28:10.926639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.622 [2024-04-25 03:28:10.926675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:36.622 [2024-04-25 03:28:10.940779] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:36.622 [2024-04-25 03:28:10.940811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.622 [2024-04-25 
03:28:10.940829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:36.622 [2024-04-25 03:28:10.954720] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:36.622 [2024-04-25 03:28:10.954765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.622 [2024-04-25 03:28:10.954782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.622 [2024-04-25 03:28:10.969084] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:36.622 [2024-04-25 03:28:10.969118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.622 [2024-04-25 03:28:10.969138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:36.622 [2024-04-25 03:28:10.984621] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:36.622 [2024-04-25 03:28:10.984665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.622 [2024-04-25 03:28:10.984693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:36.622 [2024-04-25 03:28:10.999778] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:36.622 [2024-04-25 03:28:10.999822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.622 [2024-04-25 03:28:10.999839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:36.622 [2024-04-25 03:28:11.015212] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:36.622 [2024-04-25 03:28:11.015246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.622 [2024-04-25 03:28:11.015264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.622 [2024-04-25 03:28:11.030051] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:36.622 [2024-04-25 03:28:11.030085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.622 [2024-04-25 03:28:11.030103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:36.622 [2024-04-25 03:28:11.045087] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:36.622 [2024-04-25 03:28:11.045120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.622 [2024-04-25 03:28:11.045139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:36.622 [2024-04-25 03:28:11.060460] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:36.622 [2024-04-25 03:28:11.060494] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.622 [2024-04-25 03:28:11.060513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:36.622 [2024-04-25 03:28:11.076011] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:36.622 [2024-04-25 03:28:11.076045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.622 [2024-04-25 03:28:11.076064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.622 [2024-04-25 03:28:11.091126] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:36.622 [2024-04-25 03:28:11.091159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.622 [2024-04-25 03:28:11.091178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:36.622 [2024-04-25 03:28:11.106396] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:36.622 [2024-04-25 03:28:11.106430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.622 [2024-04-25 03:28:11.106449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:36.882 [2024-04-25 03:28:11.122102] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1c70fe0) 00:27:36.882 [2024-04-25 03:28:11.122137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.882 [2024-04-25 03:28:11.122157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:36.882 [2024-04-25 03:28:11.137657] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:36.882 [2024-04-25 03:28:11.137703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.882 [2024-04-25 03:28:11.137720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.882 [2024-04-25 03:28:11.152976] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:36.882 [2024-04-25 03:28:11.153010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.882 [2024-04-25 03:28:11.153030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:36.882 [2024-04-25 03:28:11.168337] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:36.882 [2024-04-25 03:28:11.168370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.882 [2024-04-25 03:28:11.168389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:36.882 [2024-04-25 03:28:11.183524] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:36.882 [2024-04-25 03:28:11.183557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.882 [2024-04-25 03:28:11.183582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:36.882 [2024-04-25 03:28:11.198605] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:36.882 [2024-04-25 03:28:11.198647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.882 [2024-04-25 03:28:11.198683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.882 [2024-04-25 03:28:11.213536] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:36.882 [2024-04-25 03:28:11.213569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.882 [2024-04-25 03:28:11.213587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:36.882 [2024-04-25 03:28:11.229094] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:36.882 [2024-04-25 03:28:11.229128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.882 [2024-04-25 03:28:11.229147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:27:36.883 [2024-04-25 03:28:11.244412] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:36.883 [2024-04-25 03:28:11.244445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.883 [2024-04-25 03:28:11.244464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:36.883 [2024-04-25 03:28:11.259521] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:36.883 [2024-04-25 03:28:11.259554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.883 [2024-04-25 03:28:11.259572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.883 [2024-04-25 03:28:11.274531] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:36.883 [2024-04-25 03:28:11.274564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.883 [2024-04-25 03:28:11.274583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:36.883 [2024-04-25 03:28:11.289641] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:36.883 [2024-04-25 03:28:11.289686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.883 [2024-04-25 03:28:11.289703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:36.883 [2024-04-25 03:28:11.304539] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:36.883 [2024-04-25 03:28:11.304572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.883 [2024-04-25 03:28:11.304591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:36.883 [2024-04-25 03:28:11.320090] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:36.883 [2024-04-25 03:28:11.320128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.883 [2024-04-25 03:28:11.320148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.883 [2024-04-25 03:28:11.335078] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:36.883 [2024-04-25 03:28:11.335123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.883 [2024-04-25 03:28:11.335139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:36.883 [2024-04-25 03:28:11.350146] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:36.883 [2024-04-25 03:28:11.350179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.883 [2024-04-25 
03:28:11.350198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:36.883 [2024-04-25 03:28:11.365382] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:36.883 [2024-04-25 03:28:11.365415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.883 [2024-04-25 03:28:11.365433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:36.883 [2024-04-25 03:28:11.380531] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:36.883 [2024-04-25 03:28:11.380564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.883 [2024-04-25 03:28:11.380583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.142 [2024-04-25 03:28:11.395734] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:37.142 [2024-04-25 03:28:11.395764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.142 [2024-04-25 03:28:11.395781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:37.142 [2024-04-25 03:28:11.410834] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:37.142 [2024-04-25 03:28:11.410864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14944 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.142 [2024-04-25 03:28:11.410880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:37.142 [2024-04-25 03:28:11.425117] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:37.142 [2024-04-25 03:28:11.425150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.142 [2024-04-25 03:28:11.425169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:37.142 [2024-04-25 03:28:11.439410] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:37.142 [2024-04-25 03:28:11.439443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.142 [2024-04-25 03:28:11.439462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.142 [2024-04-25 03:28:11.453548] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:37.142 [2024-04-25 03:28:11.453581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.142 [2024-04-25 03:28:11.453599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:37.142 [2024-04-25 03:28:11.467503] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:37.142 [2024-04-25 03:28:11.467536] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.142 [2024-04-25 03:28:11.467554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:37.142 [2024-04-25 03:28:11.481426] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:37.142 [2024-04-25 03:28:11.481458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.142 [2024-04-25 03:28:11.481476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:37.142 [2024-04-25 03:28:11.495909] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:37.142 [2024-04-25 03:28:11.495938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.142 [2024-04-25 03:28:11.495971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.142 [2024-04-25 03:28:11.510313] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:37.142 [2024-04-25 03:28:11.510356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.142 [2024-04-25 03:28:11.510374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:37.142 [2024-04-25 03:28:11.524266] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1c70fe0) 00:27:37.142 [2024-04-25 03:28:11.524297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.142 [2024-04-25 03:28:11.524316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:37.142 [2024-04-25 03:28:11.538494] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:37.142 [2024-04-25 03:28:11.538527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.142 [2024-04-25 03:28:11.538547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:37.142 [2024-04-25 03:28:11.552715] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:37.142 [2024-04-25 03:28:11.552744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.142 [2024-04-25 03:28:11.552760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.142 [2024-04-25 03:28:11.567087] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:37.142 [2024-04-25 03:28:11.567129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.142 [2024-04-25 03:28:11.567149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:37.143 [2024-04-25 03:28:11.581186] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:37.143 [2024-04-25 03:28:11.581219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.143 [2024-04-25 03:28:11.581238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:37.143 [2024-04-25 03:28:11.595563] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:37.143 [2024-04-25 03:28:11.595596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.143 [2024-04-25 03:28:11.595614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:37.143 [2024-04-25 03:28:11.609897] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:37.143 [2024-04-25 03:28:11.609928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.143 [2024-04-25 03:28:11.609960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.143 [2024-04-25 03:28:11.624114] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:37.143 [2024-04-25 03:28:11.624147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.143 [2024-04-25 03:28:11.624165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0021 p:0 m:0 dnr:0 00:27:37.143 [2024-04-25 03:28:11.638242] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:37.143 [2024-04-25 03:28:11.638274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.143 [2024-04-25 03:28:11.638292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:37.403 [2024-04-25 03:28:11.652537] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:37.403 [2024-04-25 03:28:11.652571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.403 [2024-04-25 03:28:11.652590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:37.403 [2024-04-25 03:28:11.667045] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:37.403 [2024-04-25 03:28:11.667093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.403 [2024-04-25 03:28:11.667111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.403 [2024-04-25 03:28:11.681219] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:37.403 [2024-04-25 03:28:11.681252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.403 [2024-04-25 03:28:11.681271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:37.403 [2024-04-25 03:28:11.695733] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:37.403 [2024-04-25 03:28:11.695763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.403 [2024-04-25 03:28:11.695779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:37.403 [2024-04-25 03:28:11.709666] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:37.403 [2024-04-25 03:28:11.709695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.403 [2024-04-25 03:28:11.709712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:37.403 [2024-04-25 03:28:11.723873] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:37.403 [2024-04-25 03:28:11.723904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.403 [2024-04-25 03:28:11.723921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.403 [2024-04-25 03:28:11.738073] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:37.403 [2024-04-25 03:28:11.738108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.403 [2024-04-25 
03:28:11.738127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:37.403 [2024-04-25 03:28:11.752452] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:37.403 [2024-04-25 03:28:11.752486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.403 [2024-04-25 03:28:11.752505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:37.403 [2024-04-25 03:28:11.766492] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:37.403 [2024-04-25 03:28:11.766525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.403 [2024-04-25 03:28:11.766544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:37.403 [2024-04-25 03:28:11.780829] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:37.403 [2024-04-25 03:28:11.780858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.403 [2024-04-25 03:28:11.780875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.403 [2024-04-25 03:28:11.795072] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:37.403 [2024-04-25 03:28:11.795106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12768 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.403 [2024-04-25 03:28:11.795125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:37.403 [2024-04-25 03:28:11.809393] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:37.403 [2024-04-25 03:28:11.809426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.403 [2024-04-25 03:28:11.809451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:37.403 [2024-04-25 03:28:11.823720] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:37.403 [2024-04-25 03:28:11.823749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.403 [2024-04-25 03:28:11.823766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:37.403 [2024-04-25 03:28:11.837828] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:37.403 [2024-04-25 03:28:11.837857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.404 [2024-04-25 03:28:11.837873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.404 [2024-04-25 03:28:11.852337] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:37.404 [2024-04-25 03:28:11.852370] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.404 [2024-04-25 03:28:11.852389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:37.404 [2024-04-25 03:28:11.866992] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:37.404 [2024-04-25 03:28:11.867025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.404 [2024-04-25 03:28:11.867044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:37.404 [2024-04-25 03:28:11.881280] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:37.404 [2024-04-25 03:28:11.881315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.404 [2024-04-25 03:28:11.881335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:37.404 [2024-04-25 03:28:11.896093] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:37.404 [2024-04-25 03:28:11.896127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.404 [2024-04-25 03:28:11.896146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.663 [2024-04-25 03:28:11.910513] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1c70fe0) 00:27:37.663 [2024-04-25 03:28:11.910548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.663 [2024-04-25 03:28:11.910568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:37.663 [2024-04-25 03:28:11.925099] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:37.663 [2024-04-25 03:28:11.925132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.663 [2024-04-25 03:28:11.925151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:37.663 [2024-04-25 03:28:11.939791] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:37.663 [2024-04-25 03:28:11.939841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.663 [2024-04-25 03:28:11.939859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:37.663 [2024-04-25 03:28:11.954204] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:37.663 [2024-04-25 03:28:11.954238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.663 [2024-04-25 03:28:11.954257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.663 [2024-04-25 03:28:11.968520] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:37.663 [2024-04-25 03:28:11.968553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.663 [2024-04-25 03:28:11.968571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:37.663 [2024-04-25 03:28:11.983017] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:37.663 [2024-04-25 03:28:11.983051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.663 [2024-04-25 03:28:11.983070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:37.663 [2024-04-25 03:28:11.997298] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:37.663 [2024-04-25 03:28:11.997336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.663 [2024-04-25 03:28:11.997356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:37.663 [2024-04-25 03:28:12.011735] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:37.663 [2024-04-25 03:28:12.011781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.663 [2024-04-25 03:28:12.011800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:27:37.663 [2024-04-25 03:28:12.026156] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:37.663 [2024-04-25 03:28:12.026190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.663 [2024-04-25 03:28:12.026213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:37.663 [2024-04-25 03:28:12.040555] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:37.663 [2024-04-25 03:28:12.040589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.664 [2024-04-25 03:28:12.040609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:37.664 [2024-04-25 03:28:12.054999] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:37.664 [2024-04-25 03:28:12.055033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.664 [2024-04-25 03:28:12.055059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:37.664 [2024-04-25 03:28:12.069155] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:37.664 [2024-04-25 03:28:12.069190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.664 [2024-04-25 03:28:12.069210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.664 [2024-04-25 03:28:12.083654] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:37.664 [2024-04-25 03:28:12.083700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.664 [2024-04-25 03:28:12.083722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:37.664 [2024-04-25 03:28:12.097584] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:37.664 [2024-04-25 03:28:12.097636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.664 [2024-04-25 03:28:12.097658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:37.664 [2024-04-25 03:28:12.111470] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:37.664 [2024-04-25 03:28:12.111502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.664 [2024-04-25 03:28:12.111521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:37.664 [2024-04-25 03:28:12.125719] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:37.664 [2024-04-25 03:28:12.125748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.664 [2024-04-25 
03:28:12.125767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.664 [2024-04-25 03:28:12.139975] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:37.664 [2024-04-25 03:28:12.140003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.664 [2024-04-25 03:28:12.140036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:37.664 [2024-04-25 03:28:12.154272] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:37.664 [2024-04-25 03:28:12.154305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.664 [2024-04-25 03:28:12.154326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:37.923 [2024-04-25 03:28:12.168556] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:37.923 [2024-04-25 03:28:12.168590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.923 [2024-04-25 03:28:12.168615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:37.923 [2024-04-25 03:28:12.183027] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:37.923 [2024-04-25 03:28:12.183065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.923 [2024-04-25 03:28:12.183084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.923 [2024-04-25 03:28:12.197241] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:37.923 [2024-04-25 03:28:12.197273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.923 [2024-04-25 03:28:12.197294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:37.923 [2024-04-25 03:28:12.211372] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:37.923 [2024-04-25 03:28:12.211404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.923 [2024-04-25 03:28:12.211423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:37.923 [2024-04-25 03:28:12.225616] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:37.923 [2024-04-25 03:28:12.225658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.923 [2024-04-25 03:28:12.225678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:37.923 [2024-04-25 03:28:12.240063] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:37.923 [2024-04-25 03:28:12.240096] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.923 [2024-04-25 03:28:12.240114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.923 [2024-04-25 03:28:12.254464] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:37.923 [2024-04-25 03:28:12.254496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.923 [2024-04-25 03:28:12.254515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:37.923 [2024-04-25 03:28:12.268541] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:37.923 [2024-04-25 03:28:12.268574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.923 [2024-04-25 03:28:12.268593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:37.923 [2024-04-25 03:28:12.282669] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:37.923 [2024-04-25 03:28:12.282714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.923 [2024-04-25 03:28:12.282734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:37.923 [2024-04-25 03:28:12.296524] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1c70fe0) 00:27:37.923 [2024-04-25 03:28:12.296556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.923 [2024-04-25 03:28:12.296574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.923 [2024-04-25 03:28:12.310669] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:37.923 [2024-04-25 03:28:12.310714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.923 [2024-04-25 03:28:12.310737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:37.923 [2024-04-25 03:28:12.324600] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:37.923 [2024-04-25 03:28:12.324638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.923 [2024-04-25 03:28:12.324658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:37.923 [2024-04-25 03:28:12.338922] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:37.923 [2024-04-25 03:28:12.338966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.923 [2024-04-25 03:28:12.338984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:37.923 [2024-04-25 03:28:12.353174] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:37.923 [2024-04-25 03:28:12.353208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.923 [2024-04-25 03:28:12.353234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.923 [2024-04-25 03:28:12.368555] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:37.923 [2024-04-25 03:28:12.368588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.923 [2024-04-25 03:28:12.368608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:37.923 [2024-04-25 03:28:12.383726] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:37.923 [2024-04-25 03:28:12.383768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.923 [2024-04-25 03:28:12.383788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:37.923 [2024-04-25 03:28:12.398741] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:37.923 [2024-04-25 03:28:12.398770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.923 [2024-04-25 03:28:12.398786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:27:37.923 [2024-04-25 03:28:12.413963] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:37.923 [2024-04-25 03:28:12.413991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.923 [2024-04-25 03:28:12.414010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.182 [2024-04-25 03:28:12.428887] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:38.182 [2024-04-25 03:28:12.428917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.182 [2024-04-25 03:28:12.428960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:38.182 [2024-04-25 03:28:12.443934] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:38.182 [2024-04-25 03:28:12.443963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.182 [2024-04-25 03:28:12.443996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:38.182 [2024-04-25 03:28:12.459190] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:38.182 [2024-04-25 03:28:12.459222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.182 [2024-04-25 03:28:12.459242] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:38.182 [2024-04-25 03:28:12.474178] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:38.182 [2024-04-25 03:28:12.474210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.182 [2024-04-25 03:28:12.474228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.182 [2024-04-25 03:28:12.489227] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:38.182 [2024-04-25 03:28:12.489259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.182 [2024-04-25 03:28:12.489278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:38.182 [2024-04-25 03:28:12.504194] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:38.182 [2024-04-25 03:28:12.504226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.183 [2024-04-25 03:28:12.504248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:38.183 [2024-04-25 03:28:12.519158] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:38.183 [2024-04-25 03:28:12.519190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.183 [2024-04-25 
03:28:12.519214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:38.183 [2024-04-25 03:28:12.534857] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:38.183 [2024-04-25 03:28:12.534885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.183 [2024-04-25 03:28:12.534902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.183 [2024-04-25 03:28:12.549810] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:38.183 [2024-04-25 03:28:12.549840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.183 [2024-04-25 03:28:12.549859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:38.183 [2024-04-25 03:28:12.564842] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:38.183 [2024-04-25 03:28:12.564877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.183 [2024-04-25 03:28:12.564897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:38.183 [2024-04-25 03:28:12.579890] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:38.183 [2024-04-25 03:28:12.579918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.183 [2024-04-25 03:28:12.579937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:38.183 [2024-04-25 03:28:12.594244] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:38.183 [2024-04-25 03:28:12.594276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.183 [2024-04-25 03:28:12.594296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.183 [2024-04-25 03:28:12.609239] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:38.183 [2024-04-25 03:28:12.609270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.183 [2024-04-25 03:28:12.609288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:38.183 [2024-04-25 03:28:12.624175] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:38.183 [2024-04-25 03:28:12.624208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.183 [2024-04-25 03:28:12.624227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:38.183 [2024-04-25 03:28:12.639114] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:38.183 [2024-04-25 03:28:12.639146] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.183 [2024-04-25 03:28:12.639165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:38.183 [2024-04-25 03:28:12.654075] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:38.183 [2024-04-25 03:28:12.654107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.183 [2024-04-25 03:28:12.654126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.183 [2024-04-25 03:28:12.669618] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c70fe0) 00:27:38.183 [2024-04-25 03:28:12.669658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.183 [2024-04-25 03:28:12.669694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:38.183 00:27:38.183 Latency(us) 00:27:38.183 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:38.183 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:27:38.183 nvme0n1 : 2.01 2124.60 265.57 0.00 0.00 7526.61 6796.33 15728.64 00:27:38.183 =================================================================================================================== 00:27:38.183 Total : 2124.60 265.57 0.00 0.00 7526.61 6796.33 15728.64 00:27:38.183 0 00:27:38.443 03:28:12 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:38.443 03:28:12 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat 
-b nvme0n1 00:27:38.443 03:28:12 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:38.443 03:28:12 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:38.443 | .driver_specific 00:27:38.443 | .nvme_error 00:27:38.443 | .status_code 00:27:38.443 | .command_transient_transport_error' 00:27:38.701 03:28:12 -- host/digest.sh@71 -- # (( 137 > 0 )) 00:27:38.701 03:28:12 -- host/digest.sh@73 -- # killprocess 1615734 00:27:38.701 03:28:12 -- common/autotest_common.sh@936 -- # '[' -z 1615734 ']' 00:27:38.701 03:28:12 -- common/autotest_common.sh@940 -- # kill -0 1615734 00:27:38.701 03:28:12 -- common/autotest_common.sh@941 -- # uname 00:27:38.701 03:28:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:38.701 03:28:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1615734 00:27:38.701 03:28:12 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:27:38.701 03:28:12 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:27:38.701 03:28:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1615734' 00:27:38.701 killing process with pid 1615734 00:27:38.701 03:28:12 -- common/autotest_common.sh@955 -- # kill 1615734 00:27:38.701 Received shutdown signal, test time was about 2.000000 seconds 00:27:38.701 00:27:38.701 Latency(us) 00:27:38.701 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:38.701 =================================================================================================================== 00:27:38.701 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:38.701 03:28:12 -- common/autotest_common.sh@960 -- # wait 1615734 00:27:38.959 03:28:13 -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:27:38.959 03:28:13 -- host/digest.sh@54 -- # local rw bs qd 00:27:38.959 03:28:13 -- host/digest.sh@56 -- # rw=randwrite 00:27:38.959 03:28:13 -- host/digest.sh@56 -- # 
bs=4096 00:27:38.959 03:28:13 -- host/digest.sh@56 -- # qd=128 00:27:38.959 03:28:13 -- host/digest.sh@58 -- # bperfpid=1616266 00:27:38.959 03:28:13 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:27:38.959 03:28:13 -- host/digest.sh@60 -- # waitforlisten 1616266 /var/tmp/bperf.sock 00:27:38.959 03:28:13 -- common/autotest_common.sh@817 -- # '[' -z 1616266 ']' 00:27:38.959 03:28:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:38.959 03:28:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:38.959 03:28:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:38.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:38.959 03:28:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:38.959 03:28:13 -- common/autotest_common.sh@10 -- # set +x 00:27:38.959 [2024-04-25 03:28:13.288858] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 
00:27:38.959 [2024-04-25 03:28:13.288956] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1616266 ] 00:27:38.959 EAL: No free 2048 kB hugepages reported on node 1 00:27:38.959 [2024-04-25 03:28:13.351951] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:39.217 [2024-04-25 03:28:13.464736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:39.217 03:28:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:39.217 03:28:13 -- common/autotest_common.sh@850 -- # return 0 00:27:39.217 03:28:13 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:39.217 03:28:13 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:39.474 03:28:13 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:39.474 03:28:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:39.474 03:28:13 -- common/autotest_common.sh@10 -- # set +x 00:27:39.474 03:28:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:39.474 03:28:13 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:39.474 03:28:13 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:39.733 nvme0n1 00:27:39.733 03:28:14 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:27:39.733 03:28:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:39.733 03:28:14 -- common/autotest_common.sh@10 -- # 
set +x 00:27:39.733 03:28:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:39.733 03:28:14 -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:39.733 03:28:14 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:39.991 Running I/O for 2 seconds... 00:27:39.991 [2024-04-25 03:28:14.309626] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190f7100 00:27:39.991 [2024-04-25 03:28:14.310709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:3986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.991 [2024-04-25 03:28:14.310749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:39.991 [2024-04-25 03:28:14.323200] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190e1f80 00:27:39.991 [2024-04-25 03:28:14.324261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:1301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.991 [2024-04-25 03:28:14.324295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:39.991 [2024-04-25 03:28:14.336338] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190e4140 00:27:39.991 [2024-04-25 03:28:14.337383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.991 [2024-04-25 03:28:14.337418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:39.991 [2024-04-25 03:28:14.349555] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x136a3f0) with pdu=0x2000190f4f40 00:27:39.991 [2024-04-25 03:28:14.350601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:18884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.991 [2024-04-25 03:28:14.350646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:39.991 [2024-04-25 03:28:14.362755] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190f7100 00:27:39.991 [2024-04-25 03:28:14.363803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.991 [2024-04-25 03:28:14.363832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:39.991 [2024-04-25 03:28:14.375675] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190f92c0 00:27:39.991 [2024-04-25 03:28:14.376743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:14635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.991 [2024-04-25 03:28:14.376772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:39.991 [2024-04-25 03:28:14.388789] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190eea00 00:27:39.991 [2024-04-25 03:28:14.389829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.991 [2024-04-25 03:28:14.389858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:39.991 [2024-04-25 03:28:14.401783] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190ec840 00:27:39.991 [2024-04-25 03:28:14.402883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.991 [2024-04-25 03:28:14.402923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:39.991 [2024-04-25 03:28:14.414667] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190ea680 00:27:39.991 [2024-04-25 03:28:14.415723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.991 [2024-04-25 03:28:14.415753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:39.991 [2024-04-25 03:28:14.427367] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190e1f80 00:27:39.991 [2024-04-25 03:28:14.428442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.991 [2024-04-25 03:28:14.428475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:39.991 [2024-04-25 03:28:14.440229] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190e4140 00:27:39.991 [2024-04-25 03:28:14.441283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.991 [2024-04-25 03:28:14.441315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:004e p:0 m:0 dnr:0 
00:27:39.991 [2024-04-25 03:28:14.453068] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190f4f40 00:27:39.991 [2024-04-25 03:28:14.454114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.991 [2024-04-25 03:28:14.454145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:39.991 [2024-04-25 03:28:14.465882] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190f7100 00:27:39.991 [2024-04-25 03:28:14.466985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.991 [2024-04-25 03:28:14.467017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:39.991 [2024-04-25 03:28:14.478577] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190f92c0 00:27:39.991 [2024-04-25 03:28:14.479650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:20503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.991 [2024-04-25 03:28:14.479694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:40.250 [2024-04-25 03:28:14.491416] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190eea00 00:27:40.250 [2024-04-25 03:28:14.492445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.250 [2024-04-25 03:28:14.492482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:5 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:40.250 [2024-04-25 03:28:14.504420] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190ec840 00:27:40.250 [2024-04-25 03:28:14.505471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:25017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.250 [2024-04-25 03:28:14.505502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:40.250 [2024-04-25 03:28:14.517383] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190ea680 00:27:40.250 [2024-04-25 03:28:14.518423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:2620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.250 [2024-04-25 03:28:14.518454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:40.250 [2024-04-25 03:28:14.530363] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190e1f80 00:27:40.250 [2024-04-25 03:28:14.531404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:4508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.250 [2024-04-25 03:28:14.531434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:40.250 [2024-04-25 03:28:14.543350] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190e4140 00:27:40.250 [2024-04-25 03:28:14.544400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.250 [2024-04-25 03:28:14.544431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:40.250 [2024-04-25 03:28:14.556340] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190f4f40 00:27:40.250 [2024-04-25 03:28:14.557387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.250 [2024-04-25 03:28:14.557418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:40.250 [2024-04-25 03:28:14.569372] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190f7100 00:27:40.250 [2024-04-25 03:28:14.570430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:23849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.250 [2024-04-25 03:28:14.570458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:40.250 [2024-04-25 03:28:14.582358] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190f92c0 00:27:40.250 [2024-04-25 03:28:14.583393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:18632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.250 [2024-04-25 03:28:14.583424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:40.250 [2024-04-25 03:28:14.595384] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190eea00 00:27:40.250 [2024-04-25 03:28:14.596411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:15569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.250 [2024-04-25 03:28:14.596442] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:40.250 [2024-04-25 03:28:14.608318] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190ec840 00:27:40.250 [2024-04-25 03:28:14.609363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:9947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.250 [2024-04-25 03:28:14.609393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:40.250 [2024-04-25 03:28:14.621326] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190ea680 00:27:40.250 [2024-04-25 03:28:14.622362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:25387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.250 [2024-04-25 03:28:14.622392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:40.250 [2024-04-25 03:28:14.634273] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190e1f80 00:27:40.250 [2024-04-25 03:28:14.635297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:5542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.250 [2024-04-25 03:28:14.635328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:40.250 [2024-04-25 03:28:14.647219] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190e4140 00:27:40.250 [2024-04-25 03:28:14.648239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:19590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.250 
[2024-04-25 03:28:14.648270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:40.250 [2024-04-25 03:28:14.660213] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190f4f40 00:27:40.250 [2024-04-25 03:28:14.661262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:7748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.250 [2024-04-25 03:28:14.661292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:40.250 [2024-04-25 03:28:14.673158] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190f7100 00:27:40.250 [2024-04-25 03:28:14.674191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:1796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.250 [2024-04-25 03:28:14.674222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:40.250 [2024-04-25 03:28:14.686102] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190f92c0 00:27:40.251 [2024-04-25 03:28:14.687145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:14584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.251 [2024-04-25 03:28:14.687176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:40.251 [2024-04-25 03:28:14.699132] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190eea00 00:27:40.251 [2024-04-25 03:28:14.700165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:16904 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.251 [2024-04-25 03:28:14.700196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:40.251 [2024-04-25 03:28:14.712141] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190ec840 00:27:40.251 [2024-04-25 03:28:14.713178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:11299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.251 [2024-04-25 03:28:14.713208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:40.251 [2024-04-25 03:28:14.725152] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190ea680 00:27:40.251 [2024-04-25 03:28:14.726203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:18600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.251 [2024-04-25 03:28:14.726233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:40.251 [2024-04-25 03:28:14.738177] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190e1f80 00:27:40.251 [2024-04-25 03:28:14.739227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:14893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.251 [2024-04-25 03:28:14.739258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:40.510 [2024-04-25 03:28:14.751241] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190e4140 00:27:40.510 [2024-04-25 03:28:14.752251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:58 nsid:1 lba:12063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.510 [2024-04-25 03:28:14.752282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:40.510 [2024-04-25 03:28:14.764210] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190f4f40 00:27:40.510 [2024-04-25 03:28:14.765234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:22142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.510 [2024-04-25 03:28:14.765265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:40.510 [2024-04-25 03:28:14.777224] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190f7100 00:27:40.510 [2024-04-25 03:28:14.778261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:20281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.510 [2024-04-25 03:28:14.778291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:40.510 [2024-04-25 03:28:14.790218] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190f92c0 00:27:40.510 [2024-04-25 03:28:14.791251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:14694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.510 [2024-04-25 03:28:14.791282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:40.510 [2024-04-25 03:28:14.803254] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190eea00 00:27:40.510 [2024-04-25 03:28:14.804295] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:22472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.510 [2024-04-25 03:28:14.804327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:40.510 [2024-04-25 03:28:14.816150] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190ec840 00:27:40.510 [2024-04-25 03:28:14.817197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:14094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.510 [2024-04-25 03:28:14.817229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:40.510 [2024-04-25 03:28:14.829190] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190ea680 00:27:40.510 [2024-04-25 03:28:14.830236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:6925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.510 [2024-04-25 03:28:14.830273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:40.510 [2024-04-25 03:28:14.841980] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190e1f80 00:27:40.510 [2024-04-25 03:28:14.843074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:13679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.510 [2024-04-25 03:28:14.843106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:40.510 [2024-04-25 03:28:14.854909] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with 
pdu=0x2000190e4140 00:27:40.510 [2024-04-25 03:28:14.856047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:1071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.510 [2024-04-25 03:28:14.856079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:40.510 [2024-04-25 03:28:14.867883] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190f4f40 00:27:40.511 [2024-04-25 03:28:14.869026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:23866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.511 [2024-04-25 03:28:14.869057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:40.511 [2024-04-25 03:28:14.880885] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190f7100 00:27:40.511 [2024-04-25 03:28:14.881963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:16294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.511 [2024-04-25 03:28:14.881995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:40.511 [2024-04-25 03:28:14.893828] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190f92c0 00:27:40.511 [2024-04-25 03:28:14.894899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:11722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.511 [2024-04-25 03:28:14.894945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:40.511 [2024-04-25 03:28:14.906833] tcp.c:2047:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190eea00 00:27:40.511 [2024-04-25 03:28:14.907996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:18094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.511 [2024-04-25 03:28:14.908028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:40.511 [2024-04-25 03:28:14.919895] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190ec840 00:27:40.511 [2024-04-25 03:28:14.921043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:15385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.511 [2024-04-25 03:28:14.921086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:40.511 [2024-04-25 03:28:14.932924] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190ea680 00:27:40.511 [2024-04-25 03:28:14.934007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:10679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.511 [2024-04-25 03:28:14.934039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:40.511 [2024-04-25 03:28:14.945943] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190e1f80 00:27:40.511 [2024-04-25 03:28:14.947025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:6311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.511 [2024-04-25 03:28:14.947057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:40.511 [2024-04-25 
03:28:14.959010] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190e4140 00:27:40.511 [2024-04-25 03:28:14.960090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:25042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.511 [2024-04-25 03:28:14.960121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:40.511 [2024-04-25 03:28:14.972014] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190f4f40 00:27:40.511 [2024-04-25 03:28:14.973081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:11378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.511 [2024-04-25 03:28:14.973112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:40.511 [2024-04-25 03:28:14.985064] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190f7100 00:27:40.511 [2024-04-25 03:28:14.986113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:20772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.511 [2024-04-25 03:28:14.986144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:40.511 [2024-04-25 03:28:14.998155] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190f92c0 00:27:40.511 [2024-04-25 03:28:14.999195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:12515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.511 [2024-04-25 03:28:14.999226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 
sqhd:004e p:0 m:0 dnr:0 00:27:40.772 [2024-04-25 03:28:15.011103] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190eea00 00:27:40.772 [2024-04-25 03:28:15.012124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:5717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.772 [2024-04-25 03:28:15.012156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:40.772 [2024-04-25 03:28:15.024091] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190ec840 00:27:40.772 [2024-04-25 03:28:15.025139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:17086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.772 [2024-04-25 03:28:15.025170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:40.772 [2024-04-25 03:28:15.037387] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190ea680 00:27:40.772 [2024-04-25 03:28:15.038436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:17154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.772 [2024-04-25 03:28:15.038468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:40.772 [2024-04-25 03:28:15.050405] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190e1f80 00:27:40.772 [2024-04-25 03:28:15.051434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:18449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.772 [2024-04-25 03:28:15.051466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:40.772 [2024-04-25 03:28:15.063387] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190e4140 00:27:40.772 [2024-04-25 03:28:15.064431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.772 [2024-04-25 03:28:15.064462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:40.772 [2024-04-25 03:28:15.076333] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190f4f40 00:27:40.772 [2024-04-25 03:28:15.077376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.772 [2024-04-25 03:28:15.077407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:40.772 [2024-04-25 03:28:15.089253] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190f7100 00:27:40.772 [2024-04-25 03:28:15.090288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.772 [2024-04-25 03:28:15.090319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:40.772 [2024-04-25 03:28:15.102314] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190f92c0 00:27:40.772 [2024-04-25 03:28:15.103366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.772 [2024-04-25 03:28:15.103397] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:40.772 [2024-04-25 03:28:15.115333] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190eea00 00:27:40.772 [2024-04-25 03:28:15.116378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.772 [2024-04-25 03:28:15.116409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:40.772 [2024-04-25 03:28:15.128320] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190ec840 00:27:40.772 [2024-04-25 03:28:15.129372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.772 [2024-04-25 03:28:15.129403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:40.772 [2024-04-25 03:28:15.141321] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190ea680 00:27:40.772 [2024-04-25 03:28:15.142345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.772 [2024-04-25 03:28:15.142376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:40.772 [2024-04-25 03:28:15.154338] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190e1f80 00:27:40.772 [2024-04-25 03:28:15.155374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.772 
[2024-04-25 03:28:15.155405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:40.772 [2024-04-25 03:28:15.167258] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190e4140 00:27:40.772 [2024-04-25 03:28:15.168311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.772 [2024-04-25 03:28:15.168348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:40.772 [2024-04-25 03:28:15.180238] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190f4f40 00:27:40.772 [2024-04-25 03:28:15.181257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:15179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.772 [2024-04-25 03:28:15.181288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:40.772 [2024-04-25 03:28:15.193132] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190f7100 00:27:40.772 [2024-04-25 03:28:15.194162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.772 [2024-04-25 03:28:15.194193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:40.772 [2024-04-25 03:28:15.206132] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190f92c0 00:27:40.772 [2024-04-25 03:28:15.207174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11554 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:27:40.772 [2024-04-25 03:28:15.207206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:40.772 [2024-04-25 03:28:15.219143] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190eea00 00:27:40.772 [2024-04-25 03:28:15.220192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:2394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.772 [2024-04-25 03:28:15.220223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:40.772 [2024-04-25 03:28:15.232086] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190ec840 00:27:40.772 [2024-04-25 03:28:15.233121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:8651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.772 [2024-04-25 03:28:15.233152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:40.772 [2024-04-25 03:28:15.245104] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190ea680 00:27:40.772 [2024-04-25 03:28:15.246157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:23016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.772 [2024-04-25 03:28:15.246188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:40.772 [2024-04-25 03:28:15.258132] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190e1f80 00:27:40.772 [2024-04-25 03:28:15.259177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:9 nsid:1 lba:875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.772 [2024-04-25 03:28:15.259208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:40.772 [2024-04-25 03:28:15.271034] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190e4140 00:27:41.031 [2024-04-25 03:28:15.272176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:4171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.031 [2024-04-25 03:28:15.272209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:41.031 [2024-04-25 03:28:15.284081] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190f4f40 00:27:41.031 [2024-04-25 03:28:15.285106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:2531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.031 [2024-04-25 03:28:15.285137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:41.031 [2024-04-25 03:28:15.296995] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190f7100 00:27:41.031 [2024-04-25 03:28:15.298165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:9640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.031 [2024-04-25 03:28:15.298196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:41.031 [2024-04-25 03:28:15.309918] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190f92c0 00:27:41.031 [2024-04-25 03:28:15.311134] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:15041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.031 [2024-04-25 03:28:15.311166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:41.031 [2024-04-25 03:28:15.322946] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190eea00 00:27:41.031 [2024-04-25 03:28:15.323999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:23279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.031 [2024-04-25 03:28:15.324030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:41.031 [2024-04-25 03:28:15.335925] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190ec840 00:27:41.031 [2024-04-25 03:28:15.336996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:1066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.031 [2024-04-25 03:28:15.337028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:41.031 [2024-04-25 03:28:15.348955] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190ea680 00:27:41.031 [2024-04-25 03:28:15.350021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:15165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.031 [2024-04-25 03:28:15.350065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:41.032 [2024-04-25 03:28:15.361858] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190e1f80 00:27:41.032 
[2024-04-25 03:28:15.362920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:18131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.032 [2024-04-25 03:28:15.362948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:41.032 [2024-04-25 03:28:15.374875] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190e4140 00:27:41.032 [2024-04-25 03:28:15.375846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:24917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.032 [2024-04-25 03:28:15.375874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:41.032 [2024-04-25 03:28:15.387720] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190f4f40 00:27:41.032 [2024-04-25 03:28:15.388792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:7913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.032 [2024-04-25 03:28:15.388822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:41.032 [2024-04-25 03:28:15.400662] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190f7100 00:27:41.032 [2024-04-25 03:28:15.401761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:6154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.032 [2024-04-25 03:28:15.401789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:41.032 [2024-04-25 03:28:15.413762] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x136a3f0) with pdu=0x2000190f92c0 00:27:41.032 [2024-04-25 03:28:15.414819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:10242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.032 [2024-04-25 03:28:15.414848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:41.032 [2024-04-25 03:28:15.426809] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190eea00 00:27:41.032 [2024-04-25 03:28:15.427911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:3510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.032 [2024-04-25 03:28:15.427955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:41.032 [2024-04-25 03:28:15.439503] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190ec840 00:27:41.032 [2024-04-25 03:28:15.440587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:10261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.032 [2024-04-25 03:28:15.440616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:41.032 [2024-04-25 03:28:15.452317] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190ea680 00:27:41.032 [2024-04-25 03:28:15.453376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:21155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.032 [2024-04-25 03:28:15.453404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:41.032 [2024-04-25 03:28:15.464963] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190e1f80 00:27:41.032 [2024-04-25 03:28:15.466039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:12638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.032 [2024-04-25 03:28:15.466068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:41.032 [2024-04-25 03:28:15.477684] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190e4140 00:27:41.032 [2024-04-25 03:28:15.478744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:6783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.032 [2024-04-25 03:28:15.478774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:41.032 [2024-04-25 03:28:15.490452] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190f4f40 00:27:41.032 [2024-04-25 03:28:15.491524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:3928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.032 [2024-04-25 03:28:15.491556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:41.032 [2024-04-25 03:28:15.503253] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190f7100 00:27:41.032 [2024-04-25 03:28:15.504303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:2250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.032 [2024-04-25 03:28:15.504338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:004e p:0 m:0 
dnr:0 00:27:41.032 [2024-04-25 03:28:15.516011] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190f92c0 00:27:41.032 [2024-04-25 03:28:15.517054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:9893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.032 [2024-04-25 03:28:15.517084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:41.032 [2024-04-25 03:28:15.528657] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190eea00 00:27:41.032 [2024-04-25 03:28:15.529731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:3147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.032 [2024-04-25 03:28:15.529759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:41.293 [2024-04-25 03:28:15.541485] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190ec840 00:27:41.293 [2024-04-25 03:28:15.542531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:4102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.293 [2024-04-25 03:28:15.542562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:41.293 [2024-04-25 03:28:15.554527] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190ea680 00:27:41.293 [2024-04-25 03:28:15.555600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:22901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.293 [2024-04-25 03:28:15.555643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:64 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:41.293 [2024-04-25 03:28:15.567571] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190e1f80 00:27:41.293 [2024-04-25 03:28:15.568651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:19015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.293 [2024-04-25 03:28:15.568696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:41.293 [2024-04-25 03:28:15.580577] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190e4140 00:27:41.293 [2024-04-25 03:28:15.581652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:13284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.293 [2024-04-25 03:28:15.581700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:41.293 [2024-04-25 03:28:15.593587] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190f4f40 00:27:41.293 [2024-04-25 03:28:15.594680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:23339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.293 [2024-04-25 03:28:15.594710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:41.293 [2024-04-25 03:28:15.606598] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190f7100 00:27:41.293 [2024-04-25 03:28:15.607655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:15046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.293 [2024-04-25 03:28:15.607683] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:41.293 [2024-04-25 03:28:15.619535] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190f92c0 00:27:41.293 [2024-04-25 03:28:15.620597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:14179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.293 [2024-04-25 03:28:15.620636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:41.293 [2024-04-25 03:28:15.632590] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190eea00 00:27:41.293 [2024-04-25 03:28:15.633616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:1096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.293 [2024-04-25 03:28:15.633654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:41.293 [2024-04-25 03:28:15.645491] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190ec840 00:27:41.293 [2024-04-25 03:28:15.646549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:21883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.293 [2024-04-25 03:28:15.646580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:41.293 [2024-04-25 03:28:15.658454] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190ea680 00:27:41.293 [2024-04-25 03:28:15.659502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:7184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.293 [2024-04-25 03:28:15.659533] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:41.293 [2024-04-25 03:28:15.671437] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190e1f80 00:27:41.293 [2024-04-25 03:28:15.672476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:13258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.293 [2024-04-25 03:28:15.672508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:41.293 [2024-04-25 03:28:15.684393] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190e4140 00:27:41.293 [2024-04-25 03:28:15.685450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:14497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.293 [2024-04-25 03:28:15.685482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:41.293 [2024-04-25 03:28:15.697382] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190f4f40 00:27:41.293 [2024-04-25 03:28:15.698431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:10592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.293 [2024-04-25 03:28:15.698462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:41.293 [2024-04-25 03:28:15.710329] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190f7100 00:27:41.293 [2024-04-25 03:28:15.711362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:16343 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:27:41.293 [2024-04-25 03:28:15.711393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:41.293 [2024-04-25 03:28:15.723283] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190f92c0 00:27:41.293 [2024-04-25 03:28:15.724313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.293 [2024-04-25 03:28:15.724343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:41.293 [2024-04-25 03:28:15.736280] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190eea00 00:27:41.293 [2024-04-25 03:28:15.737317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:21068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.293 [2024-04-25 03:28:15.737347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:41.293 [2024-04-25 03:28:15.749219] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190ec840 00:27:41.293 [2024-04-25 03:28:15.750269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:12528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.293 [2024-04-25 03:28:15.750300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:41.293 [2024-04-25 03:28:15.762221] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190ea680 00:27:41.294 [2024-04-25 03:28:15.763253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 
nsid:1 lba:19919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.294 [2024-04-25 03:28:15.763284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:41.294 [2024-04-25 03:28:15.775108] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190e1f80 00:27:41.294 [2024-04-25 03:28:15.776164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:7971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.294 [2024-04-25 03:28:15.776194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:41.294 [2024-04-25 03:28:15.788040] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190e4140 00:27:41.294 [2024-04-25 03:28:15.789117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.294 [2024-04-25 03:28:15.789148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:41.555 [2024-04-25 03:28:15.801069] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190f4f40 00:27:41.555 [2024-04-25 03:28:15.802151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.555 [2024-04-25 03:28:15.802182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:41.555 [2024-04-25 03:28:15.814053] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190f7100 00:27:41.555 [2024-04-25 03:28:15.815099] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.555 [2024-04-25 03:28:15.815130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:41.555 [2024-04-25 03:28:15.826869] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190f92c0 00:27:41.555 [2024-04-25 03:28:15.827946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.555 [2024-04-25 03:28:15.827980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:41.555 [2024-04-25 03:28:15.839939] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190eea00 00:27:41.555 [2024-04-25 03:28:15.840990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.555 [2024-04-25 03:28:15.841023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:41.555 [2024-04-25 03:28:15.853000] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190ec840 00:27:41.555 [2024-04-25 03:28:15.854070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.555 [2024-04-25 03:28:15.854102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:41.555 [2024-04-25 03:28:15.866009] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190ea680 00:27:41.555 
[2024-04-25 03:28:15.867125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.555 [2024-04-25 03:28:15.867157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:41.555 [2024-04-25 03:28:15.878981] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190e1f80 00:27:41.555 [2024-04-25 03:28:15.880077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:18575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.555 [2024-04-25 03:28:15.880109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:41.555 [2024-04-25 03:28:15.892036] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190e4140 00:27:41.555 [2024-04-25 03:28:15.893104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.555 [2024-04-25 03:28:15.893135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:41.555 [2024-04-25 03:28:15.905110] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190f4f40 00:27:41.555 [2024-04-25 03:28:15.906173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:16349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.555 [2024-04-25 03:28:15.906204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:41.555 [2024-04-25 03:28:15.918032] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) 
with pdu=0x2000190f7100 00:27:41.555 [2024-04-25 03:28:15.919094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.555 [2024-04-25 03:28:15.919125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:41.555 [2024-04-25 03:28:15.931067] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190f92c0 00:27:41.555 [2024-04-25 03:28:15.932116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:9888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.555 [2024-04-25 03:28:15.932147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:41.555 [2024-04-25 03:28:15.944026] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190eea00 00:27:41.555 [2024-04-25 03:28:15.945084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.555 [2024-04-25 03:28:15.945115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:41.555 [2024-04-25 03:28:15.957037] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190ec840 00:27:41.555 [2024-04-25 03:28:15.958162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.555 [2024-04-25 03:28:15.958200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:41.555 [2024-04-25 03:28:15.970039] tcp.c:2047:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190ea680 00:27:41.555 [2024-04-25 03:28:15.971102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:5932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.555 [2024-04-25 03:28:15.971133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:41.555 [2024-04-25 03:28:15.983006] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190e1f80 00:27:41.555 [2024-04-25 03:28:15.984067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:19272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.555 [2024-04-25 03:28:15.984098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:41.555 [2024-04-25 03:28:15.995971] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190e4140 00:27:41.555 [2024-04-25 03:28:15.997038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:13056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.555 [2024-04-25 03:28:15.997070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:41.555 [2024-04-25 03:28:16.009001] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190f4f40 00:27:41.555 [2024-04-25 03:28:16.010116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.555 [2024-04-25 03:28:16.010147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:41.555 [2024-04-25 03:28:16.022069] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190f7100 00:27:41.555 [2024-04-25 03:28:16.023114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:2347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.555 [2024-04-25 03:28:16.023145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:41.556 [2024-04-25 03:28:16.035079] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190f92c0 00:27:41.556 [2024-04-25 03:28:16.036128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:6951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.556 [2024-04-25 03:28:16.036160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:41.556 [2024-04-25 03:28:16.048307] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190eea00 00:27:41.556 [2024-04-25 03:28:16.049363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:5008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.556 [2024-04-25 03:28:16.049394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:41.817 [2024-04-25 03:28:16.061284] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190ec840 00:27:41.817 [2024-04-25 03:28:16.062331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:13599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.817 [2024-04-25 03:28:16.062363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004e p:0 m:0 dnr:0 
00:27:41.817 [2024-04-25 03:28:16.074265] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190ea680 00:27:41.817 [2024-04-25 03:28:16.075325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:15489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.817 [2024-04-25 03:28:16.075355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:41.817 [2024-04-25 03:28:16.087335] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190e1f80 00:27:41.817 [2024-04-25 03:28:16.088369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:17804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.817 [2024-04-25 03:28:16.088399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:41.817 [2024-04-25 03:28:16.100344] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190e4140 00:27:41.817 [2024-04-25 03:28:16.101421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:1076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.817 [2024-04-25 03:28:16.101453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:41.817 [2024-04-25 03:28:16.113406] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190f4f40 00:27:41.817 [2024-04-25 03:28:16.114425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:12976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.817 [2024-04-25 03:28:16.114456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:55 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:41.817 [2024-04-25 03:28:16.126368] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190f7100 00:27:41.817 [2024-04-25 03:28:16.127433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:7472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.817 [2024-04-25 03:28:16.127461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:41.817 [2024-04-25 03:28:16.139255] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190f92c0 00:27:41.817 [2024-04-25 03:28:16.140308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:21111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.817 [2024-04-25 03:28:16.140338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:41.817 [2024-04-25 03:28:16.152235] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190eea00 00:27:41.817 [2024-04-25 03:28:16.153305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:5059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.817 [2024-04-25 03:28:16.153336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:41.817 [2024-04-25 03:28:16.165235] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190ec840 00:27:41.817 [2024-04-25 03:28:16.166277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:4456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.817 [2024-04-25 03:28:16.166319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:41.817 [2024-04-25 03:28:16.178184] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190ea680 00:27:41.817 [2024-04-25 03:28:16.179235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:6251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.817 [2024-04-25 03:28:16.179265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:41.817 [2024-04-25 03:28:16.191165] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190e1f80 00:27:41.817 [2024-04-25 03:28:16.192241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:15510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.817 [2024-04-25 03:28:16.192272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:41.817 [2024-04-25 03:28:16.204138] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190e4140 00:27:41.817 [2024-04-25 03:28:16.205190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:4427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.817 [2024-04-25 03:28:16.205221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:41.817 [2024-04-25 03:28:16.217113] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190f4f40 00:27:41.817 [2024-04-25 03:28:16.218179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:1679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.817 [2024-04-25 03:28:16.218210] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:41.817 [2024-04-25 03:28:16.230128] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190f7100 00:27:41.817 [2024-04-25 03:28:16.231160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.817 [2024-04-25 03:28:16.231191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:41.817 [2024-04-25 03:28:16.243075] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190f92c0 00:27:41.817 [2024-04-25 03:28:16.244119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:22346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.817 [2024-04-25 03:28:16.244150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:41.817 [2024-04-25 03:28:16.256099] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190eea00 00:27:41.817 [2024-04-25 03:28:16.257131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:16903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.817 [2024-04-25 03:28:16.257163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:41.817 [2024-04-25 03:28:16.269099] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190ec840 00:27:41.817 [2024-04-25 03:28:16.270143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:24348 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:27:41.817 [2024-04-25 03:28:16.270175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:41.817 [2024-04-25 03:28:16.282111] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190ea680 00:27:41.817 [2024-04-25 03:28:16.283149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:4423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.818 [2024-04-25 03:28:16.283180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:41.818 [2024-04-25 03:28:16.295062] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a3f0) with pdu=0x2000190e1f80 00:27:41.818 [2024-04-25 03:28:16.296124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:10402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.818 [2024-04-25 03:28:16.296165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:41.818 00:27:41.818 Latency(us) 00:27:41.818 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:41.818 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:41.818 nvme0n1 : 2.01 19648.40 76.75 0.00 0.00 6503.81 3373.89 13883.92 00:27:41.818 =================================================================================================================== 00:27:41.818 Total : 19648.40 76.75 0.00 0.00 6503.81 3373.89 13883.92 00:27:41.818 0 00:27:42.076 03:28:16 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:42.076 03:28:16 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:42.076 03:28:16 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:42.076 | .driver_specific 00:27:42.076 | 
.nvme_error 00:27:42.076 | .status_code 00:27:42.076 | .command_transient_transport_error' 00:27:42.076 03:28:16 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:42.076 03:28:16 -- host/digest.sh@71 -- # (( 154 > 0 )) 00:27:42.076 03:28:16 -- host/digest.sh@73 -- # killprocess 1616266 00:27:42.076 03:28:16 -- common/autotest_common.sh@936 -- # '[' -z 1616266 ']' 00:27:42.076 03:28:16 -- common/autotest_common.sh@940 -- # kill -0 1616266 00:27:42.076 03:28:16 -- common/autotest_common.sh@941 -- # uname 00:27:42.076 03:28:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:42.076 03:28:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1616266 00:27:42.336 03:28:16 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:27:42.336 03:28:16 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:27:42.336 03:28:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1616266' 00:27:42.336 killing process with pid 1616266 00:27:42.336 03:28:16 -- common/autotest_common.sh@955 -- # kill 1616266 00:27:42.336 Received shutdown signal, test time was about 2.000000 seconds 00:27:42.336 00:27:42.336 Latency(us) 00:27:42.336 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:42.336 =================================================================================================================== 00:27:42.336 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:42.336 03:28:16 -- common/autotest_common.sh@960 -- # wait 1616266 00:27:42.597 03:28:16 -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:27:42.597 03:28:16 -- host/digest.sh@54 -- # local rw bs qd 00:27:42.597 03:28:16 -- host/digest.sh@56 -- # rw=randwrite 00:27:42.597 03:28:16 -- host/digest.sh@56 -- # bs=131072 00:27:42.597 03:28:16 -- host/digest.sh@56 -- # qd=16 00:27:42.597 03:28:16 -- host/digest.sh@58 -- # 
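The trace above counts digest failures by fetching bdev iostat over the bperf RPC socket and piping it through jq (`.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error`), then asserting the count is non-zero. A minimal Python sketch of the same extraction, using a hypothetical sample payload shaped like `bdev_get_iostat -b nvme0n1` output (the field names follow the jq filter in the trace; the count value here is illustrative):

```python
import json

# Hypothetical iostat payload mimicking `rpc.py bdev_get_iostat -b nvme0n1`;
# only the fields walked by the jq filter above are included.
sample = json.loads("""
{
  "bdevs": [
    {
      "name": "nvme0n1",
      "driver_specific": {
        "nvme_error": {
          "status_code": {
            "command_transient_transport_error": 154
          }
        }
      }
    }
  ]
}
""")

# Equivalent of:
#   jq -r '.bdevs[0] | .driver_specific | .nvme_error
#          | .status_code | .command_transient_transport_error'
count = (sample["bdevs"][0]["driver_specific"]
               ["nvme_error"]["status_code"]
               ["command_transient_transport_error"])

# The test then does the shell equivalent of (( count > 0 )):
print(count > 0)
```

The test passes as long as at least one write completed with TRANSIENT TRANSPORT ERROR, which is exactly what the injected digest corruption is meant to produce.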
bperfpid=1616677 00:27:42.597 03:28:16 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:27:42.597 03:28:16 -- host/digest.sh@60 -- # waitforlisten 1616677 /var/tmp/bperf.sock 00:27:42.597 03:28:16 -- common/autotest_common.sh@817 -- # '[' -z 1616677 ']' 00:27:42.597 03:28:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:42.597 03:28:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:42.597 03:28:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:42.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:42.597 03:28:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:42.597 03:28:16 -- common/autotest_common.sh@10 -- # set +x 00:27:42.598 [2024-04-25 03:28:16.901157] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:27:42.598 [2024-04-25 03:28:16.901249] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1616677 ] 00:27:42.598 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:42.598 Zero copy mechanism will not be used. 
00:27:42.598 EAL: No free 2048 kB hugepages reported on node 1 00:27:42.598 [2024-04-25 03:28:16.963145] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:42.598 [2024-04-25 03:28:17.076937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:42.856 03:28:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:42.856 03:28:17 -- common/autotest_common.sh@850 -- # return 0 00:27:42.856 03:28:17 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:42.856 03:28:17 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:43.114 03:28:17 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:43.114 03:28:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:43.114 03:28:17 -- common/autotest_common.sh@10 -- # set +x 00:27:43.114 03:28:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:43.114 03:28:17 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:43.114 03:28:17 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:43.372 nvme0n1 00:27:43.372 03:28:17 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:27:43.372 03:28:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:43.372 03:28:17 -- common/autotest_common.sh@10 -- # set +x 00:27:43.372 03:28:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:43.372 03:28:17 -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:43.372 03:28:17 -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:43.631 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:43.631 Zero copy mechanism will not be used. 00:27:43.631 Running I/O for 2 seconds... 00:27:43.631 [2024-04-25 03:28:17.988815] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a730) with pdu=0x2000190fef90 00:27:43.631 [2024-04-25 03:28:17.989246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.631 [2024-04-25 03:28:17.989302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:43.631 [2024-04-25 03:28:18.014882] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a730) with pdu=0x2000190fef90 00:27:43.631 [2024-04-25 03:28:18.015371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.631 [2024-04-25 03:28:18.015402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:43.631 [2024-04-25 03:28:18.039941] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a730) with pdu=0x2000190fef90 00:27:43.631 [2024-04-25 03:28:18.040504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.631 [2024-04-25 03:28:18.040534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:43.631 [2024-04-25 03:28:18.065822] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a730) with pdu=0x2000190fef90 00:27:43.631 
[2024-04-25 03:28:18.066304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.631 [2024-04-25 03:28:18.066333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.631 [2024-04-25 03:28:18.092867] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a730) with pdu=0x2000190fef90 00:27:43.631 [2024-04-25 03:28:18.093368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.631 [2024-04-25 03:28:18.093395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:43.631 [2024-04-25 03:28:18.119699] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a730) with pdu=0x2000190fef90 00:27:43.631 [2024-04-25 03:28:18.120357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.631 [2024-04-25 03:28:18.120385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:43.889 [2024-04-25 03:28:18.146245] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a730) with pdu=0x2000190fef90 00:27:43.889 [2024-04-25 03:28:18.146834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.889 [2024-04-25 03:28:18.146863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:43.889 [2024-04-25 03:28:18.171951] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x136a730) with pdu=0x2000190fef90 00:27:43.889 [2024-04-25 03:28:18.172519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.889 [2024-04-25 03:28:18.172548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.889 [2024-04-25 03:28:18.197708] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a730) with pdu=0x2000190fef90 00:27:43.889 [2024-04-25 03:28:18.198416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.890 [2024-04-25 03:28:18.198445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:43.890 [2024-04-25 03:28:18.220456] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a730) with pdu=0x2000190fef90 00:27:43.890 [2024-04-25 03:28:18.220955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.890 [2024-04-25 03:28:18.220984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:43.890 [2024-04-25 03:28:18.247081] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a730) with pdu=0x2000190fef90 00:27:43.890 [2024-04-25 03:28:18.247750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.890 [2024-04-25 03:28:18.247779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:43.890 [2024-04-25 03:28:18.272923] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a730) with pdu=0x2000190fef90 00:27:43.890 [2024-04-25 03:28:18.273568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.890 [2024-04-25 03:28:18.273596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.890 [2024-04-25 03:28:18.298229] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a730) with pdu=0x2000190fef90 00:27:43.890 [2024-04-25 03:28:18.298954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.890 [2024-04-25 03:28:18.298981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:43.890 [2024-04-25 03:28:18.320726] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a730) with pdu=0x2000190fef90 00:27:43.890 [2024-04-25 03:28:18.321362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.890 [2024-04-25 03:28:18.321389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:43.890 [2024-04-25 03:28:18.345280] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a730) with pdu=0x2000190fef90 00:27:43.890 [2024-04-25 03:28:18.345781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.890 [2024-04-25 03:28:18.345824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:27:43.890 [2024-04-25 03:28:18.366286] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a730) with pdu=0x2000190fef90 00:27:43.890 [2024-04-25 03:28:18.366712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.890 [2024-04-25 03:28:18.366741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.890 [2024-04-25 03:28:18.388532] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a730) with pdu=0x2000190fef90 00:27:43.890 [2024-04-25 03:28:18.389022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.890 [2024-04-25 03:28:18.389051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:44.148 [2024-04-25 03:28:18.412496] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a730) with pdu=0x2000190fef90 00:27:44.148 [2024-04-25 03:28:18.413171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.148 [2024-04-25 03:28:18.413200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:44.148 [2024-04-25 03:28:18.439426] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a730) with pdu=0x2000190fef90 00:27:44.148 [2024-04-25 03:28:18.440092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.148 [2024-04-25 03:28:18.440136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:44.148 [2024-04-25 03:28:18.466766] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a730) with pdu=0x2000190fef90 00:27:44.148 [2024-04-25 03:28:18.467304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.148 [2024-04-25 03:28:18.467332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.148 [2024-04-25 03:28:18.491589] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a730) with pdu=0x2000190fef90 00:27:44.148 [2024-04-25 03:28:18.491988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.148 [2024-04-25 03:28:18.492017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:44.148 [2024-04-25 03:28:18.517916] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a730) with pdu=0x2000190fef90 00:27:44.148 [2024-04-25 03:28:18.518307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.148 [2024-04-25 03:28:18.518340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:44.148 [2024-04-25 03:28:18.542651] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a730) with pdu=0x2000190fef90 00:27:44.148 [2024-04-25 03:28:18.543046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.148 [2024-04-25 03:28:18.543074] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:44.148 [2024-04-25 03:28:18.567837] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a730) with pdu=0x2000190fef90 00:27:44.148 [2024-04-25 03:28:18.568463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.148 [2024-04-25 03:28:18.568490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.148 [2024-04-25 03:28:18.593055] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a730) with pdu=0x2000190fef90 00:27:44.148 [2024-04-25 03:28:18.593536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.148 [2024-04-25 03:28:18.593564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:44.148 [2024-04-25 03:28:18.617847] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a730) with pdu=0x2000190fef90 00:27:44.148 [2024-04-25 03:28:18.618328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.148 [2024-04-25 03:28:18.618356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:44.148 [2024-04-25 03:28:18.643090] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a730) with pdu=0x2000190fef90 00:27:44.148 [2024-04-25 03:28:18.643697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:44.148 [2024-04-25 03:28:18.643739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:44.407 [2024-04-25 03:28:18.665280] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a730) with pdu=0x2000190fef90 00:27:44.407 [2024-04-25 03:28:18.665729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.407 [2024-04-25 03:28:18.665771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.407 [2024-04-25 03:28:18.690626] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a730) with pdu=0x2000190fef90 00:27:44.407 [2024-04-25 03:28:18.691273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.407 [2024-04-25 03:28:18.691301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:44.407 [2024-04-25 03:28:18.713725] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a730) with pdu=0x2000190fef90 00:27:44.407 [2024-04-25 03:28:18.714438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.407 [2024-04-25 03:28:18.714465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:44.407 [2024-04-25 03:28:18.739820] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a730) with pdu=0x2000190fef90 00:27:44.407 [2024-04-25 03:28:18.740426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.407 [2024-04-25 03:28:18.740454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:44.407 [2024-04-25 03:28:18.765681] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a730) with pdu=0x2000190fef90 00:27:44.407 [2024-04-25 03:28:18.766194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.407 [2024-04-25 03:28:18.766222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.407 [2024-04-25 03:28:18.788697] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a730) with pdu=0x2000190fef90 00:27:44.407 [2024-04-25 03:28:18.789097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.407 [2024-04-25 03:28:18.789139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:44.407 [2024-04-25 03:28:18.812896] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a730) with pdu=0x2000190fef90 00:27:44.407 [2024-04-25 03:28:18.813535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.407 [2024-04-25 03:28:18.813563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:44.407 [2024-04-25 03:28:18.837824] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a730) with pdu=0x2000190fef90 00:27:44.407 [2024-04-25 03:28:18.838405] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.407 [2024-04-25 03:28:18.838434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:44.407 [2024-04-25 03:28:18.861717] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a730) with pdu=0x2000190fef90 00:27:44.407 [2024-04-25 03:28:18.862214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.407 [2024-04-25 03:28:18.862242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.407 [2024-04-25 03:28:18.887836] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a730) with pdu=0x2000190fef90 00:27:44.407 [2024-04-25 03:28:18.888504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.407 [2024-04-25 03:28:18.888532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:44.668 [2024-04-25 03:28:18.912400] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a730) with pdu=0x2000190fef90 00:27:44.668 [2024-04-25 03:28:18.913032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.668 [2024-04-25 03:28:18.913062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:44.668 [2024-04-25 03:28:18.939239] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a730) with pdu=0x2000190fef90 
00:27:44.668 [2024-04-25 03:28:18.939656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.668 [2024-04-25 03:28:18.939703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:44.668 [2024-04-25 03:28:18.964766] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a730) with pdu=0x2000190fef90 00:27:44.668 [2024-04-25 03:28:18.965519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.668 [2024-04-25 03:28:18.965548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.668 [2024-04-25 03:28:18.988244] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a730) with pdu=0x2000190fef90 00:27:44.668 [2024-04-25 03:28:18.988811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.668 [2024-04-25 03:28:18.988856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:44.668 [2024-04-25 03:28:19.013987] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a730) with pdu=0x2000190fef90 00:27:44.668 [2024-04-25 03:28:19.014467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.668 [2024-04-25 03:28:19.014496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:44.668 [2024-04-25 03:28:19.038836] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x136a730) with pdu=0x2000190fef90 00:27:44.668 [2024-04-25 03:28:19.039429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.668 [2024-04-25 03:28:19.039458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:44.668 [2024-04-25 03:28:19.064049] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a730) with pdu=0x2000190fef90 00:27:44.668 [2024-04-25 03:28:19.064753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.668 [2024-04-25 03:28:19.064782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.668 [2024-04-25 03:28:19.089925] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a730) with pdu=0x2000190fef90 00:27:44.668 [2024-04-25 03:28:19.090411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.668 [2024-04-25 03:28:19.090440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:44.668 [2024-04-25 03:28:19.115547] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a730) with pdu=0x2000190fef90 00:27:44.668 [2024-04-25 03:28:19.116194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.668 [2024-04-25 03:28:19.116223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:44.668 [2024-04-25 
03:28:19.141277] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a730) with pdu=0x2000190fef90 00:27:44.668 [2024-04-25 03:28:19.141883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.668 [2024-04-25 03:28:19.141912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:44.928 [2024-04-25 03:28:19.168146] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a730) with pdu=0x2000190fef90 00:27:44.928 [2024-04-25 03:28:19.168826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.928 [2024-04-25 03:28:19.168870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.928 [2024-04-25 03:28:19.192203] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a730) with pdu=0x2000190fef90 00:27:44.928 [2024-04-25 03:28:19.192715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.928 [2024-04-25 03:28:19.192757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:44.928 [2024-04-25 03:28:19.215908] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a730) with pdu=0x2000190fef90 00:27:44.928 [2024-04-25 03:28:19.216487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.928 [2024-04-25 03:28:19.216515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:44.928 [2024-04-25 03:28:19.238728] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a730) with pdu=0x2000190fef90 00:27:44.928 [2024-04-25 03:28:19.239138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.928 [2024-04-25 03:28:19.239166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:44.928 [2024-04-25 03:28:19.263455] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a730) with pdu=0x2000190fef90 00:27:44.928 [2024-04-25 03:28:19.264033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.928 [2024-04-25 03:28:19.264061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.928 [2024-04-25 03:28:19.285962] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a730) with pdu=0x2000190fef90 00:27:44.928 [2024-04-25 03:28:19.286692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.928 [2024-04-25 03:28:19.286721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:44.928 [2024-04-25 03:28:19.309959] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a730) with pdu=0x2000190fef90 00:27:44.928 [2024-04-25 03:28:19.310537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.928 [2024-04-25 03:28:19.310567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:44.928 [2024-04-25 03:28:19.334787] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a730) with pdu=0x2000190fef90 00:27:44.928 [2024-04-25 03:28:19.335510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.928 [2024-04-25 03:28:19.335538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:44.928 [2024-04-25 03:28:19.361115] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a730) with pdu=0x2000190fef90 00:27:44.928 [2024-04-25 03:28:19.361850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.928 [2024-04-25 03:28:19.361879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.928 [2024-04-25 03:28:19.386257] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a730) with pdu=0x2000190fef90 00:27:44.928 [2024-04-25 03:28:19.387001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.928 [2024-04-25 03:28:19.387043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:44.928 [2024-04-25 03:28:19.408675] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a730) with pdu=0x2000190fef90 00:27:44.928 [2024-04-25 03:28:19.409244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.928 [2024-04-25 03:28:19.409272] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:45.188 [2024-04-25 03:28:19.434139] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a730) with pdu=0x2000190fef90 00:27:45.188 [2024-04-25 03:28:19.434647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.188 [2024-04-25 03:28:19.434704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:45.188 [2024-04-25 03:28:19.458417] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a730) with pdu=0x2000190fef90 00:27:45.188 [2024-04-25 03:28:19.458946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.188 [2024-04-25 03:28:19.458992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.188 [2024-04-25 03:28:19.484550] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a730) with pdu=0x2000190fef90 00:27:45.188 [2024-04-25 03:28:19.485044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.188 [2024-04-25 03:28:19.485071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:45.188 [2024-04-25 03:28:19.510331] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a730) with pdu=0x2000190fef90 00:27:45.188 [2024-04-25 03:28:19.510763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:45.188 [2024-04-25 03:28:19.510793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:45.188 [2024-04-25 03:28:19.532711] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a730) with pdu=0x2000190fef90 00:27:45.188 [2024-04-25 03:28:19.533100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.188 [2024-04-25 03:28:19.533128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:45.188 [2024-04-25 03:28:19.553874] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a730) with pdu=0x2000190fef90 00:27:45.188 [2024-04-25 03:28:19.554453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.188 [2024-04-25 03:28:19.554481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.188 [2024-04-25 03:28:19.577431] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a730) with pdu=0x2000190fef90 00:27:45.188 [2024-04-25 03:28:19.578027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.188 [2024-04-25 03:28:19.578076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:45.188 [2024-04-25 03:28:19.602579] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a730) with pdu=0x2000190fef90 00:27:45.188 [2024-04-25 03:28:19.603101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.188 [2024-04-25 03:28:19.603129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:45.188 [2024-04-25 03:28:19.624339] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a730) with pdu=0x2000190fef90 00:27:45.188 [2024-04-25 03:28:19.624913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.188 [2024-04-25 03:28:19.624942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:45.188 [2024-04-25 03:28:19.648618] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a730) with pdu=0x2000190fef90 00:27:45.188 [2024-04-25 03:28:19.649217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.188 [2024-04-25 03:28:19.649246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.188 [2024-04-25 03:28:19.673270] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a730) with pdu=0x2000190fef90 00:27:45.188 [2024-04-25 03:28:19.673898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.188 [2024-04-25 03:28:19.673926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:45.449 [2024-04-25 03:28:19.699619] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a730) with pdu=0x2000190fef90 00:27:45.449 [2024-04-25 03:28:19.700183] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.449 [2024-04-25 03:28:19.700211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:45.449 [2024-04-25 03:28:19.724650] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a730) with pdu=0x2000190fef90 00:27:45.449 [2024-04-25 03:28:19.725286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.449 [2024-04-25 03:28:19.725315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:45.449 [2024-04-25 03:28:19.748791] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a730) with pdu=0x2000190fef90 00:27:45.449 [2024-04-25 03:28:19.749380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.449 [2024-04-25 03:28:19.749407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.449 [2024-04-25 03:28:19.773958] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a730) with pdu=0x2000190fef90 00:27:45.449 [2024-04-25 03:28:19.774516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.449 [2024-04-25 03:28:19.774545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:45.449 [2024-04-25 03:28:19.798478] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a730) with pdu=0x2000190fef90 
00:27:45.449 [2024-04-25 03:28:19.799214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.449 [2024-04-25 03:28:19.799254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:45.449 [2024-04-25 03:28:19.823024] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a730) with pdu=0x2000190fef90 00:27:45.449 [2024-04-25 03:28:19.823444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.449 [2024-04-25 03:28:19.823473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:45.449 [2024-04-25 03:28:19.848143] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a730) with pdu=0x2000190fef90 00:27:45.449 [2024-04-25 03:28:19.848745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.449 [2024-04-25 03:28:19.848773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.449 [2024-04-25 03:28:19.872693] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a730) with pdu=0x2000190fef90 00:27:45.449 [2024-04-25 03:28:19.873426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.449 [2024-04-25 03:28:19.873454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:45.449 [2024-04-25 03:28:19.895163] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x136a730) with pdu=0x2000190fef90 00:27:45.449 [2024-04-25 03:28:19.895554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.449 [2024-04-25 03:28:19.895582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:45.449 [2024-04-25 03:28:19.914921] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a730) with pdu=0x2000190fef90 00:27:45.449 [2024-04-25 03:28:19.915331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.449 [2024-04-25 03:28:19.915359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:45.449 [2024-04-25 03:28:19.938820] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a730) with pdu=0x2000190fef90 00:27:45.449 [2024-04-25 03:28:19.939452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.449 [2024-04-25 03:28:19.939481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.708 [2024-04-25 03:28:19.961540] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x136a730) with pdu=0x2000190fef90 00:27:45.708 [2024-04-25 03:28:19.962124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.708 [2024-04-25 03:28:19.962153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:45.708 00:27:45.708 
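The repeated `data_crc32_calc_done: *ERROR*: Data digest error` messages above come from the NVMe/TCP transport verifying the CRC32C data digest carried in each C2H/H2C data PDU; a mismatch is surfaced to the host as a transient transport error (00/22), which is exactly what this test injects and counts. As a minimal sketch (not SPDK's implementation, which uses accelerated/table-driven code), the digest is a reflected CRC-32C (Castagnoli, polynomial 0x1EDC6F41, reflected 0x82F63B78) with all-ones init and final XOR:

```python
def crc32c(data: bytes) -> int:
    """Bitwise CRC-32C (Castagnoli), as used for NVMe/TCP PDU data digests."""
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            # Reflected polynomial 0x82F63B78
            crc = (crc >> 1) ^ 0x82F63B78 if crc & 1 else crc >> 1
    return crc ^ 0xFFFFFFFF

# Standard CRC-32C check value for the ASCII string "123456789"
print(hex(crc32c(b"123456789")))  # → 0xe3069283
```

A receiver recomputes this over the PDU payload and compares it with the transmitted digest; any corruption (or, as in this test, deliberate tampering) makes the values disagree and the command is completed with a transient transport error rather than silently accepting bad data.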
Latency(us) 00:27:45.708 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:45.708 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:45.708 nvme0n1 : 2.01 1255.78 156.97 0.00 0.00 12698.01 8786.68 27962.03 00:27:45.708 =================================================================================================================== 00:27:45.708 Total : 1255.78 156.97 0.00 0.00 12698.01 8786.68 27962.03 00:27:45.708 0 00:27:45.708 03:28:19 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:45.708 03:28:19 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:45.708 03:28:19 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:45.708 | .driver_specific 00:27:45.708 | .nvme_error 00:27:45.708 | .status_code 00:27:45.708 | .command_transient_transport_error' 00:27:45.708 03:28:19 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:45.967 03:28:20 -- host/digest.sh@71 -- # (( 81 > 0 )) 00:27:45.967 03:28:20 -- host/digest.sh@73 -- # killprocess 1616677 00:27:45.967 03:28:20 -- common/autotest_common.sh@936 -- # '[' -z 1616677 ']' 00:27:45.967 03:28:20 -- common/autotest_common.sh@940 -- # kill -0 1616677 00:27:45.967 03:28:20 -- common/autotest_common.sh@941 -- # uname 00:27:45.967 03:28:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:45.967 03:28:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1616677 00:27:45.967 03:28:20 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:27:45.967 03:28:20 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:27:45.967 03:28:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1616677' 00:27:45.967 killing process with pid 1616677 00:27:45.967 03:28:20 -- common/autotest_common.sh@955 -- # kill 1616677 00:27:45.967 Received shutdown signal, test time was about 2.000000 seconds 
00:27:45.967 00:27:45.967 Latency(us) 00:27:45.967 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:45.967 =================================================================================================================== 00:27:45.967 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:45.967 03:28:20 -- common/autotest_common.sh@960 -- # wait 1616677 00:27:46.225 03:28:20 -- host/digest.sh@116 -- # killprocess 1615173 00:27:46.225 03:28:20 -- common/autotest_common.sh@936 -- # '[' -z 1615173 ']' 00:27:46.225 03:28:20 -- common/autotest_common.sh@940 -- # kill -0 1615173 00:27:46.225 03:28:20 -- common/autotest_common.sh@941 -- # uname 00:27:46.226 03:28:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:46.226 03:28:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1615173 00:27:46.226 03:28:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:46.226 03:28:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:46.226 03:28:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1615173' 00:27:46.226 killing process with pid 1615173 00:27:46.226 03:28:20 -- common/autotest_common.sh@955 -- # kill 1615173 00:27:46.226 03:28:20 -- common/autotest_common.sh@960 -- # wait 1615173 00:27:46.484 00:27:46.484 real 0m16.876s 00:27:46.484 user 0m33.874s 00:27:46.484 sys 0m3.792s 00:27:46.484 03:28:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:46.484 03:28:20 -- common/autotest_common.sh@10 -- # set +x 00:27:46.484 ************************************ 00:27:46.484 END TEST nvmf_digest_error 00:27:46.484 ************************************ 00:27:46.484 03:28:20 -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:27:46.484 03:28:20 -- host/digest.sh@150 -- # nvmftestfini 00:27:46.484 03:28:20 -- nvmf/common.sh@477 -- # nvmfcleanup 00:27:46.485 03:28:20 -- nvmf/common.sh@117 -- # sync 00:27:46.485 03:28:20 -- nvmf/common.sh@119 -- # '[' tcp == tcp 
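The `get_transient_errcount` helper above shells out to `rpc.py bdev_get_iostat -b nvme0n1` and drills into the JSON with `jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'`, then asserts the count is positive (`(( 81 > 0 ))`). The same path can be walked in plain Python; the sample document below is hypothetical, shaped only after the jq filter the script uses, not a verbatim `bdev_get_iostat` response:

```python
import json

# Hypothetical iostat JSON, mirroring the fields the jq filter selects;
# 81 matches the count observed in this run's `(( 81 > 0 ))` check.
raw = json.loads("""
{"bdevs": [{"name": "nvme0n1",
            "driver_specific": {"nvme_error": {"status_code": {
                "command_transient_transport_error": 81}}}}]}
""")

# Equivalent of: .bdevs[0] | .driver_specific | .nvme_error
#                | .status_code | .command_transient_transport_error
count = (raw["bdevs"][0]["driver_specific"]["nvme_error"]
            ["status_code"]["command_transient_transport_error"])
print(count)  # → 81
assert count > 0, "expected transient transport errors from digest test"
```

The test passes only if every injected digest corruption was detected and accounted for as a transient transport error, so a zero count here would indicate the data-digest path silently accepted corrupted payloads.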
']' 00:27:46.485 03:28:20 -- nvmf/common.sh@120 -- # set +e 00:27:46.485 03:28:20 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:46.485 03:28:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:46.485 rmmod nvme_tcp 00:27:46.485 rmmod nvme_fabrics 00:27:46.485 rmmod nvme_keyring 00:27:46.485 03:28:20 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:46.485 03:28:20 -- nvmf/common.sh@124 -- # set -e 00:27:46.485 03:28:20 -- nvmf/common.sh@125 -- # return 0 00:27:46.485 03:28:20 -- nvmf/common.sh@478 -- # '[' -n 1615173 ']' 00:27:46.485 03:28:20 -- nvmf/common.sh@479 -- # killprocess 1615173 00:27:46.485 03:28:20 -- common/autotest_common.sh@936 -- # '[' -z 1615173 ']' 00:27:46.485 03:28:20 -- common/autotest_common.sh@940 -- # kill -0 1615173 00:27:46.485 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (1615173) - No such process 00:27:46.485 03:28:20 -- common/autotest_common.sh@963 -- # echo 'Process with pid 1615173 is not found' 00:27:46.485 Process with pid 1615173 is not found 00:27:46.485 03:28:20 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:27:46.485 03:28:20 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:27:46.485 03:28:20 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:27:46.485 03:28:20 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:46.485 03:28:20 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:46.485 03:28:20 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:46.485 03:28:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:46.485 03:28:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:49.020 03:28:22 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:49.020 00:27:49.020 real 0m38.342s 00:27:49.020 user 1m8.482s 00:27:49.020 sys 0m9.281s 00:27:49.020 03:28:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:49.020 03:28:22 -- common/autotest_common.sh@10 -- # set +x 
00:27:49.020 ************************************ 00:27:49.020 END TEST nvmf_digest 00:27:49.020 ************************************ 00:27:49.020 03:28:22 -- nvmf/nvmf.sh@108 -- # [[ 0 -eq 1 ]] 00:27:49.020 03:28:22 -- nvmf/nvmf.sh@113 -- # [[ 0 -eq 1 ]] 00:27:49.020 03:28:22 -- nvmf/nvmf.sh@118 -- # [[ phy == phy ]] 00:27:49.020 03:28:22 -- nvmf/nvmf.sh@119 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:27:49.020 03:28:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:49.020 03:28:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:49.020 03:28:22 -- common/autotest_common.sh@10 -- # set +x 00:27:49.020 ************************************ 00:27:49.020 START TEST nvmf_bdevperf 00:27:49.020 ************************************ 00:27:49.020 03:28:23 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:27:49.020 * Looking for test storage... 
00:27:49.020 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:49.020 03:28:23 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:49.020 03:28:23 -- nvmf/common.sh@7 -- # uname -s 00:27:49.020 03:28:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:49.020 03:28:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:49.020 03:28:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:49.020 03:28:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:49.020 03:28:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:49.020 03:28:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:49.020 03:28:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:49.020 03:28:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:49.020 03:28:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:49.020 03:28:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:49.020 03:28:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:49.020 03:28:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:49.020 03:28:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:49.020 03:28:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:49.020 03:28:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:49.020 03:28:23 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:49.020 03:28:23 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:49.020 03:28:23 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:49.020 03:28:23 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:49.020 03:28:23 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:49.020 03:28:23 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.020 03:28:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.020 03:28:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.020 03:28:23 -- paths/export.sh@5 -- # export PATH 00:27:49.020 03:28:23 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.020 03:28:23 -- nvmf/common.sh@47 -- # : 0 00:27:49.020 03:28:23 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:49.020 03:28:23 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:49.020 03:28:23 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:49.020 03:28:23 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:49.020 03:28:23 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:49.020 03:28:23 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:49.020 03:28:23 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:49.020 03:28:23 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:49.020 03:28:23 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:49.020 03:28:23 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:49.020 03:28:23 -- host/bdevperf.sh@24 -- # nvmftestinit 00:27:49.020 03:28:23 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:27:49.020 03:28:23 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:49.020 03:28:23 -- nvmf/common.sh@437 -- # prepare_net_devs 00:27:49.020 03:28:23 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:27:49.020 03:28:23 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:27:49.020 03:28:23 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:49.020 03:28:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:49.020 03:28:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:49.020 03:28:23 -- 
nvmf/common.sh@403 -- # [[ phy != virt ]] 00:27:49.020 03:28:23 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:27:49.020 03:28:23 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:49.020 03:28:23 -- common/autotest_common.sh@10 -- # set +x 00:27:50.924 03:28:25 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:50.924 03:28:25 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:50.924 03:28:25 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:50.924 03:28:25 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:50.924 03:28:25 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:50.924 03:28:25 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:50.924 03:28:25 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:50.924 03:28:25 -- nvmf/common.sh@295 -- # net_devs=() 00:27:50.924 03:28:25 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:50.924 03:28:25 -- nvmf/common.sh@296 -- # e810=() 00:27:50.924 03:28:25 -- nvmf/common.sh@296 -- # local -ga e810 00:27:50.924 03:28:25 -- nvmf/common.sh@297 -- # x722=() 00:27:50.924 03:28:25 -- nvmf/common.sh@297 -- # local -ga x722 00:27:50.924 03:28:25 -- nvmf/common.sh@298 -- # mlx=() 00:27:50.924 03:28:25 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:50.924 03:28:25 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:50.924 03:28:25 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:50.924 03:28:25 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:50.924 03:28:25 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:50.924 03:28:25 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:50.924 03:28:25 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:50.924 03:28:25 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:50.924 03:28:25 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:50.924 03:28:25 -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:50.924 03:28:25 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:50.924 03:28:25 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:50.924 03:28:25 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:50.924 03:28:25 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:50.924 03:28:25 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:50.924 03:28:25 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:50.924 03:28:25 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:50.924 03:28:25 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:50.924 03:28:25 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:50.924 03:28:25 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:50.924 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:50.924 03:28:25 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:50.924 03:28:25 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:50.924 03:28:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:50.924 03:28:25 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:50.924 03:28:25 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:50.924 03:28:25 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:50.924 03:28:25 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:50.924 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:50.924 03:28:25 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:50.924 03:28:25 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:50.924 03:28:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:50.924 03:28:25 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:50.924 03:28:25 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:50.924 03:28:25 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:50.924 03:28:25 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:50.924 03:28:25 -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:50.924 03:28:25 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:50.924 03:28:25 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:50.924 03:28:25 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:27:50.924 03:28:25 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:50.924 03:28:25 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:50.924 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:50.924 03:28:25 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:27:50.924 03:28:25 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:50.924 03:28:25 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:50.924 03:28:25 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:27:50.924 03:28:25 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:50.924 03:28:25 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:50.924 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:50.924 03:28:25 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:27:50.924 03:28:25 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:27:50.924 03:28:25 -- nvmf/common.sh@403 -- # is_hw=yes 00:27:50.924 03:28:25 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:27:50.924 03:28:25 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:27:50.924 03:28:25 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:27:50.924 03:28:25 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:50.924 03:28:25 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:50.924 03:28:25 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:50.924 03:28:25 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:50.924 03:28:25 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:50.924 03:28:25 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:50.924 03:28:25 -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:50.924 03:28:25 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:50.924 03:28:25 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:50.924 03:28:25 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:50.924 03:28:25 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:50.924 03:28:25 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:50.924 03:28:25 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:50.924 03:28:25 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:50.924 03:28:25 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:50.924 03:28:25 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:50.924 03:28:25 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:50.924 03:28:25 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:50.924 03:28:25 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:50.924 03:28:25 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:50.924 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:50.924 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:27:50.924 00:27:50.924 --- 10.0.0.2 ping statistics --- 00:27:50.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:50.924 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:27:50.924 03:28:25 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:50.925 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:50.925 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:27:50.925 00:27:50.925 --- 10.0.0.1 ping statistics --- 00:27:50.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:50.925 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:27:50.925 03:28:25 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:50.925 03:28:25 -- nvmf/common.sh@411 -- # return 0 00:27:50.925 03:28:25 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:27:50.925 03:28:25 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:50.925 03:28:25 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:27:50.925 03:28:25 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:27:50.925 03:28:25 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:50.925 03:28:25 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:27:50.925 03:28:25 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:27:50.925 03:28:25 -- host/bdevperf.sh@25 -- # tgt_init 00:27:50.925 03:28:25 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:27:50.925 03:28:25 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:27:50.925 03:28:25 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:50.925 03:28:25 -- common/autotest_common.sh@10 -- # set +x 00:27:50.925 03:28:25 -- nvmf/common.sh@470 -- # nvmfpid=1619050 00:27:50.925 03:28:25 -- nvmf/common.sh@471 -- # waitforlisten 1619050 00:27:50.925 03:28:25 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:50.925 03:28:25 -- common/autotest_common.sh@817 -- # '[' -z 1619050 ']' 00:27:50.925 03:28:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:50.925 03:28:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:50.925 03:28:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:27:50.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:50.925 03:28:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:50.925 03:28:25 -- common/autotest_common.sh@10 -- # set +x 00:27:50.925 [2024-04-25 03:28:25.232945] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:27:50.925 [2024-04-25 03:28:25.233053] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:50.925 EAL: No free 2048 kB hugepages reported on node 1 00:27:50.925 [2024-04-25 03:28:25.305160] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:51.184 [2024-04-25 03:28:25.429921] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:51.184 [2024-04-25 03:28:25.429991] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:51.184 [2024-04-25 03:28:25.430008] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:51.184 [2024-04-25 03:28:25.430021] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:51.184 [2024-04-25 03:28:25.430033] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:51.184 [2024-04-25 03:28:25.430128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:51.184 [2024-04-25 03:28:25.430184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:51.184 [2024-04-25 03:28:25.430187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:51.184 03:28:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:51.184 03:28:25 -- common/autotest_common.sh@850 -- # return 0 00:27:51.184 03:28:25 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:27:51.184 03:28:25 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:51.184 03:28:25 -- common/autotest_common.sh@10 -- # set +x 00:27:51.184 03:28:25 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:51.184 03:28:25 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:51.184 03:28:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:51.184 03:28:25 -- common/autotest_common.sh@10 -- # set +x 00:27:51.184 [2024-04-25 03:28:25.577938] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:51.184 03:28:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:51.184 03:28:25 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:51.184 03:28:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:51.184 03:28:25 -- common/autotest_common.sh@10 -- # set +x 00:27:51.184 Malloc0 00:27:51.184 03:28:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:51.184 03:28:25 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:51.184 03:28:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:51.184 03:28:25 -- common/autotest_common.sh@10 -- # set +x 00:27:51.184 03:28:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:51.184 03:28:25 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:51.184 03:28:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:51.184 03:28:25 -- common/autotest_common.sh@10 -- # set +x 00:27:51.184 03:28:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:51.184 03:28:25 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:51.184 03:28:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:51.184 03:28:25 -- common/autotest_common.sh@10 -- # set +x 00:27:51.184 [2024-04-25 03:28:25.640727] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:51.184 03:28:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:51.184 03:28:25 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:27:51.184 03:28:25 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:27:51.184 03:28:25 -- nvmf/common.sh@521 -- # config=() 00:27:51.184 03:28:25 -- nvmf/common.sh@521 -- # local subsystem config 00:27:51.184 03:28:25 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:51.184 03:28:25 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:51.184 { 00:27:51.184 "params": { 00:27:51.184 "name": "Nvme$subsystem", 00:27:51.184 "trtype": "$TEST_TRANSPORT", 00:27:51.184 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:51.184 "adrfam": "ipv4", 00:27:51.184 "trsvcid": "$NVMF_PORT", 00:27:51.184 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:51.184 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:51.184 "hdgst": ${hdgst:-false}, 00:27:51.184 "ddgst": ${ddgst:-false} 00:27:51.184 }, 00:27:51.184 "method": "bdev_nvme_attach_controller" 00:27:51.184 } 00:27:51.184 EOF 00:27:51.184 )") 00:27:51.184 03:28:25 -- nvmf/common.sh@543 -- # cat 00:27:51.184 03:28:25 -- nvmf/common.sh@545 -- # jq . 
00:27:51.184 03:28:25 -- nvmf/common.sh@546 -- # IFS=, 00:27:51.184 03:28:25 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:27:51.184 "params": { 00:27:51.184 "name": "Nvme1", 00:27:51.184 "trtype": "tcp", 00:27:51.184 "traddr": "10.0.0.2", 00:27:51.184 "adrfam": "ipv4", 00:27:51.184 "trsvcid": "4420", 00:27:51.184 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:51.184 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:51.184 "hdgst": false, 00:27:51.184 "ddgst": false 00:27:51.184 }, 00:27:51.184 "method": "bdev_nvme_attach_controller" 00:27:51.184 }' 00:27:51.444 [2024-04-25 03:28:25.687335] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:27:51.444 [2024-04-25 03:28:25.687402] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1619178 ] 00:27:51.444 EAL: No free 2048 kB hugepages reported on node 1 00:27:51.444 [2024-04-25 03:28:25.747820] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:51.444 [2024-04-25 03:28:25.858808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:51.703 Running I/O for 1 seconds... 
00:27:53.084 00:27:53.084 Latency(us) 00:27:53.084 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:53.085 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:53.085 Verification LBA range: start 0x0 length 0x4000 00:27:53.085 Nvme1n1 : 1.01 8728.24 34.09 0.00 0.00 14603.38 2767.08 16311.18 00:27:53.085 =================================================================================================================== 00:27:53.085 Total : 8728.24 34.09 0.00 0.00 14603.38 2767.08 16311.18 00:27:53.085 03:28:27 -- host/bdevperf.sh@30 -- # bdevperfpid=1619342 00:27:53.085 03:28:27 -- host/bdevperf.sh@32 -- # sleep 3 00:27:53.085 03:28:27 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:27:53.085 03:28:27 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:27:53.085 03:28:27 -- nvmf/common.sh@521 -- # config=() 00:27:53.085 03:28:27 -- nvmf/common.sh@521 -- # local subsystem config 00:27:53.085 03:28:27 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:53.085 03:28:27 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:53.085 { 00:27:53.085 "params": { 00:27:53.085 "name": "Nvme$subsystem", 00:27:53.085 "trtype": "$TEST_TRANSPORT", 00:27:53.085 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:53.085 "adrfam": "ipv4", 00:27:53.085 "trsvcid": "$NVMF_PORT", 00:27:53.085 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:53.085 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:53.085 "hdgst": ${hdgst:-false}, 00:27:53.085 "ddgst": ${ddgst:-false} 00:27:53.085 }, 00:27:53.085 "method": "bdev_nvme_attach_controller" 00:27:53.085 } 00:27:53.085 EOF 00:27:53.085 )") 00:27:53.085 03:28:27 -- nvmf/common.sh@543 -- # cat 00:27:53.085 03:28:27 -- nvmf/common.sh@545 -- # jq . 
00:27:53.085 03:28:27 -- nvmf/common.sh@546 -- # IFS=, 00:27:53.085 03:28:27 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:27:53.085 "params": { 00:27:53.085 "name": "Nvme1", 00:27:53.085 "trtype": "tcp", 00:27:53.085 "traddr": "10.0.0.2", 00:27:53.085 "adrfam": "ipv4", 00:27:53.085 "trsvcid": "4420", 00:27:53.085 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:53.085 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:53.085 "hdgst": false, 00:27:53.085 "ddgst": false 00:27:53.085 }, 00:27:53.085 "method": "bdev_nvme_attach_controller" 00:27:53.085 }' 00:27:53.085 [2024-04-25 03:28:27.494111] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:27:53.085 [2024-04-25 03:28:27.494201] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1619342 ] 00:27:53.085 EAL: No free 2048 kB hugepages reported on node 1 00:27:53.085 [2024-04-25 03:28:27.557433] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:53.345 [2024-04-25 03:28:27.665688] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:53.604 Running I/O for 15 seconds... 
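The 1-second run's result line above is internally consistent: 8728.24 IOPS at the 4096-byte IO size configured with `-o 4096` works out to the reported 34.09 MiB/s. A quick cross-check:

```shell
# Cross-check the bdevperf throughput column: IOPS * IO size (bytes),
# converted to MiB/s (1 MiB = 1048576 bytes).
mibps=$(awk 'BEGIN { printf "%.2f", 8728.24 * 4096 / (1024 * 1024) }')
echo "$mibps MiB/s"
```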
00:27:56.141 03:28:30 -- host/bdevperf.sh@33 -- # kill -9 1619050 00:27:56.141 03:28:30 -- host/bdevperf.sh@35 -- # sleep 3 00:27:56.141 [2024-04-25 03:28:30.462430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:48904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.141 [2024-04-25 03:28:30.462484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.141 [2024-04-25 03:28:30.462525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:49576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.141 [2024-04-25 03:28:30.462545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.141 [2024-04-25 03:28:30.462574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:49584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.141 [2024-04-25 03:28:30.462590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.141 [2024-04-25 03:28:30.462609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:49592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.141 [2024-04-25 03:28:30.462653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.141 [2024-04-25 03:28:30.462673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:49600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.141 [2024-04-25 03:28:30.462705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.141 [2024-04-25 03:28:30.462721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 
nsid:1 lba:49608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.141 [2024-04-25 03:28:30.462736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.141 
[log elided: ~110 further nvme_qpair.c command/completion NOTICE pairs of the same form — WRITE commands for lba 49616-49808 and READ commands for lba 48912-49384, each completed as ABORTED - SQ DELETION (00/08) on qid:1, differing only in cid and lba]
[2024-04-25 03:28:30.465623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.143 [2024-04-25 03:28:30.465647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:49392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.143 [2024-04-25 03:28:30.465682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.143 [2024-04-25 03:28:30.465699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:49400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.143 [2024-04-25 03:28:30.465713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.143 [2024-04-25 03:28:30.465729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:49408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.143 [2024-04-25 03:28:30.465742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.143 [2024-04-25 03:28:30.465758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:49416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.143 [2024-04-25 03:28:30.465771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.143 [2024-04-25 03:28:30.465786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:49424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.143 [2024-04-25 03:28:30.465809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.143 [2024-04-25 03:28:30.465824] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:49432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.143 [2024-04-25 03:28:30.465837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.143 [2024-04-25 03:28:30.465852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:49440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.144 [2024-04-25 03:28:30.465866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.144 [2024-04-25 03:28:30.465882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:49448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.144 [2024-04-25 03:28:30.465895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.144 [2024-04-25 03:28:30.465937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:49456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.144 [2024-04-25 03:28:30.465953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.144 [2024-04-25 03:28:30.465970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:49464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.144 [2024-04-25 03:28:30.465985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.144 [2024-04-25 03:28:30.466002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:49472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.144 [2024-04-25 03:28:30.466017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.144 [2024-04-25 03:28:30.466033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:49480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.144 [2024-04-25 03:28:30.466048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.144 [2024-04-25 03:28:30.466065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:49488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.144 [2024-04-25 03:28:30.466080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.144 [2024-04-25 03:28:30.466100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:49496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.144 [2024-04-25 03:28:30.466116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.144 [2024-04-25 03:28:30.466133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:49504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.144 [2024-04-25 03:28:30.466149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.144 [2024-04-25 03:28:30.466165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:49816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.144 [2024-04-25 03:28:30.466180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.144 [2024-04-25 03:28:30.466197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:49824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:56.144 [2024-04-25 03:28:30.466212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.144 [2024-04-25 03:28:30.466229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:49832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.144 [2024-04-25 03:28:30.466244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.144 [2024-04-25 03:28:30.466260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:49840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.144 [2024-04-25 03:28:30.466275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.144 [2024-04-25 03:28:30.466291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:49848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.144 [2024-04-25 03:28:30.466306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.144 [2024-04-25 03:28:30.466322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:49856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.144 [2024-04-25 03:28:30.466337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.144 [2024-04-25 03:28:30.466354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:49864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.144 [2024-04-25 03:28:30.466369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.144 [2024-04-25 03:28:30.466385] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:49872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.144 [2024-04-25 03:28:30.466401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.144 [2024-04-25 03:28:30.466418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:49880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.144 [2024-04-25 03:28:30.466433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.144 [2024-04-25 03:28:30.466450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:49888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.144 [2024-04-25 03:28:30.466465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.144 [2024-04-25 03:28:30.466482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:49896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.144 [2024-04-25 03:28:30.466497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.144 [2024-04-25 03:28:30.466517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:49904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.144 [2024-04-25 03:28:30.466533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.144 [2024-04-25 03:28:30.466550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:49912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.144 [2024-04-25 03:28:30.466565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.144 [2024-04-25 03:28:30.466581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:49920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.144 [2024-04-25 03:28:30.466597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.144 [2024-04-25 03:28:30.466614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:49512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.144 [2024-04-25 03:28:30.466636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.144 [2024-04-25 03:28:30.466654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:49520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.144 [2024-04-25 03:28:30.466685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.144 [2024-04-25 03:28:30.466702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:49528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.144 [2024-04-25 03:28:30.466715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.144 [2024-04-25 03:28:30.466730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:49536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.144 [2024-04-25 03:28:30.466744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.144 [2024-04-25 03:28:30.466759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:49544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.144 
[2024-04-25 03:28:30.466773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.144 [2024-04-25 03:28:30.466788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:49552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.144 [2024-04-25 03:28:30.466801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.144 [2024-04-25 03:28:30.466816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:49560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.144 [2024-04-25 03:28:30.466830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.144 [2024-04-25 03:28:30.466844] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11048a0 is same with the state(5) to be set 00:27:56.144 [2024-04-25 03:28:30.466861] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:56.144 [2024-04-25 03:28:30.466872] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:56.144 [2024-04-25 03:28:30.466884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49568 len:8 PRP1 0x0 PRP2 0x0 00:27:56.144 [2024-04-25 03:28:30.466898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.144 [2024-04-25 03:28:30.467001] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x11048a0 was disconnected and freed. reset controller. 
00:27:56.145 [2024-04-25 03:28:30.467085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:56.145 [2024-04-25 03:28:30.467109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.145 [2024-04-25 03:28:30.467125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:56.145 [2024-04-25 03:28:30.467140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.145 [2024-04-25 03:28:30.467155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:56.145 [2024-04-25 03:28:30.467169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.145 [2024-04-25 03:28:30.467185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:56.145 [2024-04-25 03:28:30.467200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.145 [2024-04-25 03:28:30.467214] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:56.145 [2024-04-25 03:28:30.470759] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:56.145 [2024-04-25 03:28:30.470796] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:56.145 [2024-04-25 03:28:30.471442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.145 [2024-04-25 
03:28:30.471693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.145 [2024-04-25 03:28:30.471721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:56.145 [2024-04-25 03:28:30.471738] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:56.145 [2024-04-25 03:28:30.471953] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:56.145 [2024-04-25 03:28:30.472171] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:56.145 [2024-04-25 03:28:30.472194] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:56.145 [2024-04-25 03:28:30.472211] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:56.145 [2024-04-25 03:28:30.475432] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:56.145 [2024-04-25 03:28:30.484467] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:56.145 [2024-04-25 03:28:30.484918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.145 [2024-04-25 03:28:30.485138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.145 [2024-04-25 03:28:30.485164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:56.145 [2024-04-25 03:28:30.485181] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:56.145 [2024-04-25 03:28:30.485394] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:56.145 [2024-04-25 03:28:30.485611] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:56.145 [2024-04-25 03:28:30.485644] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:56.145 [2024-04-25 03:28:30.485664] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:56.145 [2024-04-25 03:28:30.488847] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:56.145 [2024-04-25 03:28:30.497839] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:56.145 [2024-04-25 03:28:30.498268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.145 [2024-04-25 03:28:30.498479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.145 [2024-04-25 03:28:30.498528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:56.145 [2024-04-25 03:28:30.498544] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:56.145 [2024-04-25 03:28:30.498790] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:56.145 [2024-04-25 03:28:30.499022] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:56.145 [2024-04-25 03:28:30.499042] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:56.145 [2024-04-25 03:28:30.499056] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:56.145 [2024-04-25 03:28:30.502033] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:56.145 [2024-04-25 03:28:30.511707] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:56.145 [2024-04-25 03:28:30.512267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.145 [2024-04-25 03:28:30.512621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.145 [2024-04-25 03:28:30.512696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:56.145 [2024-04-25 03:28:30.512729] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:56.145 [2024-04-25 03:28:30.512970] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:56.145 [2024-04-25 03:28:30.513212] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:56.145 [2024-04-25 03:28:30.513236] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:56.145 [2024-04-25 03:28:30.513251] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:56.145 [2024-04-25 03:28:30.516759] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:56.145 [2024-04-25 03:28:30.525646] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:56.145 [2024-04-25 03:28:30.526183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.145 [2024-04-25 03:28:30.526400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.145 [2024-04-25 03:28:30.526424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:56.145 [2024-04-25 03:28:30.526439] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:56.145 [2024-04-25 03:28:30.526704] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:56.145 [2024-04-25 03:28:30.526954] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:56.145 [2024-04-25 03:28:30.526979] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:56.145 [2024-04-25 03:28:30.526995] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:56.145 [2024-04-25 03:28:30.530527] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:56.145 [2024-04-25 03:28:30.539578] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:56.145 [2024-04-25 03:28:30.540039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.145 [2024-04-25 03:28:30.540254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.145 [2024-04-25 03:28:30.540297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:56.145 [2024-04-25 03:28:30.540316] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:56.145 [2024-04-25 03:28:30.540552] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:56.145 [2024-04-25 03:28:30.540804] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:56.145 [2024-04-25 03:28:30.540836] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:56.145 [2024-04-25 03:28:30.540850] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:56.145 [2024-04-25 03:28:30.544424] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:56.145 [2024-04-25 03:28:30.553299] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:56.145 [2024-04-25 03:28:30.553752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.145 [2024-04-25 03:28:30.553985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.145 [2024-04-25 03:28:30.554013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:56.145 [2024-04-25 03:28:30.554031] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:56.145 [2024-04-25 03:28:30.554268] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:56.145 [2024-04-25 03:28:30.554508] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:56.145 [2024-04-25 03:28:30.554532] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:56.146 [2024-04-25 03:28:30.554547] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:56.146 [2024-04-25 03:28:30.558119] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:56.146 [2024-04-25 03:28:30.567186] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:56.146 [2024-04-25 03:28:30.567683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.146 [2024-04-25 03:28:30.567864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.146 [2024-04-25 03:28:30.567890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:56.146 [2024-04-25 03:28:30.567920] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:56.146 [2024-04-25 03:28:30.568158] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:56.146 [2024-04-25 03:28:30.568398] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:56.146 [2024-04-25 03:28:30.568422] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:56.146 [2024-04-25 03:28:30.568438] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:56.146 [2024-04-25 03:28:30.572020] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:56.146 [2024-04-25 03:28:30.581001] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:56.146 [2024-04-25 03:28:30.581485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.146 [2024-04-25 03:28:30.581707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.146 [2024-04-25 03:28:30.581737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:56.146 [2024-04-25 03:28:30.581755] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:56.146 [2024-04-25 03:28:30.581992] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:56.146 [2024-04-25 03:28:30.582232] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:56.146 [2024-04-25 03:28:30.582256] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:56.146 [2024-04-25 03:28:30.582271] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:56.146 [2024-04-25 03:28:30.585812] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:56.146 [2024-04-25 03:28:30.594997] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:56.146 [2024-04-25 03:28:30.595474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.146 [2024-04-25 03:28:30.595778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.146 [2024-04-25 03:28:30.595808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:56.146 [2024-04-25 03:28:30.595826] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:56.146 [2024-04-25 03:28:30.596062] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:56.146 [2024-04-25 03:28:30.596303] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:56.146 [2024-04-25 03:28:30.596326] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:56.146 [2024-04-25 03:28:30.596342] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:56.146 [2024-04-25 03:28:30.599881] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:56.146 [2024-04-25 03:28:30.608863] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:56.146 [2024-04-25 03:28:30.609341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.146 [2024-04-25 03:28:30.609535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.146 [2024-04-25 03:28:30.609564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:56.146 [2024-04-25 03:28:30.609582] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:56.146 [2024-04-25 03:28:30.609829] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:56.146 [2024-04-25 03:28:30.610071] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:56.146 [2024-04-25 03:28:30.610095] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:56.146 [2024-04-25 03:28:30.610110] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:56.146 [2024-04-25 03:28:30.613652] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:56.146 [2024-04-25 03:28:30.622833] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:56.146 [2024-04-25 03:28:30.623286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.146 [2024-04-25 03:28:30.623604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.146 [2024-04-25 03:28:30.623652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:56.146 [2024-04-25 03:28:30.623668] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:56.146 [2024-04-25 03:28:30.623927] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:56.146 [2024-04-25 03:28:30.624168] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:56.146 [2024-04-25 03:28:30.624192] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:56.146 [2024-04-25 03:28:30.624207] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:56.146 [2024-04-25 03:28:30.627750] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:56.146 [2024-04-25 03:28:30.636733] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:56.146 [2024-04-25 03:28:30.637216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.146 [2024-04-25 03:28:30.637594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.146 [2024-04-25 03:28:30.637659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:56.146 [2024-04-25 03:28:30.637677] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:56.146 [2024-04-25 03:28:30.637913] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:56.146 [2024-04-25 03:28:30.638154] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:56.406 [2024-04-25 03:28:30.638177] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:56.406 [2024-04-25 03:28:30.638194] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:56.406 [2024-04-25 03:28:30.641735] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:56.406 [2024-04-25 03:28:30.650711] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:56.406 [2024-04-25 03:28:30.651219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.406 [2024-04-25 03:28:30.651466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.406 [2024-04-25 03:28:30.651494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:56.406 [2024-04-25 03:28:30.651513] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:56.406 [2024-04-25 03:28:30.651759] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:56.406 [2024-04-25 03:28:30.652001] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:56.406 [2024-04-25 03:28:30.652025] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:56.406 [2024-04-25 03:28:30.652040] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:56.406 [2024-04-25 03:28:30.655575] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:56.406 [2024-04-25 03:28:30.664545] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:56.406 [2024-04-25 03:28:30.665040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.406 [2024-04-25 03:28:30.665358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.406 [2024-04-25 03:28:30.665384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:56.406 [2024-04-25 03:28:30.665405] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:56.406 [2024-04-25 03:28:30.665659] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:56.406 [2024-04-25 03:28:30.665901] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:56.406 [2024-04-25 03:28:30.665925] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:56.406 [2024-04-25 03:28:30.665940] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:56.406 [2024-04-25 03:28:30.669473] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:56.406 [2024-04-25 03:28:30.678453] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:56.407 [2024-04-25 03:28:30.678927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.407 [2024-04-25 03:28:30.679264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.407 [2024-04-25 03:28:30.679294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:56.407 [2024-04-25 03:28:30.679312] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:56.407 [2024-04-25 03:28:30.679549] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:56.407 [2024-04-25 03:28:30.679800] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:56.407 [2024-04-25 03:28:30.679825] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:56.407 [2024-04-25 03:28:30.679841] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:56.407 [2024-04-25 03:28:30.683374] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:56.407 [2024-04-25 03:28:30.692345] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:56.407 [2024-04-25 03:28:30.692813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.407 [2024-04-25 03:28:30.693065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.407 [2024-04-25 03:28:30.693093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:56.407 [2024-04-25 03:28:30.693111] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:56.407 [2024-04-25 03:28:30.693348] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:56.407 [2024-04-25 03:28:30.693589] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:56.407 [2024-04-25 03:28:30.693613] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:56.407 [2024-04-25 03:28:30.693637] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:56.407 [2024-04-25 03:28:30.697174] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:56.407 [2024-04-25 03:28:30.706149] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:56.407 [2024-04-25 03:28:30.706638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.407 [2024-04-25 03:28:30.706832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.407 [2024-04-25 03:28:30.706860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:56.407 [2024-04-25 03:28:30.706878] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:56.407 [2024-04-25 03:28:30.707121] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:56.407 [2024-04-25 03:28:30.707361] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:56.407 [2024-04-25 03:28:30.707384] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:56.407 [2024-04-25 03:28:30.707400] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:56.407 [2024-04-25 03:28:30.710944] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:56.407 [2024-04-25 03:28:30.720125] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:56.407 [2024-04-25 03:28:30.720756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.407 [2024-04-25 03:28:30.720990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.407 [2024-04-25 03:28:30.721019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:56.407 [2024-04-25 03:28:30.721037] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:56.407 [2024-04-25 03:28:30.721283] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:56.407 [2024-04-25 03:28:30.721523] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:56.407 [2024-04-25 03:28:30.721546] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:56.407 [2024-04-25 03:28:30.721563] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:56.407 [2024-04-25 03:28:30.725112] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:56.407 [2024-04-25 03:28:30.734092] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:56.407 [2024-04-25 03:28:30.734578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.407 [2024-04-25 03:28:30.734843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.407 [2024-04-25 03:28:30.734872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:56.407 [2024-04-25 03:28:30.734890] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:56.407 [2024-04-25 03:28:30.735126] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:56.407 [2024-04-25 03:28:30.735366] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:56.407 [2024-04-25 03:28:30.735390] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:56.407 [2024-04-25 03:28:30.735406] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:56.407 [2024-04-25 03:28:30.738959] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:56.407 [2024-04-25 03:28:30.747939] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:56.407 [2024-04-25 03:28:30.748602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.407 [2024-04-25 03:28:30.748859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.407 [2024-04-25 03:28:30.748889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:56.407 [2024-04-25 03:28:30.748907] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:56.407 [2024-04-25 03:28:30.749152] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:56.407 [2024-04-25 03:28:30.749398] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:56.407 [2024-04-25 03:28:30.749422] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:56.407 [2024-04-25 03:28:30.749438] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:56.407 [2024-04-25 03:28:30.752980] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:56.407 [2024-04-25 03:28:30.761745] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:56.407 [2024-04-25 03:28:30.762224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.407 [2024-04-25 03:28:30.762675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.407 [2024-04-25 03:28:30.762704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:56.407 [2024-04-25 03:28:30.762722] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:56.407 [2024-04-25 03:28:30.762958] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:56.407 [2024-04-25 03:28:30.763198] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:56.407 [2024-04-25 03:28:30.763222] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:56.407 [2024-04-25 03:28:30.763238] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:56.407 [2024-04-25 03:28:30.766778] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:56.407 [2024-04-25 03:28:30.775555] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:56.407 [2024-04-25 03:28:30.776037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.407 [2024-04-25 03:28:30.776282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.407 [2024-04-25 03:28:30.776349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:56.407 [2024-04-25 03:28:30.776367] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:56.407 [2024-04-25 03:28:30.776604] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:56.407 [2024-04-25 03:28:30.776855] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:56.407 [2024-04-25 03:28:30.776881] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:56.407 [2024-04-25 03:28:30.776896] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:56.407 [2024-04-25 03:28:30.780430] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:56.407 [2024-04-25 03:28:30.789407] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:56.407 [2024-04-25 03:28:30.789890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.407 [2024-04-25 03:28:30.790338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.408 [2024-04-25 03:28:30.790388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:56.408 [2024-04-25 03:28:30.790405] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:56.408 [2024-04-25 03:28:30.790652] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:56.408 [2024-04-25 03:28:30.790893] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:56.408 [2024-04-25 03:28:30.790922] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:56.408 [2024-04-25 03:28:30.790939] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:56.408 [2024-04-25 03:28:30.794473] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:56.408 [2024-04-25 03:28:30.803274] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:56.408 [2024-04-25 03:28:30.803746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.408 [2024-04-25 03:28:30.803998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.408 [2024-04-25 03:28:30.804026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:56.408 [2024-04-25 03:28:30.804044] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:56.408 [2024-04-25 03:28:30.804280] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:56.408 [2024-04-25 03:28:30.804521] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:56.408 [2024-04-25 03:28:30.804545] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:56.408 [2024-04-25 03:28:30.804561] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:56.408 [2024-04-25 03:28:30.808104] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:56.408 [2024-04-25 03:28:30.817081] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:56.408 [2024-04-25 03:28:30.817564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.408 [2024-04-25 03:28:30.817824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.408 [2024-04-25 03:28:30.817853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:56.408 [2024-04-25 03:28:30.817872] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:56.408 [2024-04-25 03:28:30.818108] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:56.408 [2024-04-25 03:28:30.818349] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:56.408 [2024-04-25 03:28:30.818373] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:56.408 [2024-04-25 03:28:30.818388] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:56.408 [2024-04-25 03:28:30.821930] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:56.408 [2024-04-25 03:28:30.830921] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:56.408 [2024-04-25 03:28:30.831561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.408 [2024-04-25 03:28:30.831825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.408 [2024-04-25 03:28:30.831854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:56.408 [2024-04-25 03:28:30.831872] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:56.408 [2024-04-25 03:28:30.832108] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:56.408 [2024-04-25 03:28:30.832348] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:56.408 [2024-04-25 03:28:30.832372] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:56.408 [2024-04-25 03:28:30.832394] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:56.408 [2024-04-25 03:28:30.835947] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:56.408 [2024-04-25 03:28:30.844753] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:56.408 [2024-04-25 03:28:30.845356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.408 [2024-04-25 03:28:30.845642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.408 [2024-04-25 03:28:30.845672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:56.408 [2024-04-25 03:28:30.845691] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:56.408 [2024-04-25 03:28:30.845928] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:56.408 [2024-04-25 03:28:30.846179] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:56.408 [2024-04-25 03:28:30.846202] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:56.408 [2024-04-25 03:28:30.846218] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:56.408 [2024-04-25 03:28:30.849765] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:56.408 [2024-04-25 03:28:30.858772] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:56.408 [2024-04-25 03:28:30.859336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.408 [2024-04-25 03:28:30.859737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.408 [2024-04-25 03:28:30.859767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:56.408 [2024-04-25 03:28:30.859784] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:56.408 [2024-04-25 03:28:30.860021] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:56.408 [2024-04-25 03:28:30.860261] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:56.408 [2024-04-25 03:28:30.860284] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:56.408 [2024-04-25 03:28:30.860300] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:56.408 [2024-04-25 03:28:30.863846] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:56.408 [2024-04-25 03:28:30.872619] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:56.408 [2024-04-25 03:28:30.873111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.408 [2024-04-25 03:28:30.873367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.408 [2024-04-25 03:28:30.873415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:56.408 [2024-04-25 03:28:30.873433] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:56.408 [2024-04-25 03:28:30.873681] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:56.408 [2024-04-25 03:28:30.873921] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:56.408 [2024-04-25 03:28:30.873954] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:56.408 [2024-04-25 03:28:30.873969] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:56.408 [2024-04-25 03:28:30.877511] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:56.408 [2024-04-25 03:28:30.886501] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:56.408 [2024-04-25 03:28:30.886961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.408 [2024-04-25 03:28:30.887217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.408 [2024-04-25 03:28:30.887246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:56.408 [2024-04-25 03:28:30.887264] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:56.408 [2024-04-25 03:28:30.887500] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:56.408 [2024-04-25 03:28:30.887750] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:56.408 [2024-04-25 03:28:30.887775] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:56.408 [2024-04-25 03:28:30.887791] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:56.408 [2024-04-25 03:28:30.891333] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:56.408 [2024-04-25 03:28:30.900316] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:56.409 [2024-04-25 03:28:30.900791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.409 [2024-04-25 03:28:30.901010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.409 [2024-04-25 03:28:30.901039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:56.409 [2024-04-25 03:28:30.901057] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:56.409 [2024-04-25 03:28:30.901293] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:56.409 [2024-04-25 03:28:30.901533] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:56.409 [2024-04-25 03:28:30.901568] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:56.409 [2024-04-25 03:28:30.901584] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:56.409 [2024-04-25 03:28:30.905139] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:56.668 [2024-04-25 03:28:30.914126] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:56.668 [2024-04-25 03:28:30.914605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.668 [2024-04-25 03:28:30.914820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.668 [2024-04-25 03:28:30.914849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:56.668 [2024-04-25 03:28:30.914867] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:56.668 [2024-04-25 03:28:30.915104] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:56.668 [2024-04-25 03:28:30.915344] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:56.668 [2024-04-25 03:28:30.915367] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:56.668 [2024-04-25 03:28:30.915383] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:56.668 [2024-04-25 03:28:30.918929] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:56.668 [2024-04-25 03:28:30.928126] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:56.668 [2024-04-25 03:28:30.928601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.668 [2024-04-25 03:28:30.928824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.669 [2024-04-25 03:28:30.928853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:56.669 [2024-04-25 03:28:30.928871] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:56.669 [2024-04-25 03:28:30.929107] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:56.669 [2024-04-25 03:28:30.929347] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:56.669 [2024-04-25 03:28:30.929370] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:56.669 [2024-04-25 03:28:30.929386] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:56.669 [2024-04-25 03:28:30.932932] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:56.669 [2024-04-25 03:28:30.942126] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:56.669 [2024-04-25 03:28:30.942711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.669 [2024-04-25 03:28:30.942907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.669 [2024-04-25 03:28:30.942943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:56.669 [2024-04-25 03:28:30.942961] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:56.669 [2024-04-25 03:28:30.943197] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:56.669 [2024-04-25 03:28:30.943437] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:56.669 [2024-04-25 03:28:30.943461] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:56.669 [2024-04-25 03:28:30.943477] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:56.669 [2024-04-25 03:28:30.947018] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:56.669 [2024-04-25 03:28:30.956001] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:56.669 [2024-04-25 03:28:30.956555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.669 [2024-04-25 03:28:30.956806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.669 [2024-04-25 03:28:30.956836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:56.669 [2024-04-25 03:28:30.956854] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:56.669 [2024-04-25 03:28:30.957090] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:56.669 [2024-04-25 03:28:30.957341] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:56.669 [2024-04-25 03:28:30.957365] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:56.669 [2024-04-25 03:28:30.957380] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:56.669 [2024-04-25 03:28:30.960921] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:56.669 [2024-04-25 03:28:30.969903] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:56.669 [2024-04-25 03:28:30.970435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.669 [2024-04-25 03:28:30.970708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.669 [2024-04-25 03:28:30.970738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:56.669 [2024-04-25 03:28:30.970756] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:56.669 [2024-04-25 03:28:30.970993] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:56.669 [2024-04-25 03:28:30.971233] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:56.669 [2024-04-25 03:28:30.971257] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:56.669 [2024-04-25 03:28:30.971272] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:56.669 [2024-04-25 03:28:30.974820] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:56.669 [2024-04-25 03:28:30.983807] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:56.669 [2024-04-25 03:28:30.984283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.669 [2024-04-25 03:28:30.984524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.669 [2024-04-25 03:28:30.984553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:56.669 [2024-04-25 03:28:30.984571] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:56.669 [2024-04-25 03:28:30.984817] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:56.669 [2024-04-25 03:28:30.985059] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:56.669 [2024-04-25 03:28:30.985082] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:56.669 [2024-04-25 03:28:30.985098] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:56.669 [2024-04-25 03:28:30.988636] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:56.669 [2024-04-25 03:28:30.997602] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:56.669 [2024-04-25 03:28:30.998063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.669 [2024-04-25 03:28:30.998322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.669 [2024-04-25 03:28:30.998367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:56.669 [2024-04-25 03:28:30.998385] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:56.669 [2024-04-25 03:28:30.998621] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:56.669 [2024-04-25 03:28:30.998871] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:56.669 [2024-04-25 03:28:30.998895] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:56.669 [2024-04-25 03:28:30.998922] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:56.669 [2024-04-25 03:28:31.002455] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:56.669 [2024-04-25 03:28:31.011442] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:56.669 [2024-04-25 03:28:31.011924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.669 [2024-04-25 03:28:31.012209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.669 [2024-04-25 03:28:31.012261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:56.669 [2024-04-25 03:28:31.012279] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:56.669 [2024-04-25 03:28:31.012515] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:56.669 [2024-04-25 03:28:31.012768] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:56.669 [2024-04-25 03:28:31.012792] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:56.669 [2024-04-25 03:28:31.012808] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:56.669 [2024-04-25 03:28:31.016344] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:56.669 [2024-04-25 03:28:31.025312] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:56.669 [2024-04-25 03:28:31.025788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.669 [2024-04-25 03:28:31.026087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.669 [2024-04-25 03:28:31.026122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:56.669 [2024-04-25 03:28:31.026156] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:56.669 [2024-04-25 03:28:31.026393] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:56.669 [2024-04-25 03:28:31.026644] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:56.669 [2024-04-25 03:28:31.026669] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:56.669 [2024-04-25 03:28:31.026684] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:56.669 [2024-04-25 03:28:31.030219] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:56.669 [2024-04-25 03:28:31.039197] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:56.669 [2024-04-25 03:28:31.039717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.669 [2024-04-25 03:28:31.039941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.670 [2024-04-25 03:28:31.039970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:56.670 [2024-04-25 03:28:31.039988] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:56.670 [2024-04-25 03:28:31.040225] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:56.670 [2024-04-25 03:28:31.040465] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:56.670 [2024-04-25 03:28:31.040488] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:56.670 [2024-04-25 03:28:31.040504] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:56.670 [2024-04-25 03:28:31.044044] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:56.670 [2024-04-25 03:28:31.053036] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:56.670 [2024-04-25 03:28:31.053523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.670 [2024-04-25 03:28:31.053747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.670 [2024-04-25 03:28:31.053778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:56.670 [2024-04-25 03:28:31.053802] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:56.670 [2024-04-25 03:28:31.054040] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:56.670 [2024-04-25 03:28:31.054280] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:56.670 [2024-04-25 03:28:31.054304] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:56.670 [2024-04-25 03:28:31.054320] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:56.670 [2024-04-25 03:28:31.057862] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:56.670 [2024-04-25 03:28:31.066837] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:56.670 [2024-04-25 03:28:31.067308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.670 [2024-04-25 03:28:31.067604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.670 [2024-04-25 03:28:31.067667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:56.670 [2024-04-25 03:28:31.067688] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:56.670 [2024-04-25 03:28:31.067925] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:56.670 [2024-04-25 03:28:31.068166] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:56.670 [2024-04-25 03:28:31.068190] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:56.670 [2024-04-25 03:28:31.068205] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:56.670 [2024-04-25 03:28:31.071743] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:56.670 [2024-04-25 03:28:31.080713] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:56.670 [2024-04-25 03:28:31.081214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.670 [2024-04-25 03:28:31.081413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.670 [2024-04-25 03:28:31.081444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:56.670 [2024-04-25 03:28:31.081463] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:56.670 [2024-04-25 03:28:31.081711] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:56.670 [2024-04-25 03:28:31.081952] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:56.670 [2024-04-25 03:28:31.081976] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:56.670 [2024-04-25 03:28:31.081992] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:56.670 [2024-04-25 03:28:31.085532] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:56.670 [2024-04-25 03:28:31.094715] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:56.670 [2024-04-25 03:28:31.095170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.670 [2024-04-25 03:28:31.095427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.670 [2024-04-25 03:28:31.095478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:56.670 [2024-04-25 03:28:31.095497] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:56.670 [2024-04-25 03:28:31.095755] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:56.670 [2024-04-25 03:28:31.095997] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:56.670 [2024-04-25 03:28:31.096021] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:56.670 [2024-04-25 03:28:31.096037] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:56.670 [2024-04-25 03:28:31.099570] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:56.670 [2024-04-25 03:28:31.108550] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:56.670 [2024-04-25 03:28:31.109032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.670 [2024-04-25 03:28:31.109289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.670 [2024-04-25 03:28:31.109337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:56.670 [2024-04-25 03:28:31.109355] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:56.670 [2024-04-25 03:28:31.109591] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:56.670 [2024-04-25 03:28:31.109842] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:56.670 [2024-04-25 03:28:31.109867] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:56.670 [2024-04-25 03:28:31.109883] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:56.670 [2024-04-25 03:28:31.113418] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:56.670 [2024-04-25 03:28:31.122384] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:56.670 [2024-04-25 03:28:31.122865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.670 [2024-04-25 03:28:31.123155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.670 [2024-04-25 03:28:31.123208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:56.670 [2024-04-25 03:28:31.123227] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:56.670 [2024-04-25 03:28:31.123463] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:56.670 [2024-04-25 03:28:31.123715] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:56.670 [2024-04-25 03:28:31.123739] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:56.670 [2024-04-25 03:28:31.123755] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:56.670 [2024-04-25 03:28:31.127290] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:56.670 [2024-04-25 03:28:31.136256] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:56.670 [2024-04-25 03:28:31.136714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.670 [2024-04-25 03:28:31.136933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.670 [2024-04-25 03:28:31.136961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:56.670 [2024-04-25 03:28:31.136979] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:56.670 [2024-04-25 03:28:31.137215] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:56.670 [2024-04-25 03:28:31.137461] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:56.671 [2024-04-25 03:28:31.137485] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:56.671 [2024-04-25 03:28:31.137501] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:56.671 [2024-04-25 03:28:31.141044] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:56.671 [2024-04-25 03:28:31.150228] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:56.671 [2024-04-25 03:28:31.150699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.671 [2024-04-25 03:28:31.150954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.671 [2024-04-25 03:28:31.151000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:56.671 [2024-04-25 03:28:31.151018] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:56.671 [2024-04-25 03:28:31.151254] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:56.671 [2024-04-25 03:28:31.151494] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:56.671 [2024-04-25 03:28:31.151518] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:56.671 [2024-04-25 03:28:31.151533] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:56.671 [2024-04-25 03:28:31.155074] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:56.671 [2024-04-25 03:28:31.164043] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:56.671 [2024-04-25 03:28:31.164505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.671 [2024-04-25 03:28:31.164755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.671 [2024-04-25 03:28:31.164785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:56.671 [2024-04-25 03:28:31.164803] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:56.671 [2024-04-25 03:28:31.165041] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:56.671 [2024-04-25 03:28:31.165281] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:56.671 [2024-04-25 03:28:31.165305] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:56.671 [2024-04-25 03:28:31.165321] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:56.931 [2024-04-25 03:28:31.168862] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:56.931 [2024-04-25 03:28:31.177845] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:56.931 [2024-04-25 03:28:31.178327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.931 [2024-04-25 03:28:31.178576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.931 [2024-04-25 03:28:31.178605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:56.932 [2024-04-25 03:28:31.178623] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:56.932 [2024-04-25 03:28:31.178871] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:56.932 [2024-04-25 03:28:31.179111] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:56.932 [2024-04-25 03:28:31.179140] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:56.932 [2024-04-25 03:28:31.179156] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:56.932 [2024-04-25 03:28:31.182697] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:56.932 [2024-04-25 03:28:31.191684] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:56.932 [2024-04-25 03:28:31.192158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.932 [2024-04-25 03:28:31.192452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.932 [2024-04-25 03:28:31.192498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:56.932 [2024-04-25 03:28:31.192516] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:56.932 [2024-04-25 03:28:31.192765] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:56.932 [2024-04-25 03:28:31.193005] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:56.932 [2024-04-25 03:28:31.193029] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:56.932 [2024-04-25 03:28:31.193044] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:56.932 [2024-04-25 03:28:31.196579] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:56.932 [2024-04-25 03:28:31.205556] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:56.932 [2024-04-25 03:28:31.206015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.932 [2024-04-25 03:28:31.206334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.932 [2024-04-25 03:28:31.206380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:56.932 [2024-04-25 03:28:31.206398] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:56.932 [2024-04-25 03:28:31.206646] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:56.932 [2024-04-25 03:28:31.206887] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:56.932 [2024-04-25 03:28:31.206910] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:56.932 [2024-04-25 03:28:31.206926] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:56.932 [2024-04-25 03:28:31.210463] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:56.932 [2024-04-25 03:28:31.219438] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:56.932 [2024-04-25 03:28:31.219919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.932 [2024-04-25 03:28:31.220174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.932 [2024-04-25 03:28:31.220203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:56.932 [2024-04-25 03:28:31.220221] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:56.932 [2024-04-25 03:28:31.220457] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:56.932 [2024-04-25 03:28:31.220709] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:56.932 [2024-04-25 03:28:31.220733] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:56.932 [2024-04-25 03:28:31.220754] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:56.932 [2024-04-25 03:28:31.224301] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:56.932 [2024-04-25 03:28:31.233279] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:56.932 [2024-04-25 03:28:31.233765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.932 [2024-04-25 03:28:31.233961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.932 [2024-04-25 03:28:31.233992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:56.932 [2024-04-25 03:28:31.234010] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:56.932 [2024-04-25 03:28:31.234246] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:56.932 [2024-04-25 03:28:31.234486] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:56.932 [2024-04-25 03:28:31.234510] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:56.932 [2024-04-25 03:28:31.234525] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:56.932 [2024-04-25 03:28:31.238069] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:56.932 [2024-04-25 03:28:31.247247] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:56.932 [2024-04-25 03:28:31.247696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.932 [2024-04-25 03:28:31.247933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.932 [2024-04-25 03:28:31.247978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:56.932 [2024-04-25 03:28:31.247996] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:56.932 [2024-04-25 03:28:31.248232] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:56.932 [2024-04-25 03:28:31.248472] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:56.932 [2024-04-25 03:28:31.248496] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:56.932 [2024-04-25 03:28:31.248511] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:56.932 [2024-04-25 03:28:31.252067] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:56.932 [2024-04-25 03:28:31.261056] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:56.932 [2024-04-25 03:28:31.261541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.932 [2024-04-25 03:28:31.261761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.932 [2024-04-25 03:28:31.261792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420
00:27:56.932 [2024-04-25 03:28:31.261810] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set
00:27:56.932 [2024-04-25 03:28:31.262047] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor
00:27:56.932 [2024-04-25 03:28:31.262288] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:56.932 [2024-04-25 03:28:31.262314] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:56.932 [2024-04-25 03:28:31.262330] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:56.932 [2024-04-25 03:28:31.265873] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:56.932 [2024-04-25 03:28:31.274886] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:56.932 [2024-04-25 03:28:31.275558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.932 [2024-04-25 03:28:31.275832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.932 [2024-04-25 03:28:31.275862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420
00:27:56.932 [2024-04-25 03:28:31.275880] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set
00:27:56.932 [2024-04-25 03:28:31.276117] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor
00:27:56.932 [2024-04-25 03:28:31.276358] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:56.932 [2024-04-25 03:28:31.276382] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:56.932 [2024-04-25 03:28:31.276398] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:56.932 [2024-04-25 03:28:31.279946] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:56.932 [2024-04-25 03:28:31.288724] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:56.932 [2024-04-25 03:28:31.289211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.932 [2024-04-25 03:28:31.289596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.932 [2024-04-25 03:28:31.289661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420
00:27:56.932 [2024-04-25 03:28:31.289680] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set
00:27:56.932 [2024-04-25 03:28:31.289916] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor
00:27:56.932 [2024-04-25 03:28:31.290156] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:56.932 [2024-04-25 03:28:31.290181] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:56.932 [2024-04-25 03:28:31.290197] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:56.932 [2024-04-25 03:28:31.293742] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:56.932 [2024-04-25 03:28:31.302731] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:56.932 [2024-04-25 03:28:31.303183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.932 [2024-04-25 03:28:31.303376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.932 [2024-04-25 03:28:31.303404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420
00:27:56.932 [2024-04-25 03:28:31.303422] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set
00:27:56.932 [2024-04-25 03:28:31.303672] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor
00:27:56.932 [2024-04-25 03:28:31.303912] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:56.933 [2024-04-25 03:28:31.303937] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:56.933 [2024-04-25 03:28:31.303953] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:56.933 [2024-04-25 03:28:31.307491] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:56.933 [2024-04-25 03:28:31.316691] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:56.933 [2024-04-25 03:28:31.317152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.933 [2024-04-25 03:28:31.317488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.933 [2024-04-25 03:28:31.317543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420
00:27:56.933 [2024-04-25 03:28:31.317561] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set
00:27:56.933 [2024-04-25 03:28:31.317811] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor
00:27:56.933 [2024-04-25 03:28:31.318052] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:56.933 [2024-04-25 03:28:31.318076] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:56.933 [2024-04-25 03:28:31.318093] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:56.933 [2024-04-25 03:28:31.321640] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:56.933 [2024-04-25 03:28:31.330619] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:56.933 [2024-04-25 03:28:31.331103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.933 [2024-04-25 03:28:31.331359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.933 [2024-04-25 03:28:31.331407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420
00:27:56.933 [2024-04-25 03:28:31.331426] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set
00:27:56.933 [2024-04-25 03:28:31.331677] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor
00:27:56.933 [2024-04-25 03:28:31.331920] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:56.933 [2024-04-25 03:28:31.331946] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:56.933 [2024-04-25 03:28:31.331962] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:56.933 [2024-04-25 03:28:31.335500] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:56.933 [2024-04-25 03:28:31.344485] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:56.933 [2024-04-25 03:28:31.344953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.933 [2024-04-25 03:28:31.345300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.933 [2024-04-25 03:28:31.345348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420
00:27:56.933 [2024-04-25 03:28:31.345367] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set
00:27:56.933 [2024-04-25 03:28:31.345603] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor
00:27:56.933 [2024-04-25 03:28:31.345856] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:56.933 [2024-04-25 03:28:31.345882] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:56.933 [2024-04-25 03:28:31.345898] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:56.933 [2024-04-25 03:28:31.349435] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:56.933 [2024-04-25 03:28:31.358420] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:56.933 [2024-04-25 03:28:31.358914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.933 [2024-04-25 03:28:31.359214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.933 [2024-04-25 03:28:31.359263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420
00:27:56.933 [2024-04-25 03:28:31.359282] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set
00:27:56.933 [2024-04-25 03:28:31.359520] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor
00:27:56.933 [2024-04-25 03:28:31.359774] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:56.933 [2024-04-25 03:28:31.359799] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:56.933 [2024-04-25 03:28:31.359815] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:56.933 [2024-04-25 03:28:31.363356] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:56.933 [2024-04-25 03:28:31.372327] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:56.933 [2024-04-25 03:28:31.372806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.933 [2024-04-25 03:28:31.373049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.933 [2024-04-25 03:28:31.373078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420
00:27:56.933 [2024-04-25 03:28:31.373097] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set
00:27:56.933 [2024-04-25 03:28:31.373334] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor
00:27:56.933 [2024-04-25 03:28:31.373575] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:56.933 [2024-04-25 03:28:31.373599] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:56.933 [2024-04-25 03:28:31.373614] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:56.933 [2024-04-25 03:28:31.377165] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:56.933 [2024-04-25 03:28:31.386155] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:56.933 [2024-04-25 03:28:31.386634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.933 [2024-04-25 03:28:31.386859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.933 [2024-04-25 03:28:31.386891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420
00:27:56.933 [2024-04-25 03:28:31.386910] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set
00:27:56.933 [2024-04-25 03:28:31.387147] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor
00:27:56.933 [2024-04-25 03:28:31.387388] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:56.933 [2024-04-25 03:28:31.387413] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:56.933 [2024-04-25 03:28:31.387429] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:56.933 [2024-04-25 03:28:31.390982] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:56.933 [2024-04-25 03:28:31.399984] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:56.933 [2024-04-25 03:28:31.400675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.933 [2024-04-25 03:28:31.400916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.933 [2024-04-25 03:28:31.400970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420
00:27:56.933 [2024-04-25 03:28:31.400994] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set
00:27:56.933 [2024-04-25 03:28:31.401232] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor
00:27:56.933 [2024-04-25 03:28:31.401473] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:56.933 [2024-04-25 03:28:31.401498] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:56.933 [2024-04-25 03:28:31.401514] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:56.933 [2024-04-25 03:28:31.405066] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:56.933 [2024-04-25 03:28:31.413839] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:56.933 [2024-04-25 03:28:31.414317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.933 [2024-04-25 03:28:31.414573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.933 [2024-04-25 03:28:31.414603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420
00:27:56.933 [2024-04-25 03:28:31.414621] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set
00:27:56.933 [2024-04-25 03:28:31.414871] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor
00:27:56.933 [2024-04-25 03:28:31.415111] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:56.933 [2024-04-25 03:28:31.415136] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:56.933 [2024-04-25 03:28:31.415152] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:56.933 [2024-04-25 03:28:31.418700] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:56.933 [2024-04-25 03:28:31.427688] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:56.933 [2024-04-25 03:28:31.428135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.933 [2024-04-25 03:28:31.428382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.933 [2024-04-25 03:28:31.428411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420
00:27:56.933 [2024-04-25 03:28:31.428429] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set
00:27:56.933 [2024-04-25 03:28:31.428679] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor
00:27:56.933 [2024-04-25 03:28:31.428921] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:56.933 [2024-04-25 03:28:31.428945] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:56.933 [2024-04-25 03:28:31.428961] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:57.194 [2024-04-25 03:28:31.432500] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:57.194 [2024-04-25 03:28:31.441489] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:57.194 [2024-04-25 03:28:31.441975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.194 [2024-04-25 03:28:31.442287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.194 [2024-04-25 03:28:31.442337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420
00:27:57.194 [2024-04-25 03:28:31.442356] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set
00:27:57.194 [2024-04-25 03:28:31.442598] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor
00:27:57.194 [2024-04-25 03:28:31.442852] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:57.194 [2024-04-25 03:28:31.442877] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:57.194 [2024-04-25 03:28:31.442893] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:57.194 [2024-04-25 03:28:31.446429] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:57.194 [2024-04-25 03:28:31.455413] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:57.194 [2024-04-25 03:28:31.455898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.194 [2024-04-25 03:28:31.456217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.194 [2024-04-25 03:28:31.456265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420
00:27:57.194 [2024-04-25 03:28:31.456283] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set
00:27:57.194 [2024-04-25 03:28:31.456519] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor
00:27:57.194 [2024-04-25 03:28:31.456773] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:57.194 [2024-04-25 03:28:31.456798] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:57.194 [2024-04-25 03:28:31.456814] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:57.194 [2024-04-25 03:28:31.460352] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:57.194 [2024-04-25 03:28:31.469333] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:57.194 [2024-04-25 03:28:31.469781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.194 [2024-04-25 03:28:31.470138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.194 [2024-04-25 03:28:31.470187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420
00:27:57.194 [2024-04-25 03:28:31.470204] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set
00:27:57.194 [2024-04-25 03:28:31.470441] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor
00:27:57.194 [2024-04-25 03:28:31.470695] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:57.194 [2024-04-25 03:28:31.470721] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:57.194 [2024-04-25 03:28:31.470737] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:57.194 [2024-04-25 03:28:31.474275] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:57.194 [2024-04-25 03:28:31.483262] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:57.195 [2024-04-25 03:28:31.483739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.195 [2024-04-25 03:28:31.483966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.195 [2024-04-25 03:28:31.483995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420
00:27:57.195 [2024-04-25 03:28:31.484014] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set
00:27:57.195 [2024-04-25 03:28:31.484251] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor
00:27:57.195 [2024-04-25 03:28:31.484499] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:57.195 [2024-04-25 03:28:31.484524] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:57.195 [2024-04-25 03:28:31.484541] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:57.195 [2024-04-25 03:28:31.488085] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:57.195 [2024-04-25 03:28:31.497068] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:57.195 [2024-04-25 03:28:31.497624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.195 [2024-04-25 03:28:31.497876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.195 [2024-04-25 03:28:31.497906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420
00:27:57.195 [2024-04-25 03:28:31.497924] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set
00:27:57.195 [2024-04-25 03:28:31.498160] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor
00:27:57.195 [2024-04-25 03:28:31.498402] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:57.195 [2024-04-25 03:28:31.498427] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:57.195 [2024-04-25 03:28:31.498443] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:57.195 [2024-04-25 03:28:31.501993] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:57.195 [2024-04-25 03:28:31.511159] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:57.195 [2024-04-25 03:28:31.511609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.195 [2024-04-25 03:28:31.511876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.195 [2024-04-25 03:28:31.511906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420
00:27:57.195 [2024-04-25 03:28:31.511924] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set
00:27:57.195 [2024-04-25 03:28:31.512162] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor
00:27:57.195 [2024-04-25 03:28:31.512403] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:57.195 [2024-04-25 03:28:31.512428] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:57.195 [2024-04-25 03:28:31.512444] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:57.195 [2024-04-25 03:28:31.515994] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:57.195 [2024-04-25 03:28:31.524985] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:57.195 [2024-04-25 03:28:31.525470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.195 [2024-04-25 03:28:31.525693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.195 [2024-04-25 03:28:31.525721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420
00:27:57.195 [2024-04-25 03:28:31.525738] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set
00:27:57.195 [2024-04-25 03:28:31.525975] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor
00:27:57.195 [2024-04-25 03:28:31.526227] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:57.195 [2024-04-25 03:28:31.526257] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:57.195 [2024-04-25 03:28:31.526275] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:57.195 [2024-04-25 03:28:31.529821] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:57.195 [2024-04-25 03:28:31.538816] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:57.195 [2024-04-25 03:28:31.539291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.195 [2024-04-25 03:28:31.539552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.195 [2024-04-25 03:28:31.539581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420
00:27:57.195 [2024-04-25 03:28:31.539599] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set
00:27:57.195 [2024-04-25 03:28:31.539845] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor
00:27:57.195 [2024-04-25 03:28:31.540086] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:57.195 [2024-04-25 03:28:31.540110] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:57.195 [2024-04-25 03:28:31.540126] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:57.195 [2024-04-25 03:28:31.543669] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:57.195 [2024-04-25 03:28:31.552646] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:57.195 [2024-04-25 03:28:31.553096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.195 [2024-04-25 03:28:31.553344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.195 [2024-04-25 03:28:31.553374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420
00:27:57.195 [2024-04-25 03:28:31.553392] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set
00:27:57.195 [2024-04-25 03:28:31.553636] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor
00:27:57.195 [2024-04-25 03:28:31.553889] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:57.195 [2024-04-25 03:28:31.553913] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:57.195 [2024-04-25 03:28:31.553929] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:57.195 [2024-04-25 03:28:31.557467] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:57.195 [2024-04-25 03:28:31.566449] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:57.195 [2024-04-25 03:28:31.566942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.195 [2024-04-25 03:28:31.567186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.195 [2024-04-25 03:28:31.567215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420
00:27:57.195 [2024-04-25 03:28:31.567233] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set
00:27:57.195 [2024-04-25 03:28:31.567470] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor
00:27:57.195 [2024-04-25 03:28:31.567721] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:57.195 [2024-04-25 03:28:31.567745] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:57.195 [2024-04-25 03:28:31.567767] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:57.195 [2024-04-25 03:28:31.571302] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:57.195 [2024-04-25 03:28:31.580290] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:57.195 [2024-04-25 03:28:31.580766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.195 [2024-04-25 03:28:31.581054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.195 [2024-04-25 03:28:31.581083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:57.195 [2024-04-25 03:28:31.581101] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:57.195 [2024-04-25 03:28:31.581338] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:57.195 [2024-04-25 03:28:31.581578] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:57.195 [2024-04-25 03:28:31.581602] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:57.195 [2024-04-25 03:28:31.581618] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:57.195 [2024-04-25 03:28:31.585160] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:57.195 [2024-04-25 03:28:31.594180] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:57.195 [2024-04-25 03:28:31.594666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.195 [2024-04-25 03:28:31.594899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.195 [2024-04-25 03:28:31.594930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:57.195 [2024-04-25 03:28:31.594948] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:57.195 [2024-04-25 03:28:31.595189] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:57.195 [2024-04-25 03:28:31.595430] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:57.195 [2024-04-25 03:28:31.595455] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:57.195 [2024-04-25 03:28:31.595471] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:57.195 [2024-04-25 03:28:31.599023] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:57.195 [2024-04-25 03:28:31.608015] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:57.195 [2024-04-25 03:28:31.608471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.195 [2024-04-25 03:28:31.608693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.195 [2024-04-25 03:28:31.608723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:57.195 [2024-04-25 03:28:31.608741] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:57.195 [2024-04-25 03:28:31.608979] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:57.196 [2024-04-25 03:28:31.609219] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:57.196 [2024-04-25 03:28:31.609244] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:57.196 [2024-04-25 03:28:31.609260] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:57.196 [2024-04-25 03:28:31.612811] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:57.196 [2024-04-25 03:28:31.621998] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:57.196 [2024-04-25 03:28:31.622456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.196 [2024-04-25 03:28:31.622682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.196 [2024-04-25 03:28:31.622712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:57.196 [2024-04-25 03:28:31.622730] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:57.196 [2024-04-25 03:28:31.622967] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:57.196 [2024-04-25 03:28:31.623209] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:57.196 [2024-04-25 03:28:31.623234] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:57.196 [2024-04-25 03:28:31.623250] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:57.196 [2024-04-25 03:28:31.626797] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:57.196 [2024-04-25 03:28:31.635841] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:57.196 [2024-04-25 03:28:31.636298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.196 [2024-04-25 03:28:31.636512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.196 [2024-04-25 03:28:31.636541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:57.196 [2024-04-25 03:28:31.636560] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:57.196 [2024-04-25 03:28:31.636806] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:57.196 [2024-04-25 03:28:31.637048] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:57.196 [2024-04-25 03:28:31.637073] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:57.196 [2024-04-25 03:28:31.637090] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:57.196 [2024-04-25 03:28:31.640623] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:57.196 [2024-04-25 03:28:31.649809] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:57.196 [2024-04-25 03:28:31.650330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.196 [2024-04-25 03:28:31.650585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.196 [2024-04-25 03:28:31.650615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:57.196 [2024-04-25 03:28:31.650643] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:57.196 [2024-04-25 03:28:31.650882] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:57.196 [2024-04-25 03:28:31.651124] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:57.196 [2024-04-25 03:28:31.651149] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:57.196 [2024-04-25 03:28:31.651164] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:57.196 [2024-04-25 03:28:31.654707] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:57.196 [2024-04-25 03:28:31.663702] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:57.196 [2024-04-25 03:28:31.664184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.196 [2024-04-25 03:28:31.664420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.196 [2024-04-25 03:28:31.664448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:57.196 [2024-04-25 03:28:31.664467] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:57.196 [2024-04-25 03:28:31.664717] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:57.196 [2024-04-25 03:28:31.664958] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:57.196 [2024-04-25 03:28:31.664983] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:57.196 [2024-04-25 03:28:31.664999] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:57.196 [2024-04-25 03:28:31.668556] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:57.196 [2024-04-25 03:28:31.677544] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:57.196 [2024-04-25 03:28:31.678029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.196 [2024-04-25 03:28:31.678282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.196 [2024-04-25 03:28:31.678311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:57.196 [2024-04-25 03:28:31.678329] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:57.196 [2024-04-25 03:28:31.678565] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:57.196 [2024-04-25 03:28:31.678819] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:57.196 [2024-04-25 03:28:31.678844] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:57.196 [2024-04-25 03:28:31.678860] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:57.196 [2024-04-25 03:28:31.682396] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:57.196 [2024-04-25 03:28:31.691387] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:57.196 [2024-04-25 03:28:31.691901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.196 [2024-04-25 03:28:31.692120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.196 [2024-04-25 03:28:31.692149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:57.196 [2024-04-25 03:28:31.692167] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:57.196 [2024-04-25 03:28:31.692405] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:57.196 [2024-04-25 03:28:31.692658] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:57.196 [2024-04-25 03:28:31.692684] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:57.196 [2024-04-25 03:28:31.692701] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:57.458 [2024-04-25 03:28:31.696237] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:57.458 [2024-04-25 03:28:31.705233] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:57.458 [2024-04-25 03:28:31.705784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.458 [2024-04-25 03:28:31.706007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.458 [2024-04-25 03:28:31.706035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:57.458 [2024-04-25 03:28:31.706053] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:57.458 [2024-04-25 03:28:31.706290] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:57.458 [2024-04-25 03:28:31.706532] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:57.458 [2024-04-25 03:28:31.706557] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:57.458 [2024-04-25 03:28:31.706573] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:57.458 [2024-04-25 03:28:31.710122] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:57.458 [2024-04-25 03:28:31.719106] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:57.458 [2024-04-25 03:28:31.719579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.458 [2024-04-25 03:28:31.719848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.458 [2024-04-25 03:28:31.719878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:57.458 [2024-04-25 03:28:31.719896] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:57.458 [2024-04-25 03:28:31.720134] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:57.458 [2024-04-25 03:28:31.720376] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:57.458 [2024-04-25 03:28:31.720400] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:57.458 [2024-04-25 03:28:31.720415] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:57.458 [2024-04-25 03:28:31.723960] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:57.458 [2024-04-25 03:28:31.732954] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:57.458 [2024-04-25 03:28:31.733386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.458 [2024-04-25 03:28:31.733642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.458 [2024-04-25 03:28:31.733673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:57.458 [2024-04-25 03:28:31.733692] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:57.458 [2024-04-25 03:28:31.733928] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:57.458 [2024-04-25 03:28:31.734170] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:57.458 [2024-04-25 03:28:31.734195] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:57.458 [2024-04-25 03:28:31.734210] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:57.458 [2024-04-25 03:28:31.737760] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:57.459 [2024-04-25 03:28:31.746953] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:57.459 [2024-04-25 03:28:31.747437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.459 [2024-04-25 03:28:31.747663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.459 [2024-04-25 03:28:31.747705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:57.459 [2024-04-25 03:28:31.747725] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:57.459 [2024-04-25 03:28:31.747961] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:57.459 [2024-04-25 03:28:31.748201] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:57.459 [2024-04-25 03:28:31.748225] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:57.459 [2024-04-25 03:28:31.748241] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:57.459 [2024-04-25 03:28:31.751786] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:57.459 [2024-04-25 03:28:31.760774] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:57.459 [2024-04-25 03:28:31.761257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.459 [2024-04-25 03:28:31.761457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.459 [2024-04-25 03:28:31.761484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:57.459 [2024-04-25 03:28:31.761501] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:57.459 [2024-04-25 03:28:31.761748] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:57.459 [2024-04-25 03:28:31.761990] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:57.459 [2024-04-25 03:28:31.762014] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:57.459 [2024-04-25 03:28:31.762030] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:57.459 [2024-04-25 03:28:31.765571] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:57.459 [2024-04-25 03:28:31.774769] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:57.459 [2024-04-25 03:28:31.775253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.459 [2024-04-25 03:28:31.775608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.459 [2024-04-25 03:28:31.775682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:57.459 [2024-04-25 03:28:31.775702] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:57.459 [2024-04-25 03:28:31.775939] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:57.459 [2024-04-25 03:28:31.776180] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:57.459 [2024-04-25 03:28:31.776206] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:57.459 [2024-04-25 03:28:31.776222] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:57.459 [2024-04-25 03:28:31.779768] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:57.459 [2024-04-25 03:28:31.788741] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:57.459 [2024-04-25 03:28:31.789218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.459 [2024-04-25 03:28:31.789504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.459 [2024-04-25 03:28:31.789555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:57.459 [2024-04-25 03:28:31.789579] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:57.459 [2024-04-25 03:28:31.789829] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:57.459 [2024-04-25 03:28:31.790070] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:57.459 [2024-04-25 03:28:31.790095] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:57.459 [2024-04-25 03:28:31.790111] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:57.459 [2024-04-25 03:28:31.793655] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:57.459 [2024-04-25 03:28:31.802644] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:57.459 [2024-04-25 03:28:31.803131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.459 [2024-04-25 03:28:31.803475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.459 [2024-04-25 03:28:31.803523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:57.459 [2024-04-25 03:28:31.803541] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:57.459 [2024-04-25 03:28:31.803792] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:57.459 [2024-04-25 03:28:31.804033] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:57.459 [2024-04-25 03:28:31.804058] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:57.459 [2024-04-25 03:28:31.804074] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:57.459 [2024-04-25 03:28:31.807614] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:57.459 [2024-04-25 03:28:31.816593] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:57.459 [2024-04-25 03:28:31.817050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.459 [2024-04-25 03:28:31.817375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.459 [2024-04-25 03:28:31.817427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:57.459 [2024-04-25 03:28:31.817446] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:57.459 [2024-04-25 03:28:31.817696] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:57.459 [2024-04-25 03:28:31.817939] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:57.459 [2024-04-25 03:28:31.817964] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:57.459 [2024-04-25 03:28:31.817980] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:57.459 [2024-04-25 03:28:31.821655] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:57.459 [2024-04-25 03:28:31.830431] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:57.459 [2024-04-25 03:28:31.830918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.459 [2024-04-25 03:28:31.831142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.459 [2024-04-25 03:28:31.831172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:57.459 [2024-04-25 03:28:31.831190] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:57.459 [2024-04-25 03:28:31.831433] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:57.459 [2024-04-25 03:28:31.831689] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:57.459 [2024-04-25 03:28:31.831714] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:57.459 [2024-04-25 03:28:31.831729] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:57.459 [2024-04-25 03:28:31.835266] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:57.459 [2024-04-25 03:28:31.844246] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:57.459 [2024-04-25 03:28:31.844792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.459 [2024-04-25 03:28:31.845012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.459 [2024-04-25 03:28:31.845042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:57.459 [2024-04-25 03:28:31.845060] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:57.459 [2024-04-25 03:28:31.845297] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:57.459 [2024-04-25 03:28:31.845538] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:57.459 [2024-04-25 03:28:31.845562] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:57.459 [2024-04-25 03:28:31.845578] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:57.459 [2024-04-25 03:28:31.849124] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:57.459 [2024-04-25 03:28:31.858109] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:57.459 [2024-04-25 03:28:31.858637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.459 [2024-04-25 03:28:31.858894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.459 [2024-04-25 03:28:31.858923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:57.459 [2024-04-25 03:28:31.858941] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:57.459 [2024-04-25 03:28:31.859179] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:57.459 [2024-04-25 03:28:31.859421] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:57.459 [2024-04-25 03:28:31.859447] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:57.459 [2024-04-25 03:28:31.859463] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:57.459 [2024-04-25 03:28:31.863026] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:57.459 [2024-04-25 03:28:31.872000] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:57.459 [2024-04-25 03:28:31.872479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.459 [2024-04-25 03:28:31.872766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.459 [2024-04-25 03:28:31.872797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420
00:27:57.460 [2024-04-25 03:28:31.872815] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set
00:27:57.460 [2024-04-25 03:28:31.873052] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor
00:27:57.460 [2024-04-25 03:28:31.873299] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:57.460 [2024-04-25 03:28:31.873325] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:57.460 [2024-04-25 03:28:31.873341] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:57.460 [2024-04-25 03:28:31.876886] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:57.460 [2024-04-25 03:28:31.885883] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:57.460 [2024-04-25 03:28:31.886435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.460 [2024-04-25 03:28:31.886706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.460 [2024-04-25 03:28:31.886736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420
00:27:57.460 [2024-04-25 03:28:31.886753] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set
00:27:57.460 [2024-04-25 03:28:31.886990] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor
00:27:57.460 [2024-04-25 03:28:31.887231] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:57.460 [2024-04-25 03:28:31.887255] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:57.460 [2024-04-25 03:28:31.887270] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:57.460 [2024-04-25 03:28:31.890812] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:57.460 [2024-04-25 03:28:31.899801] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:57.460 [2024-04-25 03:28:31.900314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.460 [2024-04-25 03:28:31.900512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.460 [2024-04-25 03:28:31.900540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420
00:27:57.460 [2024-04-25 03:28:31.900558] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set
00:27:57.460 [2024-04-25 03:28:31.900805] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor
00:27:57.460 [2024-04-25 03:28:31.901047] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:57.460 [2024-04-25 03:28:31.901072] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:57.460 [2024-04-25 03:28:31.901088] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:57.460 [2024-04-25 03:28:31.904643] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:57.460 [2024-04-25 03:28:31.913626] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:57.460 [2024-04-25 03:28:31.914189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.460 [2024-04-25 03:28:31.914432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.460 [2024-04-25 03:28:31.914461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420
00:27:57.460 [2024-04-25 03:28:31.914480] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set
00:27:57.460 [2024-04-25 03:28:31.914725] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor
00:27:57.460 [2024-04-25 03:28:31.914965] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:57.460 [2024-04-25 03:28:31.914995] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:57.460 [2024-04-25 03:28:31.915011] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:57.460 [2024-04-25 03:28:31.918551] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:57.460 [2024-04-25 03:28:31.927542] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:57.460 [2024-04-25 03:28:31.928004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.460 [2024-04-25 03:28:31.928224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.460 [2024-04-25 03:28:31.928253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420
00:27:57.460 [2024-04-25 03:28:31.928271] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set
00:27:57.460 [2024-04-25 03:28:31.928508] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor
00:27:57.460 [2024-04-25 03:28:31.928763] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:57.460 [2024-04-25 03:28:31.928789] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:57.460 [2024-04-25 03:28:31.928805] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:57.460 [2024-04-25 03:28:31.932346] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:57.460 [2024-04-25 03:28:31.941541] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:57.460 [2024-04-25 03:28:31.942041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.460 [2024-04-25 03:28:31.942295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.460 [2024-04-25 03:28:31.942324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420
00:27:57.460 [2024-04-25 03:28:31.942342] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set
00:27:57.460 [2024-04-25 03:28:31.942579] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor
00:27:57.460 [2024-04-25 03:28:31.942831] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:57.460 [2024-04-25 03:28:31.942855] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:57.460 [2024-04-25 03:28:31.942871] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:57.460 [2024-04-25 03:28:31.946408] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:57.460 [2024-04-25 03:28:31.955399] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:57.460 [2024-04-25 03:28:31.955836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.460 [2024-04-25 03:28:31.956092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.460 [2024-04-25 03:28:31.956121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420
00:27:57.460 [2024-04-25 03:28:31.956140] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set
00:27:57.460 [2024-04-25 03:28:31.956377] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor
00:27:57.460 [2024-04-25 03:28:31.956618] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:57.460 [2024-04-25 03:28:31.956655] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:57.460 [2024-04-25 03:28:31.956678] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:57.722 [2024-04-25 03:28:31.960220] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:57.722 [2024-04-25 03:28:31.969243] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:57.722 [2024-04-25 03:28:31.969679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.722 [2024-04-25 03:28:31.969896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.722 [2024-04-25 03:28:31.969950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420
00:27:57.722 [2024-04-25 03:28:31.969969] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set
00:27:57.722 [2024-04-25 03:28:31.970206] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor
00:27:57.722 [2024-04-25 03:28:31.970449] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:57.722 [2024-04-25 03:28:31.970474] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:57.722 [2024-04-25 03:28:31.970490] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:57.722 [2024-04-25 03:28:31.974032] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:57.722 [2024-04-25 03:28:31.983215] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:57.722 [2024-04-25 03:28:31.983694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.722 [2024-04-25 03:28:31.983894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.722 [2024-04-25 03:28:31.983922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420
00:27:57.722 [2024-04-25 03:28:31.983941] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set
00:27:57.722 [2024-04-25 03:28:31.984178] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor
00:27:57.722 [2024-04-25 03:28:31.984419] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:57.722 [2024-04-25 03:28:31.984443] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:57.722 [2024-04-25 03:28:31.984459] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:57.722 [2024-04-25 03:28:31.988001] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:57.722 [2024-04-25 03:28:31.997191] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:57.722 [2024-04-25 03:28:31.997647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.722 [2024-04-25 03:28:31.997846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.722 [2024-04-25 03:28:31.997873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420
00:27:57.722 [2024-04-25 03:28:31.997892] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set
00:27:57.722 [2024-04-25 03:28:31.998127] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor
00:27:57.722 [2024-04-25 03:28:31.998369] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:57.722 [2024-04-25 03:28:31.998394] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:57.722 [2024-04-25 03:28:31.998410] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:57.722 [2024-04-25 03:28:32.001958] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:57.722 [2024-04-25 03:28:32.011149] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:57.722 [2024-04-25 03:28:32.011621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.722 [2024-04-25 03:28:32.011862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.722 [2024-04-25 03:28:32.011890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420
00:27:57.722 [2024-04-25 03:28:32.011908] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set
00:27:57.722 [2024-04-25 03:28:32.012147] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor
00:27:57.722 [2024-04-25 03:28:32.012388] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:57.722 [2024-04-25 03:28:32.012413] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:57.722 [2024-04-25 03:28:32.012429] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:57.722 [2024-04-25 03:28:32.015976] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:57.722 [2024-04-25 03:28:32.024949] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:57.722 [2024-04-25 03:28:32.025509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.722 [2024-04-25 03:28:32.025763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.723 [2024-04-25 03:28:32.025793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420
00:27:57.723 [2024-04-25 03:28:32.025812] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set
00:27:57.723 [2024-04-25 03:28:32.026049] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor
00:27:57.723 [2024-04-25 03:28:32.026291] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:57.723 [2024-04-25 03:28:32.026316] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:57.723 [2024-04-25 03:28:32.026332] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:57.723 [2024-04-25 03:28:32.029873] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:57.723 [2024-04-25 03:28:32.038853] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:57.723 [2024-04-25 03:28:32.039380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.723 [2024-04-25 03:28:32.039642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.723 [2024-04-25 03:28:32.039672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420
00:27:57.723 [2024-04-25 03:28:32.039691] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set
00:27:57.723 [2024-04-25 03:28:32.039927] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor
00:27:57.723 [2024-04-25 03:28:32.040169] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:57.723 [2024-04-25 03:28:32.040193] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:57.723 [2024-04-25 03:28:32.040209] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:57.723 [2024-04-25 03:28:32.043751] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:57.723 [2024-04-25 03:28:32.052729] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:57.723 [2024-04-25 03:28:32.053186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.723 [2024-04-25 03:28:32.053443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.723 [2024-04-25 03:28:32.053489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420
00:27:57.723 [2024-04-25 03:28:32.053507] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set
00:27:57.723 [2024-04-25 03:28:32.053754] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor
00:27:57.723 [2024-04-25 03:28:32.053997] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:57.723 [2024-04-25 03:28:32.054022] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:57.723 [2024-04-25 03:28:32.054038] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:57.723 [2024-04-25 03:28:32.057570] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:57.723 [2024-04-25 03:28:32.066540] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:57.723 [2024-04-25 03:28:32.067002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.723 [2024-04-25 03:28:32.067266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.723 [2024-04-25 03:28:32.067313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420
00:27:57.723 [2024-04-25 03:28:32.067331] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set
00:27:57.723 [2024-04-25 03:28:32.067568] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor
00:27:57.723 [2024-04-25 03:28:32.067820] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:57.723 [2024-04-25 03:28:32.067845] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:57.723 [2024-04-25 03:28:32.067861] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:57.723 [2024-04-25 03:28:32.071397] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:57.723 [2024-04-25 03:28:32.080395] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:57.723 [2024-04-25 03:28:32.080859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.723 [2024-04-25 03:28:32.081079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.723 [2024-04-25 03:28:32.081109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420
00:27:57.723 [2024-04-25 03:28:32.081127] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set
00:27:57.723 [2024-04-25 03:28:32.081364] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor
00:27:57.723 [2024-04-25 03:28:32.081606] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:57.723 [2024-04-25 03:28:32.081638] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:57.723 [2024-04-25 03:28:32.081656] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:57.723 [2024-04-25 03:28:32.085191] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:57.723 [2024-04-25 03:28:32.094380] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:57.723 [2024-04-25 03:28:32.094816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.723 [2024-04-25 03:28:32.095058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.723 [2024-04-25 03:28:32.095092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420
00:27:57.723 [2024-04-25 03:28:32.095127] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set
00:27:57.723 [2024-04-25 03:28:32.095364] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor
00:27:57.723 [2024-04-25 03:28:32.095606] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:57.723 [2024-04-25 03:28:32.095643] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:57.723 [2024-04-25 03:28:32.095663] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:57.723 [2024-04-25 03:28:32.099200] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:57.723 [2024-04-25 03:28:32.108188] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:57.723 [2024-04-25 03:28:32.108667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.723 [2024-04-25 03:28:32.108947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.723 [2024-04-25 03:28:32.108992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420
00:27:57.723 [2024-04-25 03:28:32.109011] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set
00:27:57.723 [2024-04-25 03:28:32.109248] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor
00:27:57.723 [2024-04-25 03:28:32.109490] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:57.723 [2024-04-25 03:28:32.109515] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:57.723 [2024-04-25 03:28:32.109531] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:57.723 [2024-04-25 03:28:32.113080] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:57.723 [2024-04-25 03:28:32.122059] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:57.723 [2024-04-25 03:28:32.122591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.723 [2024-04-25 03:28:32.122871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.723 [2024-04-25 03:28:32.122903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420
00:27:57.723 [2024-04-25 03:28:32.122921] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set
00:27:57.723 [2024-04-25 03:28:32.123159] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor
00:27:57.723 [2024-04-25 03:28:32.123401] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:57.723 [2024-04-25 03:28:32.123425] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:57.723 [2024-04-25 03:28:32.123440] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:57.723 [2024-04-25 03:28:32.126985] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:57.723 [2024-04-25 03:28:32.135965] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:57.723 [2024-04-25 03:28:32.136436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.723 [2024-04-25 03:28:32.136679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.723 [2024-04-25 03:28:32.136708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420
00:27:57.723 [2024-04-25 03:28:32.136732] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set
00:27:57.723 [2024-04-25 03:28:32.136969] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor
00:27:57.723 [2024-04-25 03:28:32.137211] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:57.723 [2024-04-25 03:28:32.137236] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:57.723 [2024-04-25 03:28:32.137252] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:57.723 [2024-04-25 03:28:32.140806] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:57.723 [2024-04-25 03:28:32.149794] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:57.723 [2024-04-25 03:28:32.150271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.723 [2024-04-25 03:28:32.150528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.723 [2024-04-25 03:28:32.150572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420
00:27:57.723 [2024-04-25 03:28:32.150590] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set
00:27:57.723 [2024-04-25 03:28:32.150840] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor
00:27:57.723 [2024-04-25 03:28:32.151081] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:57.724 [2024-04-25 03:28:32.151106] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:57.724 [2024-04-25 03:28:32.151122] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:57.724 [2024-04-25 03:28:32.154665] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:57.724 [2024-04-25 03:28:32.163648] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:57.724 [2024-04-25 03:28:32.164138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.724 [2024-04-25 03:28:32.164366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.724 [2024-04-25 03:28:32.164395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420
00:27:57.724 [2024-04-25 03:28:32.164413] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set
00:27:57.724 [2024-04-25 03:28:32.164663] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor
00:27:57.724 [2024-04-25 03:28:32.164905] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:57.724 [2024-04-25 03:28:32.164930] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:57.724 [2024-04-25 03:28:32.164946] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:57.724 [2024-04-25 03:28:32.168486] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:57.724 [2024-04-25 03:28:32.177467] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:57.724 [2024-04-25 03:28:32.177950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.724 [2024-04-25 03:28:32.178239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.724 [2024-04-25 03:28:32.178290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420
00:27:57.724 [2024-04-25 03:28:32.178309] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set
00:27:57.724 [2024-04-25 03:28:32.178551] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor
00:27:57.724 [2024-04-25 03:28:32.178806] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:57.724 [2024-04-25 03:28:32.178832] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:57.724 [2024-04-25 03:28:32.178848] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:57.724 [2024-04-25 03:28:32.182386] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:57.724 [2024-04-25 03:28:32.191364] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:57.724 [2024-04-25 03:28:32.191848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.724 [2024-04-25 03:28:32.192103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.724 [2024-04-25 03:28:32.192149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420
00:27:57.724 [2024-04-25 03:28:32.192167] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set
00:27:57.724 [2024-04-25 03:28:32.192404] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor
00:27:57.724 [2024-04-25 03:28:32.192659] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:57.724 [2024-04-25 03:28:32.192684] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:57.724 [2024-04-25 03:28:32.192700] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:57.724 [2024-04-25 03:28:32.196236] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:57.724 [2024-04-25 03:28:32.205219] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:57.724 [2024-04-25 03:28:32.205737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.724 [2024-04-25 03:28:32.205960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.724 [2024-04-25 03:28:32.205990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420
00:27:57.724 [2024-04-25 03:28:32.206008] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set
00:27:57.724 [2024-04-25 03:28:32.206244] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor
00:27:57.724 [2024-04-25 03:28:32.206485] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:57.724 [2024-04-25 03:28:32.206509] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:57.724 [2024-04-25 03:28:32.206525] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:57.724 [2024-04-25 03:28:32.210071] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:57.724 [2024-04-25 03:28:32.219067] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:57.724 [2024-04-25 03:28:32.219517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.724 [2024-04-25 03:28:32.219796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.724 [2024-04-25 03:28:32.219845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420
00:27:57.724 [2024-04-25 03:28:32.219864] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set
00:27:57.724 [2024-04-25 03:28:32.220101] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor
00:27:57.724 [2024-04-25 03:28:32.220348] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:57.724 [2024-04-25 03:28:32.220374] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:57.724 [2024-04-25 03:28:32.220390] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:57.986 [2024-04-25 03:28:32.223937] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:57.986 [2024-04-25 03:28:32.232936] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:57.986 [2024-04-25 03:28:32.233386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.986 [2024-04-25 03:28:32.233645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.986 [2024-04-25 03:28:32.233676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420
00:27:57.986 [2024-04-25 03:28:32.233694] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set
00:27:57.986 [2024-04-25 03:28:32.233931] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor
00:27:57.986 [2024-04-25 03:28:32.234173] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:57.986 [2024-04-25 03:28:32.234198] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:57.986 [2024-04-25 03:28:32.234214] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:57.986 [2024-04-25 03:28:32.237762] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:57.986 [2024-04-25 03:28:32.246758] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:57.986 [2024-04-25 03:28:32.247258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.986 [2024-04-25 03:28:32.247534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.986 [2024-04-25 03:28:32.247581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:57.986 [2024-04-25 03:28:32.247599] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:57.986 [2024-04-25 03:28:32.247850] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:57.986 [2024-04-25 03:28:32.248092] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:57.986 [2024-04-25 03:28:32.248117] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:57.986 [2024-04-25 03:28:32.248133] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:57.986 [2024-04-25 03:28:32.251682] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:57.986 [2024-04-25 03:28:32.260663] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:57.986 [2024-04-25 03:28:32.261115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.986 [2024-04-25 03:28:32.261399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.986 [2024-04-25 03:28:32.261445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:57.986 [2024-04-25 03:28:32.261464] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:57.986 [2024-04-25 03:28:32.261712] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:57.986 [2024-04-25 03:28:32.261954] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:57.986 [2024-04-25 03:28:32.261984] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:57.986 [2024-04-25 03:28:32.262001] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:57.986 [2024-04-25 03:28:32.265536] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:57.986 [2024-04-25 03:28:32.274526] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:57.986 [2024-04-25 03:28:32.275008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.986 [2024-04-25 03:28:32.275272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.986 [2024-04-25 03:28:32.275320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:57.986 [2024-04-25 03:28:32.275340] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:57.986 [2024-04-25 03:28:32.275576] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:57.986 [2024-04-25 03:28:32.275825] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:57.986 [2024-04-25 03:28:32.275850] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:57.986 [2024-04-25 03:28:32.275877] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:57.986 [2024-04-25 03:28:32.279414] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:57.986 [2024-04-25 03:28:32.288396] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:57.986 [2024-04-25 03:28:32.288868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.986 [2024-04-25 03:28:32.289191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.986 [2024-04-25 03:28:32.289239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:57.986 [2024-04-25 03:28:32.289258] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:57.986 [2024-04-25 03:28:32.289494] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:57.986 [2024-04-25 03:28:32.289748] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:57.986 [2024-04-25 03:28:32.289773] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:57.986 [2024-04-25 03:28:32.289788] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:57.986 [2024-04-25 03:28:32.293324] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:57.986 [2024-04-25 03:28:32.302310] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:57.986 [2024-04-25 03:28:32.302768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.986 [2024-04-25 03:28:32.303024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.986 [2024-04-25 03:28:32.303071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:57.986 [2024-04-25 03:28:32.303090] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:57.986 [2024-04-25 03:28:32.303328] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:57.986 [2024-04-25 03:28:32.303570] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:57.986 [2024-04-25 03:28:32.303595] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:57.986 [2024-04-25 03:28:32.303616] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:57.986 [2024-04-25 03:28:32.307177] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:57.986 [2024-04-25 03:28:32.316180] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:57.986 [2024-04-25 03:28:32.316677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.986 [2024-04-25 03:28:32.316879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.986 [2024-04-25 03:28:32.316907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:57.986 [2024-04-25 03:28:32.316925] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:57.987 [2024-04-25 03:28:32.317163] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:57.987 [2024-04-25 03:28:32.317405] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:57.987 [2024-04-25 03:28:32.317430] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:57.987 [2024-04-25 03:28:32.317446] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:57.987 [2024-04-25 03:28:32.320997] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:57.987 [2024-04-25 03:28:32.329983] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:57.987 [2024-04-25 03:28:32.330472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.987 [2024-04-25 03:28:32.330719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.987 [2024-04-25 03:28:32.330747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:57.987 [2024-04-25 03:28:32.330765] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:57.987 [2024-04-25 03:28:32.331002] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:57.987 [2024-04-25 03:28:32.331243] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:57.987 [2024-04-25 03:28:32.331268] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:57.987 [2024-04-25 03:28:32.331284] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:57.987 [2024-04-25 03:28:32.334861] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:57.987 [2024-04-25 03:28:32.343850] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:57.987 [2024-04-25 03:28:32.344307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.987 [2024-04-25 03:28:32.344532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.987 [2024-04-25 03:28:32.344562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:57.987 [2024-04-25 03:28:32.344579] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:57.987 [2024-04-25 03:28:32.344829] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:57.987 [2024-04-25 03:28:32.345071] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:57.987 [2024-04-25 03:28:32.345096] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:57.987 [2024-04-25 03:28:32.345112] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:57.987 [2024-04-25 03:28:32.348665] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:57.987 [2024-04-25 03:28:32.357646] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:57.987 [2024-04-25 03:28:32.358131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.987 [2024-04-25 03:28:32.358351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.987 [2024-04-25 03:28:32.358380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:57.987 [2024-04-25 03:28:32.358398] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:57.987 [2024-04-25 03:28:32.358646] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:57.987 [2024-04-25 03:28:32.358888] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:57.987 [2024-04-25 03:28:32.358912] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:57.987 [2024-04-25 03:28:32.358927] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:57.987 [2024-04-25 03:28:32.362463] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:57.987 [2024-04-25 03:28:32.371436] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:57.987 [2024-04-25 03:28:32.371893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.987 [2024-04-25 03:28:32.372141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.987 [2024-04-25 03:28:32.372170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:57.987 [2024-04-25 03:28:32.372189] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:57.987 [2024-04-25 03:28:32.372425] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:57.987 [2024-04-25 03:28:32.372678] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:57.987 [2024-04-25 03:28:32.372704] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:57.987 [2024-04-25 03:28:32.372721] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:57.987 [2024-04-25 03:28:32.376256] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:57.987 [2024-04-25 03:28:32.385434] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:57.987 [2024-04-25 03:28:32.385930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.987 [2024-04-25 03:28:32.386156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.987 [2024-04-25 03:28:32.386185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:57.987 [2024-04-25 03:28:32.386203] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:57.987 [2024-04-25 03:28:32.386439] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:57.987 [2024-04-25 03:28:32.386692] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:57.987 [2024-04-25 03:28:32.386717] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:57.987 [2024-04-25 03:28:32.386734] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:57.987 [2024-04-25 03:28:32.390270] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:57.987 [2024-04-25 03:28:32.399254] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:57.987 [2024-04-25 03:28:32.399731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.987 [2024-04-25 03:28:32.399940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.987 [2024-04-25 03:28:32.399969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:57.987 [2024-04-25 03:28:32.399987] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:57.987 [2024-04-25 03:28:32.400224] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:57.987 [2024-04-25 03:28:32.400466] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:57.987 [2024-04-25 03:28:32.400491] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:57.987 [2024-04-25 03:28:32.400507] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:57.987 [2024-04-25 03:28:32.404060] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:57.987 [2024-04-25 03:28:32.413237] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:57.987 [2024-04-25 03:28:32.413669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.987 [2024-04-25 03:28:32.413891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.987 [2024-04-25 03:28:32.413920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:57.987 [2024-04-25 03:28:32.413938] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:57.987 [2024-04-25 03:28:32.414176] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:57.987 [2024-04-25 03:28:32.414417] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:57.987 [2024-04-25 03:28:32.414442] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:57.987 [2024-04-25 03:28:32.414459] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:57.987 [2024-04-25 03:28:32.418005] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:57.987 [2024-04-25 03:28:32.427179] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:57.987 [2024-04-25 03:28:32.427638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.987 [2024-04-25 03:28:32.427866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.987 [2024-04-25 03:28:32.427896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:57.987 [2024-04-25 03:28:32.427914] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:57.987 [2024-04-25 03:28:32.428152] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:57.987 [2024-04-25 03:28:32.428393] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:57.987 [2024-04-25 03:28:32.428418] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:57.987 [2024-04-25 03:28:32.428435] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:57.987 [2024-04-25 03:28:32.431979] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:57.987 [2024-04-25 03:28:32.441159] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:57.987 [2024-04-25 03:28:32.441642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.987 [2024-04-25 03:28:32.441867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.987 [2024-04-25 03:28:32.441897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:57.987 [2024-04-25 03:28:32.441915] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:57.987 [2024-04-25 03:28:32.442151] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:57.987 [2024-04-25 03:28:32.442391] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:57.987 [2024-04-25 03:28:32.442416] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:57.987 [2024-04-25 03:28:32.442432] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:57.988 [2024-04-25 03:28:32.445974] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:57.988 [2024-04-25 03:28:32.455154] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:57.988 [2024-04-25 03:28:32.455641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.988 [2024-04-25 03:28:32.455873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.988 [2024-04-25 03:28:32.455902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:57.988 [2024-04-25 03:28:32.455920] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:57.988 [2024-04-25 03:28:32.456157] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:57.988 [2024-04-25 03:28:32.456398] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:57.988 [2024-04-25 03:28:32.456423] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:57.988 [2024-04-25 03:28:32.456439] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:57.988 [2024-04-25 03:28:32.459985] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:57.988 [2024-04-25 03:28:32.468961] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:57.988 [2024-04-25 03:28:32.469411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.988 [2024-04-25 03:28:32.469659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.988 [2024-04-25 03:28:32.469688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:57.988 [2024-04-25 03:28:32.469706] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:57.988 [2024-04-25 03:28:32.469943] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:57.988 [2024-04-25 03:28:32.470184] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:57.988 [2024-04-25 03:28:32.470207] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:57.988 [2024-04-25 03:28:32.470223] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:57.988 [2024-04-25 03:28:32.473764] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:57.988 [2024-04-25 03:28:32.482945] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:57.988 [2024-04-25 03:28:32.483421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.988 [2024-04-25 03:28:32.483676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.988 [2024-04-25 03:28:32.483710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:57.988 [2024-04-25 03:28:32.483729] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:57.988 [2024-04-25 03:28:32.483965] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:57.988 [2024-04-25 03:28:32.484207] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:57.988 [2024-04-25 03:28:32.484231] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:57.988 [2024-04-25 03:28:32.484247] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:58.248 [2024-04-25 03:28:32.487790] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:58.248 [2024-04-25 03:28:32.496773] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:58.248 [2024-04-25 03:28:32.497248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.248 [2024-04-25 03:28:32.497510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.248 [2024-04-25 03:28:32.497538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:58.248 [2024-04-25 03:28:32.497557] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:58.248 [2024-04-25 03:28:32.497804] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:58.249 [2024-04-25 03:28:32.498046] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:58.249 [2024-04-25 03:28:32.498071] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:58.249 [2024-04-25 03:28:32.498087] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:58.249 [2024-04-25 03:28:32.501619] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:58.249 [2024-04-25 03:28:32.510599] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:58.249 [2024-04-25 03:28:32.511084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.249 [2024-04-25 03:28:32.511341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.249 [2024-04-25 03:28:32.511370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:58.249 [2024-04-25 03:28:32.511388] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:58.249 [2024-04-25 03:28:32.511625] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:58.249 [2024-04-25 03:28:32.511876] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:58.249 [2024-04-25 03:28:32.511901] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:58.249 [2024-04-25 03:28:32.511917] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:58.249 [2024-04-25 03:28:32.515450] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:58.249 [2024-04-25 03:28:32.524420] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:58.249 [2024-04-25 03:28:32.524876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.249 [2024-04-25 03:28:32.525132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.249 [2024-04-25 03:28:32.525161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:58.249 [2024-04-25 03:28:32.525185] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:58.249 [2024-04-25 03:28:32.525423] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:58.249 [2024-04-25 03:28:32.525674] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:58.249 [2024-04-25 03:28:32.525699] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:58.249 [2024-04-25 03:28:32.525715] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:58.249 [2024-04-25 03:28:32.529252] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:58.249 [2024-04-25 03:28:32.538230] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:58.249 [2024-04-25 03:28:32.538714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.249 [2024-04-25 03:28:32.538939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.249 [2024-04-25 03:28:32.538968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:58.249 [2024-04-25 03:28:32.538987] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:58.249 [2024-04-25 03:28:32.539223] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:58.249 [2024-04-25 03:28:32.539465] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:58.249 [2024-04-25 03:28:32.539490] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:58.249 [2024-04-25 03:28:32.539506] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:58.249 [2024-04-25 03:28:32.543048] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:58.249 [2024-04-25 03:28:32.552044] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:58.249 [2024-04-25 03:28:32.552522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.249 [2024-04-25 03:28:32.552747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.249 [2024-04-25 03:28:32.552776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:58.249 [2024-04-25 03:28:32.552794] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:58.249 [2024-04-25 03:28:32.553031] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:58.249 [2024-04-25 03:28:32.553273] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:58.249 [2024-04-25 03:28:32.553298] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:58.249 [2024-04-25 03:28:32.553313] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:58.249 [2024-04-25 03:28:32.556852] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:58.249 [2024-04-25 03:28:32.566038] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:58.249 [2024-04-25 03:28:32.566531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.249 [2024-04-25 03:28:32.566759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.249 [2024-04-25 03:28:32.566789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:58.249 [2024-04-25 03:28:32.566807] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:58.249 [2024-04-25 03:28:32.567051] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:58.249 [2024-04-25 03:28:32.567294] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:58.249 [2024-04-25 03:28:32.567318] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:58.249 [2024-04-25 03:28:32.567334] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:58.249 [2024-04-25 03:28:32.570881] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:58.249 [2024-04-25 03:28:32.579868] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:58.249 [2024-04-25 03:28:32.580317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.249 [2024-04-25 03:28:32.580560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.249 [2024-04-25 03:28:32.580589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:58.249 [2024-04-25 03:28:32.580608] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:58.249 [2024-04-25 03:28:32.580853] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:58.249 [2024-04-25 03:28:32.581093] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:58.249 [2024-04-25 03:28:32.581118] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:58.249 [2024-04-25 03:28:32.581134] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:58.249 [2024-04-25 03:28:32.584674] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:58.249 [2024-04-25 03:28:32.593868] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:58.249 [2024-04-25 03:28:32.594349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.249 [2024-04-25 03:28:32.594597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.249 [2024-04-25 03:28:32.594626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:58.249 [2024-04-25 03:28:32.594655] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:58.249 [2024-04-25 03:28:32.594892] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:58.249 [2024-04-25 03:28:32.595144] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:58.249 [2024-04-25 03:28:32.595168] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:58.249 [2024-04-25 03:28:32.595184] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:58.249 [2024-04-25 03:28:32.598725] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:58.249 [2024-04-25 03:28:32.607710] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:58.249 [2024-04-25 03:28:32.608161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.249 [2024-04-25 03:28:32.608357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.249 [2024-04-25 03:28:32.608384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:58.249 [2024-04-25 03:28:32.608402] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:58.249 [2024-04-25 03:28:32.608647] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:58.249 [2024-04-25 03:28:32.608900] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:58.249 [2024-04-25 03:28:32.608925] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:58.249 [2024-04-25 03:28:32.608941] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:58.249 [2024-04-25 03:28:32.612479] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:58.249 [2024-04-25 03:28:32.621670] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:58.249 [2024-04-25 03:28:32.622117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.249 [2024-04-25 03:28:32.622359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.249 [2024-04-25 03:28:32.622389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:58.249 [2024-04-25 03:28:32.622407] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:58.250 [2024-04-25 03:28:32.622653] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:58.250 [2024-04-25 03:28:32.622895] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:58.250 [2024-04-25 03:28:32.622921] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:58.250 [2024-04-25 03:28:32.622936] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:58.250 [2024-04-25 03:28:32.626473] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:58.250 [2024-04-25 03:28:32.635662] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:58.250 [2024-04-25 03:28:32.636113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.250 [2024-04-25 03:28:32.636361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.250 [2024-04-25 03:28:32.636392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:58.250 [2024-04-25 03:28:32.636410] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:58.250 [2024-04-25 03:28:32.636666] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:58.250 [2024-04-25 03:28:32.636920] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:58.250 [2024-04-25 03:28:32.636945] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:58.250 [2024-04-25 03:28:32.636961] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:58.250 [2024-04-25 03:28:32.640494] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:58.250 [2024-04-25 03:28:32.649463] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:58.250 [2024-04-25 03:28:32.649950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.250 [2024-04-25 03:28:32.650209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.250 [2024-04-25 03:28:32.650240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:58.250 [2024-04-25 03:28:32.650258] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:58.250 [2024-04-25 03:28:32.650495] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:58.250 [2024-04-25 03:28:32.650747] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:58.250 [2024-04-25 03:28:32.650779] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:58.250 [2024-04-25 03:28:32.650795] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:58.250 [2024-04-25 03:28:32.654345] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:58.250 [2024-04-25 03:28:32.663318] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:58.250 [2024-04-25 03:28:32.663775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.250 [2024-04-25 03:28:32.663971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.250 [2024-04-25 03:28:32.663999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:58.250 [2024-04-25 03:28:32.664017] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:58.250 [2024-04-25 03:28:32.664255] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:58.250 [2024-04-25 03:28:32.664497] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:58.250 [2024-04-25 03:28:32.664521] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:58.250 [2024-04-25 03:28:32.664536] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:58.250 [2024-04-25 03:28:32.668078] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:58.250 [2024-04-25 03:28:32.677259] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:58.250 [2024-04-25 03:28:32.677735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.250 [2024-04-25 03:28:32.677935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.250 [2024-04-25 03:28:32.677963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:58.250 [2024-04-25 03:28:32.677981] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:58.250 [2024-04-25 03:28:32.678218] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:58.250 [2024-04-25 03:28:32.678459] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:58.250 [2024-04-25 03:28:32.678485] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:58.250 [2024-04-25 03:28:32.678501] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:58.250 [2024-04-25 03:28:32.682048] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:58.250 [2024-04-25 03:28:32.691236] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:58.250 [2024-04-25 03:28:32.691693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.250 [2024-04-25 03:28:32.691942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.250 [2024-04-25 03:28:32.691971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:58.250 [2024-04-25 03:28:32.691989] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:58.250 [2024-04-25 03:28:32.692225] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:58.250 [2024-04-25 03:28:32.692466] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:58.250 [2024-04-25 03:28:32.692490] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:58.250 [2024-04-25 03:28:32.692511] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:58.250 [2024-04-25 03:28:32.696057] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:58.250 [2024-04-25 03:28:32.705244] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:58.250 [2024-04-25 03:28:32.705736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.250 [2024-04-25 03:28:32.705929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.250 [2024-04-25 03:28:32.705958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:58.250 [2024-04-25 03:28:32.705976] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:58.250 [2024-04-25 03:28:32.706212] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:58.250 [2024-04-25 03:28:32.706452] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:58.250 [2024-04-25 03:28:32.706477] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:58.250 [2024-04-25 03:28:32.706493] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:58.250 [2024-04-25 03:28:32.710035] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:58.250 [2024-04-25 03:28:32.719218] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:58.250 [2024-04-25 03:28:32.719669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.250 [2024-04-25 03:28:32.719869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.250 [2024-04-25 03:28:32.719898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:58.250 [2024-04-25 03:28:32.719917] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:58.250 [2024-04-25 03:28:32.720153] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:58.250 [2024-04-25 03:28:32.720393] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:58.250 [2024-04-25 03:28:32.720418] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:58.250 [2024-04-25 03:28:32.720433] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:58.250 [2024-04-25 03:28:32.723978] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:58.250 [2024-04-25 03:28:32.733171] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:58.250 [2024-04-25 03:28:32.733621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.250 [2024-04-25 03:28:32.733886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.250 [2024-04-25 03:28:32.733915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:58.250 [2024-04-25 03:28:32.733933] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:58.250 [2024-04-25 03:28:32.734170] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:58.250 [2024-04-25 03:28:32.734412] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:58.250 [2024-04-25 03:28:32.734437] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:58.250 [2024-04-25 03:28:32.734453] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:58.250 [2024-04-25 03:28:32.737772] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:58.250 [2024-04-25 03:28:32.746810] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:58.250 [2024-04-25 03:28:32.747298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.250 [2024-04-25 03:28:32.747522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.250 [2024-04-25 03:28:32.747548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:58.250 [2024-04-25 03:28:32.747564] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:58.512 [2024-04-25 03:28:32.747828] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:58.512 [2024-04-25 03:28:32.748045] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:58.512 [2024-04-25 03:28:32.748066] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:58.512 [2024-04-25 03:28:32.748080] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:58.512 [2024-04-25 03:28:32.751113] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:58.512 [2024-04-25 03:28:32.760163] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:58.512 [2024-04-25 03:28:32.760660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.512 [2024-04-25 03:28:32.760841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.512 [2024-04-25 03:28:32.760867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:58.512 [2024-04-25 03:28:32.760884] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:58.512 [2024-04-25 03:28:32.761145] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:58.512 [2024-04-25 03:28:32.761336] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:58.512 [2024-04-25 03:28:32.761355] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:58.512 [2024-04-25 03:28:32.761368] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:58.512 [2024-04-25 03:28:32.764334] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:58.512 [2024-04-25 03:28:32.773374] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:58.512 [2024-04-25 03:28:32.773809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.512 [2024-04-25 03:28:32.773996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.512 [2024-04-25 03:28:32.774023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:58.513 [2024-04-25 03:28:32.774039] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:58.513 [2024-04-25 03:28:32.774287] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:58.513 [2024-04-25 03:28:32.774480] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:58.513 [2024-04-25 03:28:32.774499] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:58.513 [2024-04-25 03:28:32.774512] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:58.513 [2024-04-25 03:28:32.777475] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:58.513 [2024-04-25 03:28:32.786673] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:58.513 [2024-04-25 03:28:32.787121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.513 [2024-04-25 03:28:32.787305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.513 [2024-04-25 03:28:32.787331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:58.513 [2024-04-25 03:28:32.787347] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:58.513 [2024-04-25 03:28:32.787609] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:58.513 [2024-04-25 03:28:32.787840] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:58.513 [2024-04-25 03:28:32.787863] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:58.513 [2024-04-25 03:28:32.787877] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:58.513 [2024-04-25 03:28:32.790822] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:58.513 [2024-04-25 03:28:32.799879] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:58.513 [2024-04-25 03:28:32.800391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.513 [2024-04-25 03:28:32.800602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.513 [2024-04-25 03:28:32.800648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:58.513 [2024-04-25 03:28:32.800667] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:58.513 [2024-04-25 03:28:32.800903] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:58.513 [2024-04-25 03:28:32.801112] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:58.513 [2024-04-25 03:28:32.801132] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:58.513 [2024-04-25 03:28:32.801144] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:58.513 [2024-04-25 03:28:32.804071] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:58.513 [2024-04-25 03:28:32.813101] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:58.513 [2024-04-25 03:28:32.813533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.513 [2024-04-25 03:28:32.813736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.513 [2024-04-25 03:28:32.813763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:58.513 [2024-04-25 03:28:32.813779] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:58.513 [2024-04-25 03:28:32.814031] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:58.513 [2024-04-25 03:28:32.814239] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:58.513 [2024-04-25 03:28:32.814259] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:58.513 [2024-04-25 03:28:32.814272] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:58.513 [2024-04-25 03:28:32.817253] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:58.513 [2024-04-25 03:28:32.826246] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:58.513 [2024-04-25 03:28:32.826680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.513 [2024-04-25 03:28:32.826864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.513 [2024-04-25 03:28:32.826888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:58.513 [2024-04-25 03:28:32.826903] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:58.513 [2024-04-25 03:28:32.827116] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:58.513 [2024-04-25 03:28:32.827323] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:58.513 [2024-04-25 03:28:32.827343] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:58.513 [2024-04-25 03:28:32.827356] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:58.513 [2024-04-25 03:28:32.830335] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:58.513 [2024-04-25 03:28:32.839539] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:58.513 [2024-04-25 03:28:32.839977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.513 [2024-04-25 03:28:32.840194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.513 [2024-04-25 03:28:32.840218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:58.513 [2024-04-25 03:28:32.840234] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:58.513 [2024-04-25 03:28:32.840478] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:58.513 [2024-04-25 03:28:32.840713] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:58.513 [2024-04-25 03:28:32.840734] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:58.513 [2024-04-25 03:28:32.840747] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:58.513 [2024-04-25 03:28:32.843707] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:58.513 [2024-04-25 03:28:32.852830] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:58.513 [2024-04-25 03:28:32.853341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.513 [2024-04-25 03:28:32.853541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.513 [2024-04-25 03:28:32.853566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:58.513 [2024-04-25 03:28:32.853583] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:58.513 [2024-04-25 03:28:32.853842] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:58.513 [2024-04-25 03:28:32.854052] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:58.513 [2024-04-25 03:28:32.854072] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:58.513 [2024-04-25 03:28:32.854084] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:58.513 [2024-04-25 03:28:32.857016] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:58.513 [2024-04-25 03:28:32.866008] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:58.513 [2024-04-25 03:28:32.866370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.513 [2024-04-25 03:28:32.866624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.513 [2024-04-25 03:28:32.866663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:58.513 [2024-04-25 03:28:32.866680] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:58.513 [2024-04-25 03:28:32.866929] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:58.513 [2024-04-25 03:28:32.867122] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:58.513 [2024-04-25 03:28:32.867142] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:58.513 [2024-04-25 03:28:32.867155] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:58.513 [2024-04-25 03:28:32.870146] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:58.513 [2024-04-25 03:28:32.879158] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:58.513 [2024-04-25 03:28:32.879587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.513 [2024-04-25 03:28:32.879837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.513 [2024-04-25 03:28:32.879865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:58.513 [2024-04-25 03:28:32.879881] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:58.513 [2024-04-25 03:28:32.880135] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:58.513 [2024-04-25 03:28:32.880328] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:58.514 [2024-04-25 03:28:32.880348] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:58.514 [2024-04-25 03:28:32.880361] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:58.514 [2024-04-25 03:28:32.883286] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:58.514 [2024-04-25 03:28:32.892329] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:58.514 [2024-04-25 03:28:32.892819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.514 [2024-04-25 03:28:32.893010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.514 [2024-04-25 03:28:32.893037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:58.514 [2024-04-25 03:28:32.893052] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:58.514 [2024-04-25 03:28:32.893314] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:58.514 [2024-04-25 03:28:32.893506] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:58.514 [2024-04-25 03:28:32.893526] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:58.514 [2024-04-25 03:28:32.893539] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:58.514 [2024-04-25 03:28:32.896493] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:58.514 [2024-04-25 03:28:32.905493] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:58.514 [2024-04-25 03:28:32.906132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.514 [2024-04-25 03:28:32.906434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.514 [2024-04-25 03:28:32.906462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:58.514 [2024-04-25 03:28:32.906485] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:58.514 [2024-04-25 03:28:32.906754] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:58.514 [2024-04-25 03:28:32.906996] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:58.514 [2024-04-25 03:28:32.907017] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:58.514 [2024-04-25 03:28:32.907030] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:58.514 [2024-04-25 03:28:32.909957] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:58.514 [2024-04-25 03:28:32.918744] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:58.514 [2024-04-25 03:28:32.919279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.514 [2024-04-25 03:28:32.919510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.514 [2024-04-25 03:28:32.919538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:58.514 [2024-04-25 03:28:32.919554] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:58.514 [2024-04-25 03:28:32.919815] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:58.514 [2024-04-25 03:28:32.920019] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:58.514 [2024-04-25 03:28:32.920054] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:58.514 [2024-04-25 03:28:32.920067] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:58.514 [2024-04-25 03:28:32.922994] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:58.514 [2024-04-25 03:28:32.932009] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:58.514 [2024-04-25 03:28:32.932507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.514 [2024-04-25 03:28:32.932739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.514 [2024-04-25 03:28:32.932777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:58.514 [2024-04-25 03:28:32.932793] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:58.514 [2024-04-25 03:28:32.933044] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:58.514 [2024-04-25 03:28:32.933237] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:58.514 [2024-04-25 03:28:32.933257] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:58.514 [2024-04-25 03:28:32.933269] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:58.514 [2024-04-25 03:28:32.936194] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:58.514 [2024-04-25 03:28:32.945215] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:58.514 [2024-04-25 03:28:32.945692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.514 [2024-04-25 03:28:32.945893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.514 [2024-04-25 03:28:32.945929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:58.514 [2024-04-25 03:28:32.945945] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:58.514 [2024-04-25 03:28:32.946198] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:58.514 [2024-04-25 03:28:32.946392] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:58.514 [2024-04-25 03:28:32.946412] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:58.514 [2024-04-25 03:28:32.946425] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:58.514 [2024-04-25 03:28:32.949396] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:58.514 [2024-04-25 03:28:32.958403] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:58.514 [2024-04-25 03:28:32.958874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.514 [2024-04-25 03:28:32.959095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.514 [2024-04-25 03:28:32.959123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:58.514 [2024-04-25 03:28:32.959155] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:58.514 [2024-04-25 03:28:32.959380] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:58.514 [2024-04-25 03:28:32.959573] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:58.514 [2024-04-25 03:28:32.959594] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:58.514 [2024-04-25 03:28:32.959607] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:58.514 [2024-04-25 03:28:32.962577] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:58.514 [2024-04-25 03:28:32.971575] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:58.514 [2024-04-25 03:28:32.972043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.514 [2024-04-25 03:28:32.972257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.514 [2024-04-25 03:28:32.972282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:58.514 [2024-04-25 03:28:32.972298] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:58.514 [2024-04-25 03:28:32.972542] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:58.514 [2024-04-25 03:28:32.972782] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:58.514 [2024-04-25 03:28:32.972804] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:58.514 [2024-04-25 03:28:32.972818] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:58.514 [2024-04-25 03:28:32.975762] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:58.514 [2024-04-25 03:28:32.984758] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:58.514 [2024-04-25 03:28:32.985222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.514 [2024-04-25 03:28:32.985475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.514 [2024-04-25 03:28:32.985502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:58.514 [2024-04-25 03:28:32.985518] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:58.514 [2024-04-25 03:28:32.985784] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:58.514 [2024-04-25 03:28:32.986027] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:58.514 [2024-04-25 03:28:32.986053] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:58.514 [2024-04-25 03:28:32.986066] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:58.514 [2024-04-25 03:28:32.989189] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:58.514 [2024-04-25 03:28:32.998270] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:58.514 [2024-04-25 03:28:32.998721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.514 [2024-04-25 03:28:32.998921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.514 [2024-04-25 03:28:32.998948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:58.515 [2024-04-25 03:28:32.998964] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:58.515 [2024-04-25 03:28:32.999216] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:58.515 [2024-04-25 03:28:32.999447] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:58.515 [2024-04-25 03:28:32.999469] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:58.515 [2024-04-25 03:28:32.999483] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:58.515 [2024-04-25 03:28:33.002662] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:58.777 [2024-04-25 03:28:33.011656] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:58.777 [2024-04-25 03:28:33.012122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.777 [2024-04-25 03:28:33.012350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.777 [2024-04-25 03:28:33.012376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:58.777 [2024-04-25 03:28:33.012393] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:58.777 [2024-04-25 03:28:33.012666] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:58.777 [2024-04-25 03:28:33.012879] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:58.777 [2024-04-25 03:28:33.012901] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:58.777 [2024-04-25 03:28:33.012914] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:58.777 [2024-04-25 03:28:33.015881] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:58.777 [2024-04-25 03:28:33.024951] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:58.777 [2024-04-25 03:28:33.025411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.777 [2024-04-25 03:28:33.025650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.777 [2024-04-25 03:28:33.025678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:58.777 [2024-04-25 03:28:33.025695] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:58.777 [2024-04-25 03:28:33.025957] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:58.777 [2024-04-25 03:28:33.026150] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:58.777 [2024-04-25 03:28:33.026170] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:58.777 [2024-04-25 03:28:33.026192] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:58.777 [2024-04-25 03:28:33.029190] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:58.777 [2024-04-25 03:28:33.038193] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:58.777 [2024-04-25 03:28:33.038684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.777 [2024-04-25 03:28:33.038890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.777 [2024-04-25 03:28:33.038916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:58.777 [2024-04-25 03:28:33.038932] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:58.777 [2024-04-25 03:28:33.039181] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:58.777 [2024-04-25 03:28:33.039374] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:58.777 [2024-04-25 03:28:33.039394] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:58.777 [2024-04-25 03:28:33.039406] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:58.777 [2024-04-25 03:28:33.042341] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:58.777 [2024-04-25 03:28:33.051343] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:58.777 [2024-04-25 03:28:33.051736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.777 [2024-04-25 03:28:33.051950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.777 [2024-04-25 03:28:33.051976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:58.777 [2024-04-25 03:28:33.051992] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:58.777 [2024-04-25 03:28:33.052247] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:58.777 [2024-04-25 03:28:33.052455] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:58.777 [2024-04-25 03:28:33.052475] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:58.777 [2024-04-25 03:28:33.052488] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:58.777 [2024-04-25 03:28:33.055453] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:58.777 [2024-04-25 03:28:33.064680] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:58.777 [2024-04-25 03:28:33.065170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.777 [2024-04-25 03:28:33.065365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.777 [2024-04-25 03:28:33.065391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:58.777 [2024-04-25 03:28:33.065408] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:58.777 [2024-04-25 03:28:33.065672] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:58.777 [2024-04-25 03:28:33.065890] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:58.777 [2024-04-25 03:28:33.065912] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:58.777 [2024-04-25 03:28:33.065942] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:58.777 [2024-04-25 03:28:33.069000] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:58.777 [2024-04-25 03:28:33.077929] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:58.777 [2024-04-25 03:28:33.078439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.777 [2024-04-25 03:28:33.078646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.777 [2024-04-25 03:28:33.078687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:58.777 [2024-04-25 03:28:33.078705] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:58.777 [2024-04-25 03:28:33.078944] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:58.777 [2024-04-25 03:28:33.079154] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:58.777 [2024-04-25 03:28:33.079173] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:58.777 [2024-04-25 03:28:33.079185] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:58.777 [2024-04-25 03:28:33.082226] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:58.777 [2024-04-25 03:28:33.091192] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:58.777 [2024-04-25 03:28:33.091591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.777 [2024-04-25 03:28:33.091803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.777 [2024-04-25 03:28:33.091830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:58.777 [2024-04-25 03:28:33.091846] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:58.777 [2024-04-25 03:28:33.092097] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:58.777 [2024-04-25 03:28:33.092289] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:58.777 [2024-04-25 03:28:33.092309] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:58.777 [2024-04-25 03:28:33.092323] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:58.777 [2024-04-25 03:28:33.095483] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:58.777 [2024-04-25 03:28:33.104538] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:58.777 [2024-04-25 03:28:33.104979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.777 [2024-04-25 03:28:33.105201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.777 [2024-04-25 03:28:33.105226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:58.777 [2024-04-25 03:28:33.105241] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:58.777 [2024-04-25 03:28:33.105486] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:58.777 [2024-04-25 03:28:33.105704] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:58.777 [2024-04-25 03:28:33.105725] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:58.777 [2024-04-25 03:28:33.105738] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:58.777 [2024-04-25 03:28:33.108702] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:58.777 [2024-04-25 03:28:33.117688] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:58.777 [2024-04-25 03:28:33.118100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.777 [2024-04-25 03:28:33.118270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.777 [2024-04-25 03:28:33.118295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:58.777 [2024-04-25 03:28:33.118326] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:58.777 [2024-04-25 03:28:33.118552] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:58.777 [2024-04-25 03:28:33.118771] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:58.777 [2024-04-25 03:28:33.118792] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:58.777 [2024-04-25 03:28:33.118805] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:58.777 [2024-04-25 03:28:33.121776] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:58.777 [2024-04-25 03:28:33.130967] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:58.777 [2024-04-25 03:28:33.131373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.777 [2024-04-25 03:28:33.131602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.777 [2024-04-25 03:28:33.131635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:58.777 [2024-04-25 03:28:33.131654] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:58.777 [2024-04-25 03:28:33.131879] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:58.777 [2024-04-25 03:28:33.132105] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:58.777 [2024-04-25 03:28:33.132125] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:58.777 [2024-04-25 03:28:33.132138] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:58.777 [2024-04-25 03:28:33.135067] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:58.777 [... the identical resetting controller -> connect() failed, errno = 111 -> controller reinitialization failed -> Resetting controller failed cycle for nqn.2016-06.io.spdk:cnode1 (tqpair=0xed4cb0, addr=10.0.0.2, port=4420) repeats 23 more times between 03:28:33.144091 and 03:28:33.447518 ...]
00:27:59.039 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1619050 Killed "${NVMF_APP[@]}" "$@"
00:27:59.039 [2024-04-25 03:28:33.456299] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:59.039 03:28:33 -- host/bdevperf.sh@36 -- # tgt_init
00:27:59.039 03:28:33 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:27:59.039 03:28:33 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:27:59.039 [2024-04-25 03:28:33.456778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.039 03:28:33 -- common/autotest_common.sh@710 -- # xtrace_disable
00:27:59.039 03:28:33 -- common/autotest_common.sh@10 -- # set +x
00:27:59.039 [2024-04-25 03:28:33.457016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.039 [2024-04-25 03:28:33.457047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420
00:27:59.039 [2024-04-25 03:28:33.457067] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set
00:27:59.039 [2024-04-25 03:28:33.457303] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor
00:27:59.039 [2024-04-25 03:28:33.457545] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:59.039 [2024-04-25 03:28:33.457569] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:59.039 [2024-04-25 03:28:33.457585] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:59.039 03:28:33 -- nvmf/common.sh@470 -- # nvmfpid=1620104 00:27:59.039 03:28:33 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:59.039 03:28:33 -- nvmf/common.sh@471 -- # waitforlisten 1620104 00:27:59.039 03:28:33 -- common/autotest_common.sh@817 -- # '[' -z 1620104 ']' 00:27:59.039 [2024-04-25 03:28:33.461130] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:59.039 03:28:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:59.039 03:28:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:59.039 03:28:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:59.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:59.039 03:28:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:59.039 03:28:33 -- common/autotest_common.sh@10 -- # set +x 00:27:59.039 [2024-04-25 03:28:33.470118] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:59.039 [2024-04-25 03:28:33.470593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.039 [2024-04-25 03:28:33.470824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.039 [2024-04-25 03:28:33.470854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:59.039 [2024-04-25 03:28:33.470872] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:59.039 [2024-04-25 03:28:33.471108] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:59.039 [2024-04-25 03:28:33.471349] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:59.039 [2024-04-25 03:28:33.471372] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:59.039 [2024-04-25 03:28:33.471387] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:59.039 [2024-04-25 03:28:33.474937] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:59.039 [2024-04-25 03:28:33.483923] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:59.039 [2024-04-25 03:28:33.484396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.039 [2024-04-25 03:28:33.484642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.039 [2024-04-25 03:28:33.484671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:59.039 [2024-04-25 03:28:33.484690] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:59.039 [2024-04-25 03:28:33.484927] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:59.039 [2024-04-25 03:28:33.485167] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:59.039 [2024-04-25 03:28:33.485190] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:59.039 [2024-04-25 03:28:33.485206] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:59.039 [2024-04-25 03:28:33.488420] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:59.039 [2024-04-25 03:28:33.497374] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:59.039 [2024-04-25 03:28:33.497881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.039 [2024-04-25 03:28:33.498094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.039 [2024-04-25 03:28:33.498119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:59.039 [2024-04-25 03:28:33.498135] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:59.039 [2024-04-25 03:28:33.498382] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:59.039 [2024-04-25 03:28:33.498579] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:59.039 [2024-04-25 03:28:33.498598] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:59.039 [2024-04-25 03:28:33.498611] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:59.039 [2024-04-25 03:28:33.501600] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:59.039 [2024-04-25 03:28:33.504167] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 
00:27:59.039 [2024-04-25 03:28:33.504236] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:59.039 [2024-04-25 03:28:33.510712] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:59.039 [2024-04-25 03:28:33.511127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.039 [2024-04-25 03:28:33.511423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.039 [2024-04-25 03:28:33.511448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:59.039 [2024-04-25 03:28:33.511478] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:59.039 [2024-04-25 03:28:33.511696] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:59.039 [2024-04-25 03:28:33.511894] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:59.039 [2024-04-25 03:28:33.511913] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:59.039 [2024-04-25 03:28:33.511925] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:59.039 [2024-04-25 03:28:33.514972] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:59.039 [2024-04-25 03:28:33.523974] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:59.039 [2024-04-25 03:28:33.524440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.039 [2024-04-25 03:28:33.524666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.040 [2024-04-25 03:28:33.524692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:59.040 [2024-04-25 03:28:33.524707] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:59.040 [2024-04-25 03:28:33.524948] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:59.040 [2024-04-25 03:28:33.525161] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:59.040 [2024-04-25 03:28:33.525180] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:59.040 [2024-04-25 03:28:33.525192] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:59.040 [2024-04-25 03:28:33.528139] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:59.301 [2024-04-25 03:28:33.537412] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:59.301 [2024-04-25 03:28:33.537861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.301 [2024-04-25 03:28:33.538064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.301 [2024-04-25 03:28:33.538089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:59.301 [2024-04-25 03:28:33.538105] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:59.301 [2024-04-25 03:28:33.538340] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:59.301 [2024-04-25 03:28:33.538537] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:59.301 [2024-04-25 03:28:33.538555] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:59.301 [2024-04-25 03:28:33.538568] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:59.301 EAL: No free 2048 kB hugepages reported on node 1 00:27:59.301 [2024-04-25 03:28:33.541704] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:59.301 [2024-04-25 03:28:33.551382] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:59.301 [2024-04-25 03:28:33.551925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.301 [2024-04-25 03:28:33.552164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.301 [2024-04-25 03:28:33.552189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:59.301 [2024-04-25 03:28:33.552218] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:59.301 [2024-04-25 03:28:33.552446] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:59.301 [2024-04-25 03:28:33.552668] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:59.301 [2024-04-25 03:28:33.552688] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:59.301 [2024-04-25 03:28:33.552701] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:59.301 [2024-04-25 03:28:33.556200] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:59.301 [2024-04-25 03:28:33.565250] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:59.301 [2024-04-25 03:28:33.565751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.301 [2024-04-25 03:28:33.565979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.301 [2024-04-25 03:28:33.566005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:59.301 [2024-04-25 03:28:33.566020] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:59.301 [2024-04-25 03:28:33.566269] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:59.301 [2024-04-25 03:28:33.566466] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:59.301 [2024-04-25 03:28:33.566485] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:59.301 [2024-04-25 03:28:33.566497] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:59.301 [2024-04-25 03:28:33.569972] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:59.301 [2024-04-25 03:28:33.573186] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:59.301 [2024-04-25 03:28:33.579002] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:59.301 [2024-04-25 03:28:33.579494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.301 [2024-04-25 03:28:33.579779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.301 [2024-04-25 03:28:33.579805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:59.301 [2024-04-25 03:28:33.579822] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:59.301 [2024-04-25 03:28:33.580067] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:59.301 [2024-04-25 03:28:33.580282] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:59.301 [2024-04-25 03:28:33.580301] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:59.301 [2024-04-25 03:28:33.580315] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:59.301 [2024-04-25 03:28:33.584019] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:59.301 [2024-04-25 03:28:33.592939] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:59.301 [2024-04-25 03:28:33.593548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.301 [2024-04-25 03:28:33.593799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.301 [2024-04-25 03:28:33.593826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:59.301 [2024-04-25 03:28:33.593846] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:59.301 [2024-04-25 03:28:33.594099] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:59.301 [2024-04-25 03:28:33.594344] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:59.301 [2024-04-25 03:28:33.594368] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:59.301 [2024-04-25 03:28:33.594386] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:59.301 [2024-04-25 03:28:33.597977] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:59.301 [2024-04-25 03:28:33.606832] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:59.301 [2024-04-25 03:28:33.607333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.302 [2024-04-25 03:28:33.607572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.302 [2024-04-25 03:28:33.607600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:59.302 [2024-04-25 03:28:33.607617] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:59.302 [2024-04-25 03:28:33.607857] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:59.302 [2024-04-25 03:28:33.608108] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:59.302 [2024-04-25 03:28:33.608131] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:59.302 [2024-04-25 03:28:33.608146] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:59.302 [2024-04-25 03:28:33.611691] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:59.302 [2024-04-25 03:28:33.620687] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:59.302 [2024-04-25 03:28:33.621147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.302 [2024-04-25 03:28:33.621375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.302 [2024-04-25 03:28:33.621403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:59.302 [2024-04-25 03:28:33.621420] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:59.302 [2024-04-25 03:28:33.621665] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:59.302 [2024-04-25 03:28:33.621906] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:59.302 [2024-04-25 03:28:33.621929] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:59.302 [2024-04-25 03:28:33.621944] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:59.302 [2024-04-25 03:28:33.625478] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:59.302 [2024-04-25 03:28:33.634670] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:59.302 [2024-04-25 03:28:33.635167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.302 [2024-04-25 03:28:33.635422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.302 [2024-04-25 03:28:33.635449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:59.302 [2024-04-25 03:28:33.635467] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:59.302 [2024-04-25 03:28:33.635715] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:59.302 [2024-04-25 03:28:33.635957] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:59.302 [2024-04-25 03:28:33.635981] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:59.302 [2024-04-25 03:28:33.635997] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:59.302 [2024-04-25 03:28:33.639538] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:59.302 [2024-04-25 03:28:33.648541] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:59.302 [2024-04-25 03:28:33.649207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.302 [2024-04-25 03:28:33.649492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.302 [2024-04-25 03:28:33.649525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:59.302 [2024-04-25 03:28:33.649547] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:59.302 [2024-04-25 03:28:33.649814] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:59.302 [2024-04-25 03:28:33.650061] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:59.302 [2024-04-25 03:28:33.650085] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:59.302 [2024-04-25 03:28:33.650104] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:59.302 [2024-04-25 03:28:33.653649] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:59.302 [2024-04-25 03:28:33.662409] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:59.302 [2024-04-25 03:28:33.662909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.302 [2024-04-25 03:28:33.663176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.302 [2024-04-25 03:28:33.663204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:59.302 [2024-04-25 03:28:33.663222] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:59.302 [2024-04-25 03:28:33.663458] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:59.302 [2024-04-25 03:28:33.663711] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:59.302 [2024-04-25 03:28:33.663735] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:59.302 [2024-04-25 03:28:33.663751] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:59.302 [2024-04-25 03:28:33.667283] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:59.302 [2024-04-25 03:28:33.676254] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:59.302 [2024-04-25 03:28:33.676740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.302 [2024-04-25 03:28:33.677005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.302 [2024-04-25 03:28:33.677034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:59.302 [2024-04-25 03:28:33.677052] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:59.302 [2024-04-25 03:28:33.677290] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:59.302 [2024-04-25 03:28:33.677532] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:59.302 [2024-04-25 03:28:33.677556] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:59.302 [2024-04-25 03:28:33.677572] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:59.302 [2024-04-25 03:28:33.681114] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:59.302 [2024-04-25 03:28:33.690093] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:59.302 [2024-04-25 03:28:33.690572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.302 [2024-04-25 03:28:33.690839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.302 [2024-04-25 03:28:33.690869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:59.302 [2024-04-25 03:28:33.690895] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:59.302 [2024-04-25 03:28:33.691134] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:59.302 [2024-04-25 03:28:33.691375] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:59.302 [2024-04-25 03:28:33.691398] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:59.302 [2024-04-25 03:28:33.691413] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:59.302 [2024-04-25 03:28:33.691823] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:59.302 [2024-04-25 03:28:33.691861] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:59.302 [2024-04-25 03:28:33.691877] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:59.302 [2024-04-25 03:28:33.691890] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:59.302 [2024-04-25 03:28:33.691902] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:59.302 [2024-04-25 03:28:33.691975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:59.302 [2024-04-25 03:28:33.692080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:59.302 [2024-04-25 03:28:33.692084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:59.302 [2024-04-25 03:28:33.694956] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:59.302 [2024-04-25 03:28:33.703967] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:59.302 [2024-04-25 03:28:33.704604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.302 [2024-04-25 03:28:33.704846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.302 [2024-04-25 03:28:33.704877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:59.302 [2024-04-25 03:28:33.704898] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:59.302 [2024-04-25 03:28:33.705150] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:59.302 [2024-04-25 03:28:33.705397] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:59.302 [2024-04-25 03:28:33.705421] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:59.302 [2024-04-25 03:28:33.705440] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:59.302 [2024-04-25 03:28:33.708988] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:59.302 [2024-04-25 03:28:33.717992] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:59.302 [2024-04-25 03:28:33.718603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.302 [2024-04-25 03:28:33.718884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.302 [2024-04-25 03:28:33.718914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:59.302 [2024-04-25 03:28:33.718936] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:59.302 [2024-04-25 03:28:33.719187] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:59.302 [2024-04-25 03:28:33.719433] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:59.302 [2024-04-25 03:28:33.719457] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:59.303 [2024-04-25 03:28:33.719486] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:59.303 [2024-04-25 03:28:33.723036] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:59.303 [2024-04-25 03:28:33.732038] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:59.303 [2024-04-25 03:28:33.732672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.303 [2024-04-25 03:28:33.732904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.303 [2024-04-25 03:28:33.732933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:59.303 [2024-04-25 03:28:33.732954] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:59.303 [2024-04-25 03:28:33.733203] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:59.303 [2024-04-25 03:28:33.733450] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:59.303 [2024-04-25 03:28:33.733474] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:59.303 [2024-04-25 03:28:33.733494] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:59.303 [2024-04-25 03:28:33.737041] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:59.303 [2024-04-25 03:28:33.746045] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:59.303 [2024-04-25 03:28:33.746635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.303 [2024-04-25 03:28:33.746878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.303 [2024-04-25 03:28:33.746907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:59.303 [2024-04-25 03:28:33.746929] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:59.303 [2024-04-25 03:28:33.747176] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:59.303 [2024-04-25 03:28:33.747423] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:59.303 [2024-04-25 03:28:33.747447] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:59.303 [2024-04-25 03:28:33.747467] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:59.303 [2024-04-25 03:28:33.751015] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:59.303 [2024-04-25 03:28:33.760006] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:59.303 [2024-04-25 03:28:33.760560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.303 [2024-04-25 03:28:33.760791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.303 [2024-04-25 03:28:33.760821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:59.303 [2024-04-25 03:28:33.760843] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:59.303 [2024-04-25 03:28:33.761094] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:59.303 [2024-04-25 03:28:33.761341] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:59.303 [2024-04-25 03:28:33.761366] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:59.303 [2024-04-25 03:28:33.761399] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:59.303 [2024-04-25 03:28:33.764943] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:59.303 [2024-04-25 03:28:33.773946] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:59.303 [2024-04-25 03:28:33.774552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.303 [2024-04-25 03:28:33.774824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.303 [2024-04-25 03:28:33.774855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:59.303 [2024-04-25 03:28:33.774877] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:59.303 [2024-04-25 03:28:33.775127] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:59.303 [2024-04-25 03:28:33.775375] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:59.303 [2024-04-25 03:28:33.775398] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:59.303 [2024-04-25 03:28:33.775420] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:59.303 [2024-04-25 03:28:33.778961] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:59.303 [2024-04-25 03:28:33.787941] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:59.303 [2024-04-25 03:28:33.788433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.303 [2024-04-25 03:28:33.788653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.303 [2024-04-25 03:28:33.788681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:59.303 [2024-04-25 03:28:33.788698] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:59.303 [2024-04-25 03:28:33.788935] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:59.303 [2024-04-25 03:28:33.789175] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:59.303 [2024-04-25 03:28:33.789199] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:59.303 [2024-04-25 03:28:33.789214] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:59.303 [2024-04-25 03:28:33.792761] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:59.564 [2024-04-25 03:28:33.801597] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:59.564 [2024-04-25 03:28:33.802030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.564 [2024-04-25 03:28:33.802204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.565 [2024-04-25 03:28:33.802229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:59.565 [2024-04-25 03:28:33.802245] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:59.565 [2024-04-25 03:28:33.802457] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:59.565 [2024-04-25 03:28:33.802681] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:59.565 [2024-04-25 03:28:33.802702] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:59.565 [2024-04-25 03:28:33.802716] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:59.565 [2024-04-25 03:28:33.806151] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:59.565 03:28:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:59.565 03:28:33 -- common/autotest_common.sh@850 -- # return 0 00:27:59.565 03:28:33 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:27:59.565 03:28:33 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:59.565 03:28:33 -- common/autotest_common.sh@10 -- # set +x 00:27:59.565 [2024-04-25 03:28:33.815125] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:59.565 [2024-04-25 03:28:33.815575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.565 [2024-04-25 03:28:33.815785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.565 [2024-04-25 03:28:33.815815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:59.565 [2024-04-25 03:28:33.815832] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:59.565 [2024-04-25 03:28:33.816060] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:59.565 [2024-04-25 03:28:33.816269] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:59.565 [2024-04-25 03:28:33.816290] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:59.565 [2024-04-25 03:28:33.816303] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:59.565 [2024-04-25 03:28:33.819496] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:59.565 [2024-04-25 03:28:33.828533] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:59.565 [2024-04-25 03:28:33.828962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.565 [2024-04-25 03:28:33.829162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.565 [2024-04-25 03:28:33.829187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:59.565 [2024-04-25 03:28:33.829203] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:59.565 [2024-04-25 03:28:33.829416] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:59.565 [2024-04-25 03:28:33.829668] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:59.565 [2024-04-25 03:28:33.829690] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:59.565 [2024-04-25 03:28:33.829703] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:59.565 [2024-04-25 03:28:33.832929] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:59.565 03:28:33 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:59.565 03:28:33 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:59.565 03:28:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:59.565 03:28:33 -- common/autotest_common.sh@10 -- # set +x 00:27:59.565 [2024-04-25 03:28:33.842213] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:59.565 [2024-04-25 03:28:33.842664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.565 [2024-04-25 03:28:33.842827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.565 [2024-04-25 03:28:33.842852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:59.565 [2024-04-25 03:28:33.842867] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:59.565 [2024-04-25 03:28:33.843092] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:59.565 [2024-04-25 03:28:33.843307] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:59.565 [2024-04-25 03:28:33.843328] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:59.565 [2024-04-25 03:28:33.843341] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:59.565 [2024-04-25 03:28:33.844398] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:59.565 [2024-04-25 03:28:33.846523] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:59.565 03:28:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:59.565 03:28:33 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:59.565 03:28:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:59.565 03:28:33 -- common/autotest_common.sh@10 -- # set +x 00:27:59.565 [2024-04-25 03:28:33.855555] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:59.565 [2024-04-25 03:28:33.855977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.565 [2024-04-25 03:28:33.856154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.565 [2024-04-25 03:28:33.856179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:59.565 [2024-04-25 03:28:33.856195] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:59.565 [2024-04-25 03:28:33.856419] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:59.565 [2024-04-25 03:28:33.856638] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:59.565 [2024-04-25 03:28:33.856658] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:59.565 [2024-04-25 03:28:33.856671] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:59.565 [2024-04-25 03:28:33.859807] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:59.565 [2024-04-25 03:28:33.869122] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:59.565 [2024-04-25 03:28:33.869575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.565 [2024-04-25 03:28:33.869783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.565 [2024-04-25 03:28:33.869809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:59.565 [2024-04-25 03:28:33.869826] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:59.565 [2024-04-25 03:28:33.870055] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:59.565 [2024-04-25 03:28:33.870266] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:59.565 [2024-04-25 03:28:33.870287] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:59.565 [2024-04-25 03:28:33.870302] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:59.565 [2024-04-25 03:28:33.873442] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:59.565 [2024-04-25 03:28:33.882546] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:59.565 [2024-04-25 03:28:33.883114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.565 [2024-04-25 03:28:33.883344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.565 [2024-04-25 03:28:33.883369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:59.565 [2024-04-25 03:28:33.883390] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:59.565 [2024-04-25 03:28:33.883641] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:59.566 [2024-04-25 03:28:33.883899] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:59.566 [2024-04-25 03:28:33.883922] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:59.566 [2024-04-25 03:28:33.883940] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:59.566 Malloc0 00:27:59.566 03:28:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:59.566 03:28:33 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:59.566 03:28:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:59.566 03:28:33 -- common/autotest_common.sh@10 -- # set +x 00:27:59.566 [2024-04-25 03:28:33.887164] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:59.566 03:28:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:59.566 03:28:33 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:59.566 03:28:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:59.566 03:28:33 -- common/autotest_common.sh@10 -- # set +x 00:27:59.566 [2024-04-25 03:28:33.896205] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:59.566 [2024-04-25 03:28:33.896679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.566 [2024-04-25 03:28:33.896873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.566 [2024-04-25 03:28:33.896900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed4cb0 with addr=10.0.0.2, port=4420 00:27:59.566 [2024-04-25 03:28:33.896916] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4cb0 is same with the state(5) to be set 00:27:59.566 [2024-04-25 03:28:33.897143] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4cb0 (9): Bad file descriptor 00:27:59.566 [2024-04-25 03:28:33.897353] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:59.566 [2024-04-25 03:28:33.897373] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:59.566 [2024-04-25 03:28:33.897385] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:27:59.566 03:28:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:59.566 03:28:33 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:59.566 03:28:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:59.566 03:28:33 -- common/autotest_common.sh@10 -- # set +x 00:27:59.566 [2024-04-25 03:28:33.900564] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:59.566 [2024-04-25 03:28:33.903855] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:59.566 03:28:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:59.566 03:28:33 -- host/bdevperf.sh@38 -- # wait 1619342 00:27:59.566 [2024-04-25 03:28:33.909760] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:59.566 [2024-04-25 03:28:33.981567] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:28:09.545 00:28:09.545 Latency(us) 00:28:09.545 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:09.545 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:09.545 Verification LBA range: start 0x0 length 0x4000 00:28:09.545 Nvme1n1 : 15.00 6684.59 26.11 8812.39 0.00 8235.86 1074.06 18350.08 00:28:09.545 =================================================================================================================== 00:28:09.545 Total : 6684.59 26.11 8812.39 0.00 8235.86 1074.06 18350.08 00:28:09.545 03:28:43 -- host/bdevperf.sh@39 -- # sync 00:28:09.545 03:28:43 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:09.545 03:28:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:09.545 03:28:43 -- common/autotest_common.sh@10 -- # set +x 00:28:09.545 03:28:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:09.545 03:28:43 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:28:09.545 03:28:43 -- host/bdevperf.sh@44 -- # nvmftestfini 00:28:09.545 03:28:43 -- nvmf/common.sh@477 -- # nvmfcleanup 00:28:09.545 03:28:43 -- nvmf/common.sh@117 -- # sync 00:28:09.545 03:28:43 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:09.545 03:28:43 -- nvmf/common.sh@120 -- # set +e 00:28:09.545 03:28:43 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:09.545 03:28:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:09.545 rmmod nvme_tcp 00:28:09.545 rmmod nvme_fabrics 00:28:09.545 rmmod nvme_keyring 00:28:09.545 03:28:43 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:09.545 03:28:43 -- nvmf/common.sh@124 -- # set -e 00:28:09.545 03:28:43 -- nvmf/common.sh@125 -- # return 0 00:28:09.545 03:28:43 -- nvmf/common.sh@478 -- # '[' -n 1620104 ']' 00:28:09.545 03:28:43 -- nvmf/common.sh@479 -- # killprocess 1620104 00:28:09.545 03:28:43 -- common/autotest_common.sh@936 -- # '[' -z 1620104 ']' 00:28:09.545 03:28:43 -- 
common/autotest_common.sh@940 -- # kill -0 1620104 00:28:09.545 03:28:43 -- common/autotest_common.sh@941 -- # uname 00:28:09.545 03:28:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:09.545 03:28:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1620104 00:28:09.545 03:28:43 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:28:09.545 03:28:43 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:28:09.545 03:28:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1620104' 00:28:09.545 killing process with pid 1620104 00:28:09.545 03:28:43 -- common/autotest_common.sh@955 -- # kill 1620104 00:28:09.545 03:28:43 -- common/autotest_common.sh@960 -- # wait 1620104 00:28:09.545 03:28:43 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:28:09.545 03:28:43 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:28:09.545 03:28:43 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:28:09.545 03:28:43 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:09.545 03:28:43 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:09.545 03:28:43 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:09.545 03:28:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:09.545 03:28:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:11.452 03:28:45 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:11.452 00:28:11.452 real 0m22.525s 00:28:11.452 user 1m0.586s 00:28:11.452 sys 0m4.197s 00:28:11.452 03:28:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:11.452 03:28:45 -- common/autotest_common.sh@10 -- # set +x 00:28:11.452 ************************************ 00:28:11.452 END TEST nvmf_bdevperf 00:28:11.452 ************************************ 00:28:11.452 03:28:45 -- nvmf/nvmf.sh@120 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 
00:28:11.452 03:28:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:28:11.452 03:28:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:11.452 03:28:45 -- common/autotest_common.sh@10 -- # set +x 00:28:11.452 ************************************ 00:28:11.452 START TEST nvmf_target_disconnect 00:28:11.452 ************************************ 00:28:11.452 03:28:45 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:28:11.452 * Looking for test storage... 00:28:11.452 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:11.452 03:28:45 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:11.452 03:28:45 -- nvmf/common.sh@7 -- # uname -s 00:28:11.452 03:28:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:11.452 03:28:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:11.452 03:28:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:11.452 03:28:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:11.452 03:28:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:11.452 03:28:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:11.452 03:28:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:11.452 03:28:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:11.452 03:28:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:11.452 03:28:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:11.452 03:28:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:11.452 03:28:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:11.452 03:28:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:11.452 03:28:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:11.452 
03:28:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:11.452 03:28:45 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:11.452 03:28:45 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:11.453 03:28:45 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:11.453 03:28:45 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:11.453 03:28:45 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:11.453 03:28:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.453 03:28:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.453 03:28:45 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.453 03:28:45 -- paths/export.sh@5 -- # export PATH 00:28:11.453 03:28:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.453 03:28:45 -- nvmf/common.sh@47 -- # : 0 00:28:11.453 03:28:45 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:11.453 03:28:45 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:11.453 03:28:45 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:11.453 03:28:45 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:11.453 03:28:45 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:11.453 03:28:45 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:11.453 03:28:45 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:11.453 03:28:45 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:11.453 03:28:45 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:28:11.453 03:28:45 -- host/target_disconnect.sh@13 -- # 
MALLOC_BDEV_SIZE=64 00:28:11.453 03:28:45 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:28:11.453 03:28:45 -- host/target_disconnect.sh@77 -- # nvmftestinit 00:28:11.453 03:28:45 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:28:11.453 03:28:45 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:11.453 03:28:45 -- nvmf/common.sh@437 -- # prepare_net_devs 00:28:11.453 03:28:45 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:28:11.453 03:28:45 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:28:11.453 03:28:45 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:11.453 03:28:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:11.453 03:28:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:11.453 03:28:45 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:28:11.453 03:28:45 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:28:11.453 03:28:45 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:11.453 03:28:45 -- common/autotest_common.sh@10 -- # set +x 00:28:13.358 03:28:47 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:13.358 03:28:47 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:13.358 03:28:47 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:13.358 03:28:47 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:13.358 03:28:47 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:13.358 03:28:47 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:13.358 03:28:47 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:13.358 03:28:47 -- nvmf/common.sh@295 -- # net_devs=() 00:28:13.358 03:28:47 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:13.358 03:28:47 -- nvmf/common.sh@296 -- # e810=() 00:28:13.358 03:28:47 -- nvmf/common.sh@296 -- # local -ga e810 00:28:13.358 03:28:47 -- nvmf/common.sh@297 -- # x722=() 00:28:13.358 03:28:47 -- nvmf/common.sh@297 -- # local -ga x722 00:28:13.358 03:28:47 -- nvmf/common.sh@298 -- # mlx=() 00:28:13.358 03:28:47 -- 
nvmf/common.sh@298 -- # local -ga mlx 00:28:13.358 03:28:47 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:13.358 03:28:47 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:13.358 03:28:47 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:13.358 03:28:47 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:13.358 03:28:47 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:13.358 03:28:47 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:13.358 03:28:47 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:13.358 03:28:47 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:13.358 03:28:47 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:13.358 03:28:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:13.358 03:28:47 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:13.358 03:28:47 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:13.358 03:28:47 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:13.358 03:28:47 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:13.358 03:28:47 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:13.358 03:28:47 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:13.358 03:28:47 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:13.358 03:28:47 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:13.358 03:28:47 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:13.358 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:13.358 03:28:47 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:13.358 03:28:47 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:13.358 03:28:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:13.358 03:28:47 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:13.358 
03:28:47 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:13.358 03:28:47 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:13.358 03:28:47 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:13.358 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:13.358 03:28:47 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:13.358 03:28:47 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:13.358 03:28:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:13.358 03:28:47 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:13.358 03:28:47 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:13.358 03:28:47 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:13.358 03:28:47 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:13.358 03:28:47 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:13.358 03:28:47 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:13.358 03:28:47 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:13.358 03:28:47 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:28:13.358 03:28:47 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:13.358 03:28:47 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:13.358 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:13.358 03:28:47 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:28:13.358 03:28:47 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:13.358 03:28:47 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:13.358 03:28:47 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:28:13.358 03:28:47 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:13.358 03:28:47 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:13.358 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:13.358 03:28:47 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:28:13.358 03:28:47 -- 
nvmf/common.sh@393 -- # (( 2 == 0 )) 00:28:13.358 03:28:47 -- nvmf/common.sh@403 -- # is_hw=yes 00:28:13.358 03:28:47 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:28:13.358 03:28:47 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:28:13.358 03:28:47 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:28:13.358 03:28:47 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:13.358 03:28:47 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:13.358 03:28:47 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:13.358 03:28:47 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:13.358 03:28:47 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:13.358 03:28:47 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:13.358 03:28:47 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:13.358 03:28:47 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:13.358 03:28:47 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:13.358 03:28:47 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:13.358 03:28:47 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:13.358 03:28:47 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:13.358 03:28:47 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:13.358 03:28:47 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:13.358 03:28:47 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:13.358 03:28:47 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:13.358 03:28:47 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:13.619 03:28:47 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:13.619 03:28:47 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:13.619 03:28:47 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 
00:28:13.619 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:13.619 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.146 ms 00:28:13.619 00:28:13.619 --- 10.0.0.2 ping statistics --- 00:28:13.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:13.619 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:28:13.619 03:28:47 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:13.619 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:13.619 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:28:13.619 00:28:13.619 --- 10.0.0.1 ping statistics --- 00:28:13.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:13.619 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:28:13.619 03:28:47 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:13.619 03:28:47 -- nvmf/common.sh@411 -- # return 0 00:28:13.619 03:28:47 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:28:13.619 03:28:47 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:13.619 03:28:47 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:28:13.619 03:28:47 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:28:13.619 03:28:47 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:13.619 03:28:47 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:28:13.619 03:28:47 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:28:13.619 03:28:47 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:28:13.619 03:28:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:13.619 03:28:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:13.619 03:28:47 -- common/autotest_common.sh@10 -- # set +x 00:28:13.619 ************************************ 00:28:13.619 START TEST nvmf_target_disconnect_tc1 00:28:13.619 ************************************ 00:28:13.619 03:28:48 -- common/autotest_common.sh@1111 -- # nvmf_target_disconnect_tc1 
00:28:13.619 03:28:48 -- host/target_disconnect.sh@32 -- # set +e 00:28:13.619 03:28:48 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:13.619 EAL: No free 2048 kB hugepages reported on node 1 00:28:13.878 [2024-04-25 03:28:48.125435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.878 [2024-04-25 03:28:48.125728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.878 [2024-04-25 03:28:48.125763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2352ad0 with addr=10.0.0.2, port=4420 00:28:13.878 [2024-04-25 03:28:48.125806] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:13.878 [2024-04-25 03:28:48.125828] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:13.878 [2024-04-25 03:28:48.125843] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:28:13.878 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:28:13.878 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:28:13.878 Initializing NVMe Controllers 00:28:13.878 03:28:48 -- host/target_disconnect.sh@33 -- # trap - ERR 00:28:13.878 03:28:48 -- host/target_disconnect.sh@33 -- # print_backtrace 00:28:13.878 03:28:48 -- common/autotest_common.sh@1139 -- # [[ hxBET =~ e ]] 00:28:13.878 03:28:48 -- common/autotest_common.sh@1139 -- # return 0 00:28:13.878 03:28:48 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:28:13.878 03:28:48 -- host/target_disconnect.sh@41 -- # set -e 00:28:13.878 00:28:13.878 real 0m0.096s 00:28:13.878 user 0m0.039s 00:28:13.878 sys 0m0.057s 00:28:13.878 03:28:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:13.878 03:28:48 -- common/autotest_common.sh@10 -- # set +x 00:28:13.878 
************************************ 00:28:13.878 END TEST nvmf_target_disconnect_tc1 00:28:13.878 ************************************ 00:28:13.878 03:28:48 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:28:13.878 03:28:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:13.878 03:28:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:13.879 03:28:48 -- common/autotest_common.sh@10 -- # set +x 00:28:13.879 ************************************ 00:28:13.879 START TEST nvmf_target_disconnect_tc2 00:28:13.879 ************************************ 00:28:13.879 03:28:48 -- common/autotest_common.sh@1111 -- # nvmf_target_disconnect_tc2 00:28:13.879 03:28:48 -- host/target_disconnect.sh@45 -- # disconnect_init 10.0.0.2 00:28:13.879 03:28:48 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:28:13.879 03:28:48 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:28:13.879 03:28:48 -- common/autotest_common.sh@710 -- # xtrace_disable 00:28:13.879 03:28:48 -- common/autotest_common.sh@10 -- # set +x 00:28:13.879 03:28:48 -- nvmf/common.sh@470 -- # nvmfpid=1623270 00:28:13.879 03:28:48 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:28:13.879 03:28:48 -- nvmf/common.sh@471 -- # waitforlisten 1623270 00:28:13.879 03:28:48 -- common/autotest_common.sh@817 -- # '[' -z 1623270 ']' 00:28:13.879 03:28:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:13.879 03:28:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:13.879 03:28:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:13.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:13.879 03:28:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:13.879 03:28:48 -- common/autotest_common.sh@10 -- # set +x 00:28:13.879 [2024-04-25 03:28:48.311187] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:28:13.879 [2024-04-25 03:28:48.311283] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:13.879 EAL: No free 2048 kB hugepages reported on node 1 00:28:13.879 [2024-04-25 03:28:48.378106] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:14.137 [2024-04-25 03:28:48.483012] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:14.137 [2024-04-25 03:28:48.483074] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:14.137 [2024-04-25 03:28:48.483126] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:14.137 [2024-04-25 03:28:48.483139] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:14.137 [2024-04-25 03:28:48.483163] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:14.137 [2024-04-25 03:28:48.483256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:28:14.137 [2024-04-25 03:28:48.483360] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:28:14.137 [2024-04-25 03:28:48.483430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:28:14.137 [2024-04-25 03:28:48.483433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:28:14.137 03:28:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:14.137 03:28:48 -- common/autotest_common.sh@850 -- # return 0 00:28:14.137 03:28:48 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:28:14.137 03:28:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:28:14.137 03:28:48 -- common/autotest_common.sh@10 -- # set +x 00:28:14.137 03:28:48 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:14.137 03:28:48 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:14.137 03:28:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:14.138 03:28:48 -- common/autotest_common.sh@10 -- # set +x 00:28:14.396 Malloc0 00:28:14.396 03:28:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:14.396 03:28:48 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:14.396 03:28:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:14.396 03:28:48 -- common/autotest_common.sh@10 -- # set +x 00:28:14.396 [2024-04-25 03:28:48.647728] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:14.396 03:28:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:14.396 03:28:48 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:14.396 03:28:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:14.396 03:28:48 -- common/autotest_common.sh@10 -- # set +x 00:28:14.396 03:28:48 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:14.396 03:28:48 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:14.396 03:28:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:14.396 03:28:48 -- common/autotest_common.sh@10 -- # set +x 00:28:14.396 03:28:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:14.396 03:28:48 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:14.396 03:28:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:14.396 03:28:48 -- common/autotest_common.sh@10 -- # set +x 00:28:14.396 [2024-04-25 03:28:48.675981] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:14.396 03:28:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:14.396 03:28:48 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:14.396 03:28:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:14.396 03:28:48 -- common/autotest_common.sh@10 -- # set +x 00:28:14.396 03:28:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:14.396 03:28:48 -- host/target_disconnect.sh@50 -- # reconnectpid=1623296 00:28:14.396 03:28:48 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:14.396 03:28:48 -- host/target_disconnect.sh@52 -- # sleep 2 00:28:14.396 EAL: No free 2048 kB hugepages reported on node 1 00:28:16.309 03:28:50 -- host/target_disconnect.sh@53 -- # kill -9 1623270 00:28:16.309 03:28:50 -- host/target_disconnect.sh@55 -- # sleep 2 00:28:16.309 Read completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 Read completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 
00:28:16.309 Read completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 Read completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 Read completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 Read completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 Read completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 Read completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 Read completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 Read completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 Read completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 Read completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 Read completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 Read completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 Write completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 Read completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 Read completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 Read completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 Write completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 Read completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 Read completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 Write completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 Write completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 Write completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 Read completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 
Write completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 Write completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 Write completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 Write completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 Write completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 Read completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 Write completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 [2024-04-25 03:28:50.701100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:16.309 Read completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 Read completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 Read completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 Read completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 Read completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 Read completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 Read completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 Read completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 Read completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 Read completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 Read completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 Write completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 Write completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 Read completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 Write completed 
with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 Read completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 Read completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 Read completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 Read completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 Write completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 Write completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 Read completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 Read completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 Read completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 Write completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 Read completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 Write completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 Read completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 Read completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 Read completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 Write completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 Read completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 [2024-04-25 03:28:50.701519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:16.309 Read completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 Read completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 Read completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 Read completed with error (sct=0, 
sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 Read completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 Read completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 Read completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 Read completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 Read completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 Read completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 Write completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 Read completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 Read completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 Read completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 Write completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 Write completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 Read completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 Write completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 Read completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 Write completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 Write completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 Read completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.309 Read completed with error (sct=0, sc=8) 00:28:16.309 starting I/O failed 00:28:16.310 Read completed with error (sct=0, sc=8) 00:28:16.310 starting I/O failed 00:28:16.310 Read completed with error (sct=0, sc=8) 00:28:16.310 starting I/O failed 00:28:16.310 Read completed with error (sct=0, sc=8) 00:28:16.310 starting I/O failed 00:28:16.310 Read completed with error (sct=0, sc=8) 
00:28:16.310 starting I/O failed 00:28:16.310 Read completed with error (sct=0, sc=8) 00:28:16.310 starting I/O failed 00:28:16.310 Read completed with error (sct=0, sc=8) 00:28:16.310 starting I/O failed 00:28:16.310 Read completed with error (sct=0, sc=8) 00:28:16.310 starting I/O failed 00:28:16.310 Read completed with error (sct=0, sc=8) 00:28:16.310 starting I/O failed 00:28:16.310 Read completed with error (sct=0, sc=8) 00:28:16.310 starting I/O failed 00:28:16.310 [2024-04-25 03:28:50.701883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.310 Read completed with error (sct=0, sc=8) 00:28:16.310 starting I/O failed 00:28:16.310 Read completed with error (sct=0, sc=8) 00:28:16.310 starting I/O failed 00:28:16.310 Read completed with error (sct=0, sc=8) 00:28:16.310 starting I/O failed 00:28:16.310 Read completed with error (sct=0, sc=8) 00:28:16.310 starting I/O failed 00:28:16.310 Read completed with error (sct=0, sc=8) 00:28:16.310 starting I/O failed 00:28:16.310 Read completed with error (sct=0, sc=8) 00:28:16.310 starting I/O failed 00:28:16.310 Read completed with error (sct=0, sc=8) 00:28:16.310 starting I/O failed 00:28:16.310 Read completed with error (sct=0, sc=8) 00:28:16.310 starting I/O failed 00:28:16.310 Read completed with error (sct=0, sc=8) 00:28:16.310 starting I/O failed 00:28:16.310 Read completed with error (sct=0, sc=8) 00:28:16.310 starting I/O failed 00:28:16.310 Read completed with error (sct=0, sc=8) 00:28:16.310 starting I/O failed 00:28:16.310 Read completed with error (sct=0, sc=8) 00:28:16.310 starting I/O failed 00:28:16.310 Read completed with error (sct=0, sc=8) 00:28:16.310 starting I/O failed 00:28:16.310 Read completed with error (sct=0, sc=8) 00:28:16.310 starting I/O failed 00:28:16.310 Read completed with error (sct=0, sc=8) 00:28:16.310 starting I/O failed 00:28:16.310 Write completed with error (sct=0, sc=8) 00:28:16.310 starting 
I/O failed 00:28:16.310 Write completed with error (sct=0, sc=8) 00:28:16.310 starting I/O failed 00:28:16.310 Write completed with error (sct=0, sc=8) 00:28:16.310 starting I/O failed 00:28:16.310 Write completed with error (sct=0, sc=8) 00:28:16.310 starting I/O failed 00:28:16.310 Read completed with error (sct=0, sc=8) 00:28:16.310 starting I/O failed 00:28:16.310 Write completed with error (sct=0, sc=8) 00:28:16.310 starting I/O failed 00:28:16.310 Write completed with error (sct=0, sc=8) 00:28:16.310 starting I/O failed 00:28:16.310 Write completed with error (sct=0, sc=8) 00:28:16.310 starting I/O failed 00:28:16.310 Read completed with error (sct=0, sc=8) 00:28:16.310 starting I/O failed 00:28:16.310 Write completed with error (sct=0, sc=8) 00:28:16.310 starting I/O failed 00:28:16.310 Read completed with error (sct=0, sc=8) 00:28:16.310 starting I/O failed 00:28:16.310 Write completed with error (sct=0, sc=8) 00:28:16.310 starting I/O failed 00:28:16.310 Write completed with error (sct=0, sc=8) 00:28:16.310 starting I/O failed 00:28:16.310 Write completed with error (sct=0, sc=8) 00:28:16.310 starting I/O failed 00:28:16.310 Write completed with error (sct=0, sc=8) 00:28:16.310 starting I/O failed 00:28:16.310 Write completed with error (sct=0, sc=8) 00:28:16.310 starting I/O failed 00:28:16.310 Write completed with error (sct=0, sc=8) 00:28:16.310 starting I/O failed 00:28:16.310 [2024-04-25 03:28:50.702185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:16.310 [2024-04-25 03:28:50.702497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.310 [2024-04-25 03:28:50.702734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.310 [2024-04-25 03:28:50.702761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.310 qpair failed and we 
were unable to recover it. 00:28:16.310 [2024-04-25 03:28:50.702940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.310 [2024-04-25 03:28:50.703168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.310 [2024-04-25 03:28:50.703193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.310 qpair failed and we were unable to recover it. 00:28:16.310 [2024-04-25 03:28:50.703397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.310 [2024-04-25 03:28:50.703697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.310 [2024-04-25 03:28:50.703722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.310 qpair failed and we were unable to recover it. 00:28:16.310 [2024-04-25 03:28:50.703925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.310 [2024-04-25 03:28:50.704166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.310 [2024-04-25 03:28:50.704191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.310 qpair failed and we were unable to recover it. 00:28:16.310 [2024-04-25 03:28:50.704493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.310 [2024-04-25 03:28:50.704759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.310 [2024-04-25 03:28:50.704784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.310 qpair failed and we were unable to recover it. 
00:28:16.310 [2024-04-25 03:28:50.704959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:28:16.310 [2024-04-25 03:28:50.705138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:28:16.310 [2024-04-25 03:28:50.705163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 
00:28:16.310 qpair failed and we were unable to recover it. 
[The same four-line failure sequence — two posix_sock_create connect() errors (errno = 111, ECONNREFUSED), an nvme_tcp_qpair_connect_sock error for tqpair=0xd4cf30 (addr=10.0.0.2, port=4420), and "qpair failed and we were unable to recover it." — repeats with only timestamps changing, from 03:28:50.704959 through 03:28:50.747790.]
00:28:16.313 [2024-04-25 03:28:50.748006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.313 [2024-04-25 03:28:50.748225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.313 [2024-04-25 03:28:50.748249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.313 qpair failed and we were unable to recover it. 00:28:16.313 [2024-04-25 03:28:50.748477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.313 [2024-04-25 03:28:50.748647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.313 [2024-04-25 03:28:50.748672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.313 qpair failed and we were unable to recover it. 00:28:16.313 [2024-04-25 03:28:50.748890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.313 [2024-04-25 03:28:50.749103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.313 [2024-04-25 03:28:50.749127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.313 qpair failed and we were unable to recover it. 00:28:16.313 [2024-04-25 03:28:50.749355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.313 [2024-04-25 03:28:50.749580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.313 [2024-04-25 03:28:50.749607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.313 qpair failed and we were unable to recover it. 
00:28:16.313 [2024-04-25 03:28:50.749820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.313 [2024-04-25 03:28:50.750133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.313 [2024-04-25 03:28:50.750181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.313 qpair failed and we were unable to recover it. 00:28:16.313 [2024-04-25 03:28:50.750403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.313 [2024-04-25 03:28:50.750651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.313 [2024-04-25 03:28:50.750677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.313 qpair failed and we were unable to recover it. 00:28:16.314 [2024-04-25 03:28:50.750936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.314 [2024-04-25 03:28:50.751173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.314 [2024-04-25 03:28:50.751197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.314 qpair failed and we were unable to recover it. 00:28:16.314 [2024-04-25 03:28:50.751407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.314 [2024-04-25 03:28:50.751659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.314 [2024-04-25 03:28:50.751687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.314 qpair failed and we were unable to recover it. 
00:28:16.314 [2024-04-25 03:28:50.751904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.314 [2024-04-25 03:28:50.752130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.314 [2024-04-25 03:28:50.752154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.314 qpair failed and we were unable to recover it. 00:28:16.314 [2024-04-25 03:28:50.752393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.314 [2024-04-25 03:28:50.752608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.314 [2024-04-25 03:28:50.752641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.314 qpair failed and we were unable to recover it. 00:28:16.314 [2024-04-25 03:28:50.752845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.314 [2024-04-25 03:28:50.753066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.314 [2024-04-25 03:28:50.753094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.314 qpair failed and we were unable to recover it. 00:28:16.314 [2024-04-25 03:28:50.753331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.314 [2024-04-25 03:28:50.753574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.314 [2024-04-25 03:28:50.753598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.314 qpair failed and we were unable to recover it. 
00:28:16.314 [2024-04-25 03:28:50.753836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.314 [2024-04-25 03:28:50.754031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.314 [2024-04-25 03:28:50.754056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.314 qpair failed and we were unable to recover it. 00:28:16.314 [2024-04-25 03:28:50.754302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.314 [2024-04-25 03:28:50.754507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.314 [2024-04-25 03:28:50.754532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.314 qpair failed and we were unable to recover it. 00:28:16.314 [2024-04-25 03:28:50.754780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.314 [2024-04-25 03:28:50.755044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.314 [2024-04-25 03:28:50.755069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.314 qpair failed and we were unable to recover it. 00:28:16.314 [2024-04-25 03:28:50.755270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.314 [2024-04-25 03:28:50.755493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.314 [2024-04-25 03:28:50.755518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.314 qpair failed and we were unable to recover it. 
00:28:16.314 [2024-04-25 03:28:50.755794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.314 [2024-04-25 03:28:50.756006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.314 [2024-04-25 03:28:50.756030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.314 qpair failed and we were unable to recover it. 00:28:16.314 [2024-04-25 03:28:50.756230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.314 [2024-04-25 03:28:50.756470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.314 [2024-04-25 03:28:50.756497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.314 qpair failed and we were unable to recover it. 00:28:16.314 [2024-04-25 03:28:50.756755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.314 [2024-04-25 03:28:50.756993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.314 [2024-04-25 03:28:50.757017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.314 qpair failed and we were unable to recover it. 00:28:16.314 [2024-04-25 03:28:50.757224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.314 [2024-04-25 03:28:50.757468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.314 [2024-04-25 03:28:50.757510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.314 qpair failed and we were unable to recover it. 
00:28:16.314 [2024-04-25 03:28:50.757738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.314 [2024-04-25 03:28:50.757941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.314 [2024-04-25 03:28:50.757966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.314 qpair failed and we were unable to recover it. 00:28:16.314 [2024-04-25 03:28:50.758187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.314 [2024-04-25 03:28:50.758394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.314 [2024-04-25 03:28:50.758418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.314 qpair failed and we were unable to recover it. 00:28:16.314 [2024-04-25 03:28:50.758673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.314 [2024-04-25 03:28:50.758846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.314 [2024-04-25 03:28:50.758871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.314 qpair failed and we were unable to recover it. 00:28:16.314 [2024-04-25 03:28:50.759091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.314 [2024-04-25 03:28:50.759279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.314 [2024-04-25 03:28:50.759307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.314 qpair failed and we were unable to recover it. 
00:28:16.314 [2024-04-25 03:28:50.759531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.314 [2024-04-25 03:28:50.759733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.314 [2024-04-25 03:28:50.759758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.314 qpair failed and we were unable to recover it. 00:28:16.314 [2024-04-25 03:28:50.759949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.314 [2024-04-25 03:28:50.760159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.314 [2024-04-25 03:28:50.760186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.314 qpair failed and we were unable to recover it. 00:28:16.314 [2024-04-25 03:28:50.760426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.314 [2024-04-25 03:28:50.760651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.314 [2024-04-25 03:28:50.760678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.314 qpair failed and we were unable to recover it. 00:28:16.314 [2024-04-25 03:28:50.760919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.314 [2024-04-25 03:28:50.761128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.314 [2024-04-25 03:28:50.761153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.314 qpair failed and we were unable to recover it. 
00:28:16.314 [2024-04-25 03:28:50.761374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.314 [2024-04-25 03:28:50.761633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.314 [2024-04-25 03:28:50.761658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.314 qpair failed and we were unable to recover it. 00:28:16.314 [2024-04-25 03:28:50.761832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.314 [2024-04-25 03:28:50.762107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.314 [2024-04-25 03:28:50.762132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.314 qpair failed and we were unable to recover it. 00:28:16.314 [2024-04-25 03:28:50.762341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.314 [2024-04-25 03:28:50.762557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.314 [2024-04-25 03:28:50.762582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.314 qpair failed and we were unable to recover it. 00:28:16.314 [2024-04-25 03:28:50.762813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.314 [2024-04-25 03:28:50.763045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.314 [2024-04-25 03:28:50.763069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.314 qpair failed and we were unable to recover it. 
00:28:16.314 [2024-04-25 03:28:50.763397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.314 [2024-04-25 03:28:50.763660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.314 [2024-04-25 03:28:50.763685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.314 qpair failed and we were unable to recover it. 00:28:16.314 [2024-04-25 03:28:50.763908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.314 [2024-04-25 03:28:50.764166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.314 [2024-04-25 03:28:50.764208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.314 qpair failed and we were unable to recover it. 00:28:16.314 [2024-04-25 03:28:50.764420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.314 [2024-04-25 03:28:50.764661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.314 [2024-04-25 03:28:50.764703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.314 qpair failed and we were unable to recover it. 00:28:16.315 [2024-04-25 03:28:50.764941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.315 [2024-04-25 03:28:50.765132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.315 [2024-04-25 03:28:50.765158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.315 qpair failed and we were unable to recover it. 
00:28:16.315 [2024-04-25 03:28:50.765449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.315 [2024-04-25 03:28:50.765693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.315 [2024-04-25 03:28:50.765718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.315 qpair failed and we were unable to recover it. 00:28:16.315 [2024-04-25 03:28:50.765883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.315 [2024-04-25 03:28:50.766144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.315 [2024-04-25 03:28:50.766182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.315 qpair failed and we were unable to recover it. 00:28:16.315 [2024-04-25 03:28:50.766387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.315 [2024-04-25 03:28:50.766642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.315 [2024-04-25 03:28:50.766668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.315 qpair failed and we were unable to recover it. 00:28:16.315 [2024-04-25 03:28:50.766841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.315 [2024-04-25 03:28:50.767114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.315 [2024-04-25 03:28:50.767138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.315 qpair failed and we were unable to recover it. 
00:28:16.315 [2024-04-25 03:28:50.767356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.315 [2024-04-25 03:28:50.767585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.315 [2024-04-25 03:28:50.767617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.315 qpair failed and we were unable to recover it. 00:28:16.315 [2024-04-25 03:28:50.767844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.315 [2024-04-25 03:28:50.768183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.315 [2024-04-25 03:28:50.768241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.315 qpair failed and we were unable to recover it. 00:28:16.315 [2024-04-25 03:28:50.768685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.315 [2024-04-25 03:28:50.768899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.315 [2024-04-25 03:28:50.768924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.315 qpair failed and we were unable to recover it. 00:28:16.315 [2024-04-25 03:28:50.769122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.315 [2024-04-25 03:28:50.769348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.315 [2024-04-25 03:28:50.769373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.315 qpair failed and we were unable to recover it. 
00:28:16.315 [2024-04-25 03:28:50.769600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.315 [2024-04-25 03:28:50.769845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.315 [2024-04-25 03:28:50.769871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.315 qpair failed and we were unable to recover it. 00:28:16.315 [2024-04-25 03:28:50.770060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.315 [2024-04-25 03:28:50.770277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.315 [2024-04-25 03:28:50.770301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.315 qpair failed and we were unable to recover it. 00:28:16.315 [2024-04-25 03:28:50.770540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.315 [2024-04-25 03:28:50.770741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.315 [2024-04-25 03:28:50.770767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.315 qpair failed and we were unable to recover it. 00:28:16.315 [2024-04-25 03:28:50.770942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.315 [2024-04-25 03:28:50.771206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.315 [2024-04-25 03:28:50.771234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.315 qpair failed and we were unable to recover it. 
00:28:16.315 [2024-04-25 03:28:50.771421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.315 [2024-04-25 03:28:50.771647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.315 [2024-04-25 03:28:50.771673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.315 qpair failed and we were unable to recover it. 00:28:16.315 [2024-04-25 03:28:50.771840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.315 [2024-04-25 03:28:50.772016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.315 [2024-04-25 03:28:50.772041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.315 qpair failed and we were unable to recover it. 00:28:16.315 [2024-04-25 03:28:50.772220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.315 [2024-04-25 03:28:50.772461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.315 [2024-04-25 03:28:50.772489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.315 qpair failed and we were unable to recover it. 00:28:16.315 [2024-04-25 03:28:50.772714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.315 [2024-04-25 03:28:50.772888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.315 [2024-04-25 03:28:50.772913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.315 qpair failed and we were unable to recover it. 
00:28:16.315 [2024-04-25 03:28:50.773135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.315 [2024-04-25 03:28:50.773312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.315 [2024-04-25 03:28:50.773337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.315 qpair failed and we were unable to recover it. 00:28:16.315 [2024-04-25 03:28:50.773532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.315 [2024-04-25 03:28:50.773732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.315 [2024-04-25 03:28:50.773757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.315 qpair failed and we were unable to recover it. 00:28:16.315 [2024-04-25 03:28:50.773943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.315 [2024-04-25 03:28:50.774096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.315 [2024-04-25 03:28:50.774120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.315 qpair failed and we were unable to recover it. 00:28:16.315 [2024-04-25 03:28:50.774328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.315 [2024-04-25 03:28:50.774502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.315 [2024-04-25 03:28:50.774526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.315 qpair failed and we were unable to recover it. 
00:28:16.315 [2024-04-25 03:28:50.774713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.315 [2024-04-25 03:28:50.774916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.315 [2024-04-25 03:28:50.774940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.315 qpair failed and we were unable to recover it. 00:28:16.315 [2024-04-25 03:28:50.775281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.315 [2024-04-25 03:28:50.775537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.315 [2024-04-25 03:28:50.775565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.315 qpair failed and we were unable to recover it. 00:28:16.315 [2024-04-25 03:28:50.775790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.315 [2024-04-25 03:28:50.775969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.315 [2024-04-25 03:28:50.775994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.315 qpair failed and we were unable to recover it. 00:28:16.315 [2024-04-25 03:28:50.776233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.315 [2024-04-25 03:28:50.776513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.315 [2024-04-25 03:28:50.776541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.315 qpair failed and we were unable to recover it. 
00:28:16.587 [2024-04-25 03:28:50.819843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.587 [2024-04-25 03:28:50.820070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.587 [2024-04-25 03:28:50.820095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.587 qpair failed and we were unable to recover it. 00:28:16.587 [2024-04-25 03:28:50.820334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.587 [2024-04-25 03:28:50.820562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.587 [2024-04-25 03:28:50.820589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.587 qpair failed and we were unable to recover it. 00:28:16.587 [2024-04-25 03:28:50.820851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.587 [2024-04-25 03:28:50.821092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.587 [2024-04-25 03:28:50.821119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.587 qpair failed and we were unable to recover it. 00:28:16.588 [2024-04-25 03:28:50.821308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.588 [2024-04-25 03:28:50.821729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.588 [2024-04-25 03:28:50.821757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.588 qpair failed and we were unable to recover it. 
00:28:16.588 [2024-04-25 03:28:50.822003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.588 [2024-04-25 03:28:50.822275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.588 [2024-04-25 03:28:50.822303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.588 qpair failed and we were unable to recover it. 00:28:16.588 [2024-04-25 03:28:50.822519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.588 [2024-04-25 03:28:50.822698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.588 [2024-04-25 03:28:50.822725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.588 qpair failed and we were unable to recover it. 00:28:16.588 [2024-04-25 03:28:50.822976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.588 [2024-04-25 03:28:50.823264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.588 [2024-04-25 03:28:50.823321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.588 qpair failed and we were unable to recover it. 00:28:16.588 [2024-04-25 03:28:50.823561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.588 [2024-04-25 03:28:50.823773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.588 [2024-04-25 03:28:50.823800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.588 qpair failed and we were unable to recover it. 
00:28:16.588 [2024-04-25 03:28:50.824016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.588 [2024-04-25 03:28:50.824382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.588 [2024-04-25 03:28:50.824442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.588 qpair failed and we were unable to recover it. 00:28:16.588 [2024-04-25 03:28:50.824677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.588 [2024-04-25 03:28:50.824877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.588 [2024-04-25 03:28:50.824902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.588 qpair failed and we were unable to recover it. 00:28:16.588 [2024-04-25 03:28:50.825175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.588 [2024-04-25 03:28:50.825439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.588 [2024-04-25 03:28:50.825487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.588 qpair failed and we were unable to recover it. 00:28:16.588 [2024-04-25 03:28:50.825733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.588 [2024-04-25 03:28:50.825990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.588 [2024-04-25 03:28:50.826038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.588 qpair failed and we were unable to recover it. 
00:28:16.588 [2024-04-25 03:28:50.826285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.588 [2024-04-25 03:28:50.826664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.588 [2024-04-25 03:28:50.826712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.588 qpair failed and we were unable to recover it. 00:28:16.588 [2024-04-25 03:28:50.826933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.588 [2024-04-25 03:28:50.827156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.588 [2024-04-25 03:28:50.827184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.588 qpair failed and we were unable to recover it. 00:28:16.588 [2024-04-25 03:28:50.827392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.588 [2024-04-25 03:28:50.827604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.588 [2024-04-25 03:28:50.827710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.588 qpair failed and we were unable to recover it. 00:28:16.588 [2024-04-25 03:28:50.827987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.588 [2024-04-25 03:28:50.828216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.588 [2024-04-25 03:28:50.828242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.588 qpair failed and we were unable to recover it. 
00:28:16.588 [2024-04-25 03:28:50.828511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.588 [2024-04-25 03:28:50.828714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.588 [2024-04-25 03:28:50.828739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.588 qpair failed and we were unable to recover it. 00:28:16.588 [2024-04-25 03:28:50.828940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.588 [2024-04-25 03:28:50.829258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.588 [2024-04-25 03:28:50.829310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.588 qpair failed and we were unable to recover it. 00:28:16.588 [2024-04-25 03:28:50.829525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.588 [2024-04-25 03:28:50.829719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.588 [2024-04-25 03:28:50.829746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.588 qpair failed and we were unable to recover it. 00:28:16.588 [2024-04-25 03:28:50.829983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.588 [2024-04-25 03:28:50.830245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.588 [2024-04-25 03:28:50.830298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.588 qpair failed and we were unable to recover it. 
00:28:16.588 [2024-04-25 03:28:50.830546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.588 [2024-04-25 03:28:50.830757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.588 [2024-04-25 03:28:50.830783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.588 qpair failed and we were unable to recover it. 00:28:16.588 [2024-04-25 03:28:50.831085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.588 [2024-04-25 03:28:50.831307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.588 [2024-04-25 03:28:50.831331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.588 qpair failed and we were unable to recover it. 00:28:16.588 [2024-04-25 03:28:50.831592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.588 [2024-04-25 03:28:50.831814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.588 [2024-04-25 03:28:50.831842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.588 qpair failed and we were unable to recover it. 00:28:16.588 [2024-04-25 03:28:50.832084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.588 [2024-04-25 03:28:50.832398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.588 [2024-04-25 03:28:50.832449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.588 qpair failed and we were unable to recover it. 
00:28:16.588 [2024-04-25 03:28:50.832705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.588 [2024-04-25 03:28:50.832928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.588 [2024-04-25 03:28:50.832952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.588 qpair failed and we were unable to recover it. 00:28:16.588 [2024-04-25 03:28:50.833210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.588 [2024-04-25 03:28:50.833435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.588 [2024-04-25 03:28:50.833460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.588 qpair failed and we were unable to recover it. 00:28:16.588 [2024-04-25 03:28:50.833682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.588 [2024-04-25 03:28:50.833944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.588 [2024-04-25 03:28:50.833972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.588 qpair failed and we were unable to recover it. 00:28:16.588 [2024-04-25 03:28:50.834220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.588 [2024-04-25 03:28:50.834403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.588 [2024-04-25 03:28:50.834427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.588 qpair failed and we were unable to recover it. 
00:28:16.588 [2024-04-25 03:28:50.834686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.588 [2024-04-25 03:28:50.834901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.588 [2024-04-25 03:28:50.834940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.588 qpair failed and we were unable to recover it. 00:28:16.588 [2024-04-25 03:28:50.835165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.588 [2024-04-25 03:28:50.835517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.588 [2024-04-25 03:28:50.835584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.588 qpair failed and we were unable to recover it. 00:28:16.588 [2024-04-25 03:28:50.835852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.588 [2024-04-25 03:28:50.836059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.588 [2024-04-25 03:28:50.836083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.588 qpair failed and we were unable to recover it. 00:28:16.588 [2024-04-25 03:28:50.836250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.588 [2024-04-25 03:28:50.836533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.588 [2024-04-25 03:28:50.836587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.588 qpair failed and we were unable to recover it. 
00:28:16.588 [2024-04-25 03:28:50.836836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.589 [2024-04-25 03:28:50.837210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.589 [2024-04-25 03:28:50.837260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.589 qpair failed and we were unable to recover it. 00:28:16.589 [2024-04-25 03:28:50.837453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.589 [2024-04-25 03:28:50.837656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.589 [2024-04-25 03:28:50.837685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.589 qpair failed and we were unable to recover it. 00:28:16.589 [2024-04-25 03:28:50.837950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.589 [2024-04-25 03:28:50.838140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.589 [2024-04-25 03:28:50.838165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.589 qpair failed and we were unable to recover it. 00:28:16.589 [2024-04-25 03:28:50.838427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.589 [2024-04-25 03:28:50.838644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.589 [2024-04-25 03:28:50.838671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.589 qpair failed and we were unable to recover it. 
00:28:16.589 [2024-04-25 03:28:50.838924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.589 [2024-04-25 03:28:50.839204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.589 [2024-04-25 03:28:50.839249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.589 qpair failed and we were unable to recover it. 00:28:16.589 [2024-04-25 03:28:50.839472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.589 [2024-04-25 03:28:50.839712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.589 [2024-04-25 03:28:50.839740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.589 qpair failed and we were unable to recover it. 00:28:16.589 [2024-04-25 03:28:50.839980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.589 [2024-04-25 03:28:50.840297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.589 [2024-04-25 03:28:50.840321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.589 qpair failed and we were unable to recover it. 00:28:16.589 [2024-04-25 03:28:50.840678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.589 [2024-04-25 03:28:50.840913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.589 [2024-04-25 03:28:50.840945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.589 qpair failed and we were unable to recover it. 
00:28:16.589 [2024-04-25 03:28:50.841187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.589 [2024-04-25 03:28:50.841427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.589 [2024-04-25 03:28:50.841471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.589 qpair failed and we were unable to recover it. 00:28:16.589 [2024-04-25 03:28:50.841660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.589 [2024-04-25 03:28:50.841879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.589 [2024-04-25 03:28:50.841906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.589 qpair failed and we were unable to recover it. 00:28:16.589 [2024-04-25 03:28:50.842128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.589 [2024-04-25 03:28:50.842410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.589 [2024-04-25 03:28:50.842437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.589 qpair failed and we were unable to recover it. 00:28:16.589 [2024-04-25 03:28:50.842681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.589 [2024-04-25 03:28:50.842889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.589 [2024-04-25 03:28:50.842917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.589 qpair failed and we were unable to recover it. 
00:28:16.589 [2024-04-25 03:28:50.843107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.589 [2024-04-25 03:28:50.843287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.589 [2024-04-25 03:28:50.843315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.589 qpair failed and we were unable to recover it. 00:28:16.589 [2024-04-25 03:28:50.843542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.589 [2024-04-25 03:28:50.843761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.589 [2024-04-25 03:28:50.843804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.589 qpair failed and we were unable to recover it. 00:28:16.589 [2024-04-25 03:28:50.844027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.589 [2024-04-25 03:28:50.844234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.589 [2024-04-25 03:28:50.844262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.589 qpair failed and we were unable to recover it. 00:28:16.589 [2024-04-25 03:28:50.844477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.589 [2024-04-25 03:28:50.844793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.589 [2024-04-25 03:28:50.844821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.589 qpair failed and we were unable to recover it. 
00:28:16.589 [2024-04-25 03:28:50.845002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.589 [2024-04-25 03:28:50.845339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.589 [2024-04-25 03:28:50.845397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.589 qpair failed and we were unable to recover it. 00:28:16.589 [2024-04-25 03:28:50.845613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.589 [2024-04-25 03:28:50.845831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.589 [2024-04-25 03:28:50.845856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.589 qpair failed and we were unable to recover it. 00:28:16.589 [2024-04-25 03:28:50.846127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.589 [2024-04-25 03:28:50.846444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.589 [2024-04-25 03:28:50.846495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.589 qpair failed and we were unable to recover it. 00:28:16.589 [2024-04-25 03:28:50.846749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.589 [2024-04-25 03:28:50.847122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.589 [2024-04-25 03:28:50.847175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.589 qpair failed and we were unable to recover it. 
00:28:16.589 [2024-04-25 03:28:50.847392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.589 [2024-04-25 03:28:50.847636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.589 [2024-04-25 03:28:50.847663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.589 qpair failed and we were unable to recover it. 00:28:16.589 [2024-04-25 03:28:50.847865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.589 [2024-04-25 03:28:50.848178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.589 [2024-04-25 03:28:50.848233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.589 qpair failed and we were unable to recover it. 00:28:16.589 [2024-04-25 03:28:50.848538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.589 [2024-04-25 03:28:50.848784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.589 [2024-04-25 03:28:50.848809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.589 qpair failed and we were unable to recover it. 00:28:16.589 [2024-04-25 03:28:50.849036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.589 [2024-04-25 03:28:50.849381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.589 [2024-04-25 03:28:50.849434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.589 qpair failed and we were unable to recover it. 
00:28:16.589 [2024-04-25 03:28:50.849681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.589 [2024-04-25 03:28:50.849850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.589 [2024-04-25 03:28:50.849874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.589 qpair failed and we were unable to recover it. 00:28:16.589 [2024-04-25 03:28:50.850060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.589 [2024-04-25 03:28:50.850285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.589 [2024-04-25 03:28:50.850310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.589 qpair failed and we were unable to recover it. 00:28:16.589 [2024-04-25 03:28:50.850527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.589 [2024-04-25 03:28:50.850760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.589 [2024-04-25 03:28:50.850788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.589 qpair failed and we were unable to recover it. 00:28:16.589 [2024-04-25 03:28:50.851008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.589 [2024-04-25 03:28:50.851336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.589 [2024-04-25 03:28:50.851387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.589 qpair failed and we were unable to recover it. 
00:28:16.589 [2024-04-25 03:28:50.851711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.589 [2024-04-25 03:28:50.851950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.589 [2024-04-25 03:28:50.851979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.589 qpair failed and we were unable to recover it. 00:28:16.589 [2024-04-25 03:28:50.852203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.590 [2024-04-25 03:28:50.852487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.590 [2024-04-25 03:28:50.852536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.590 qpair failed and we were unable to recover it. 00:28:16.590 [2024-04-25 03:28:50.852780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.590 [2024-04-25 03:28:50.853048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.590 [2024-04-25 03:28:50.853098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.590 qpair failed and we were unable to recover it. 00:28:16.590 [2024-04-25 03:28:50.853411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.590 [2024-04-25 03:28:50.853692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.590 [2024-04-25 03:28:50.853717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.590 qpair failed and we were unable to recover it. 
00:28:16.590 [2024-04-25 03:28:50.853941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.590 [2024-04-25 03:28:50.854251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.590 [2024-04-25 03:28:50.854289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.590 qpair failed and we were unable to recover it. 00:28:16.590 [2024-04-25 03:28:50.854490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.590 [2024-04-25 03:28:50.854706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.590 [2024-04-25 03:28:50.854734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.590 qpair failed and we were unable to recover it. 00:28:16.590 [2024-04-25 03:28:50.854926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.590 [2024-04-25 03:28:50.855308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.590 [2024-04-25 03:28:50.855362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.590 qpair failed and we were unable to recover it. 00:28:16.590 [2024-04-25 03:28:50.855600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.590 [2024-04-25 03:28:50.855820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.590 [2024-04-25 03:28:50.855849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.590 qpair failed and we were unable to recover it. 
00:28:16.590 [2024-04-25 03:28:50.856045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.590 [2024-04-25 03:28:50.856239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.590 [2024-04-25 03:28:50.856264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.590 qpair failed and we were unable to recover it. 00:28:16.590 [2024-04-25 03:28:50.856485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.590 [2024-04-25 03:28:50.856791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.590 [2024-04-25 03:28:50.856819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.590 qpair failed and we were unable to recover it. 00:28:16.590 [2024-04-25 03:28:50.857062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.590 [2024-04-25 03:28:50.857333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.590 [2024-04-25 03:28:50.857377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.590 qpair failed and we were unable to recover it. 00:28:16.590 [2024-04-25 03:28:50.857593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.590 [2024-04-25 03:28:50.857847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.590 [2024-04-25 03:28:50.857875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.590 qpair failed and we were unable to recover it. 
00:28:16.590 [2024-04-25 03:28:50.858091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.590 [2024-04-25 03:28:50.858288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.590 [2024-04-25 03:28:50.858320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.590 qpair failed and we were unable to recover it. 00:28:16.590 [2024-04-25 03:28:50.858551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.590 [2024-04-25 03:28:50.858771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.590 [2024-04-25 03:28:50.858799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.590 qpair failed and we were unable to recover it. 00:28:16.590 [2024-04-25 03:28:50.859043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.590 [2024-04-25 03:28:50.859403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.590 [2024-04-25 03:28:50.859433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.590 qpair failed and we were unable to recover it. 00:28:16.590 [2024-04-25 03:28:50.859698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.590 [2024-04-25 03:28:50.859911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.590 [2024-04-25 03:28:50.859938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.590 qpair failed and we were unable to recover it. 
00:28:16.590 [2024-04-25 03:28:50.860156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.590 [2024-04-25 03:28:50.860427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.590 [2024-04-25 03:28:50.860473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.590 qpair failed and we were unable to recover it. 00:28:16.590 [2024-04-25 03:28:50.860709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.590 [2024-04-25 03:28:50.860958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.590 [2024-04-25 03:28:50.860987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.590 qpair failed and we were unable to recover it. 00:28:16.590 [2024-04-25 03:28:50.861218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.590 [2024-04-25 03:28:50.861567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.590 [2024-04-25 03:28:50.861610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.590 qpair failed and we were unable to recover it. 00:28:16.590 [2024-04-25 03:28:50.861840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.590 [2024-04-25 03:28:50.862062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.590 [2024-04-25 03:28:50.862087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.590 qpair failed and we were unable to recover it. 
00:28:16.590 [2024-04-25 03:28:50.862320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.590 [2024-04-25 03:28:50.862578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.590 [2024-04-25 03:28:50.862610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.590 qpair failed and we were unable to recover it. 00:28:16.590 [2024-04-25 03:28:50.862865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.590 [2024-04-25 03:28:50.863250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.590 [2024-04-25 03:28:50.863299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.590 qpair failed and we were unable to recover it. 00:28:16.590 [2024-04-25 03:28:50.863548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.590 [2024-04-25 03:28:50.863741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.590 [2024-04-25 03:28:50.863769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.590 qpair failed and we were unable to recover it. 00:28:16.590 [2024-04-25 03:28:50.864006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.590 [2024-04-25 03:28:50.864258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.590 [2024-04-25 03:28:50.864303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.590 qpair failed and we were unable to recover it. 
00:28:16.590 [2024-04-25 03:28:50.864604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.590 [2024-04-25 03:28:50.864851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.590 [2024-04-25 03:28:50.864880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.590 qpair failed and we were unable to recover it. 00:28:16.590 [2024-04-25 03:28:50.865119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.590 [2024-04-25 03:28:50.865429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.590 [2024-04-25 03:28:50.865492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.590 qpair failed and we were unable to recover it. 00:28:16.590 [2024-04-25 03:28:50.865714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.590 [2024-04-25 03:28:50.865882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.590 [2024-04-25 03:28:50.865908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.590 qpair failed and we were unable to recover it. 00:28:16.590 [2024-04-25 03:28:50.866068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.590 [2024-04-25 03:28:50.866397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.590 [2024-04-25 03:28:50.866454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.590 qpair failed and we were unable to recover it. 
00:28:16.590 [2024-04-25 03:28:50.866759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.590 [2024-04-25 03:28:50.867069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.590 [2024-04-25 03:28:50.867126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.590 qpair failed and we were unable to recover it. 00:28:16.590 [2024-04-25 03:28:50.867366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.590 [2024-04-25 03:28:50.867582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.590 [2024-04-25 03:28:50.867610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.590 qpair failed and we were unable to recover it. 00:28:16.590 [2024-04-25 03:28:50.867835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.591 [2024-04-25 03:28:50.868070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.591 [2024-04-25 03:28:50.868115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.591 qpair failed and we were unable to recover it. 00:28:16.591 [2024-04-25 03:28:50.868360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.591 [2024-04-25 03:28:50.868574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.591 [2024-04-25 03:28:50.868602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.591 qpair failed and we were unable to recover it. 
00:28:16.591 [2024-04-25 03:28:50.868855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.591 [2024-04-25 03:28:50.869076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.591 [2024-04-25 03:28:50.869101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.591 qpair failed and we were unable to recover it. 00:28:16.591 [2024-04-25 03:28:50.869361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.591 [2024-04-25 03:28:50.869585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.591 [2024-04-25 03:28:50.869613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.591 qpair failed and we were unable to recover it. 00:28:16.591 [2024-04-25 03:28:50.869841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.591 [2024-04-25 03:28:50.870095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.591 [2024-04-25 03:28:50.870134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.591 qpair failed and we were unable to recover it. 00:28:16.591 [2024-04-25 03:28:50.870402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.591 [2024-04-25 03:28:50.870618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.591 [2024-04-25 03:28:50.870651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.591 qpair failed and we were unable to recover it. 
00:28:16.591 [2024-04-25 03:28:50.870861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.591 [2024-04-25 03:28:50.871033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.591 [2024-04-25 03:28:50.871057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.591 qpair failed and we were unable to recover it. 00:28:16.591 [2024-04-25 03:28:50.871257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.591 [2024-04-25 03:28:50.871657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.591 [2024-04-25 03:28:50.871704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.591 qpair failed and we were unable to recover it. 00:28:16.591 [2024-04-25 03:28:50.871918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.591 [2024-04-25 03:28:50.872188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.591 [2024-04-25 03:28:50.872233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.591 qpair failed and we were unable to recover it. 00:28:16.591 [2024-04-25 03:28:50.872475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.591 [2024-04-25 03:28:50.872763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.591 [2024-04-25 03:28:50.872792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.591 qpair failed and we were unable to recover it. 
00:28:16.591 [2024-04-25 03:28:50.873043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.591 [2024-04-25 03:28:50.873317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.591 [2024-04-25 03:28:50.873367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.591 qpair failed and we were unable to recover it. 00:28:16.591 [2024-04-25 03:28:50.873604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.591 [2024-04-25 03:28:50.873847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.591 [2024-04-25 03:28:50.873876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.591 qpair failed and we were unable to recover it. 00:28:16.591 [2024-04-25 03:28:50.874117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.591 [2024-04-25 03:28:50.874334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.591 [2024-04-25 03:28:50.874361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.591 qpair failed and we were unable to recover it. 00:28:16.591 [2024-04-25 03:28:50.874574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.591 [2024-04-25 03:28:50.874825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.591 [2024-04-25 03:28:50.874850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.591 qpair failed and we were unable to recover it. 
00:28:16.591 [2024-04-25 03:28:50.875223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.591 [2024-04-25 03:28:50.875479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.591 [2024-04-25 03:28:50.875503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.591 qpair failed and we were unable to recover it. 00:28:16.591 [2024-04-25 03:28:50.875737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.591 [2024-04-25 03:28:50.876038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.591 [2024-04-25 03:28:50.876067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.591 qpair failed and we were unable to recover it. 00:28:16.591 [2024-04-25 03:28:50.876318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.591 [2024-04-25 03:28:50.876655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.591 [2024-04-25 03:28:50.876709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.591 qpair failed and we were unable to recover it. 00:28:16.591 [2024-04-25 03:28:50.876924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.591 [2024-04-25 03:28:50.877360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.591 [2024-04-25 03:28:50.877413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.591 qpair failed and we were unable to recover it. 
00:28:16.591 [2024-04-25 03:28:50.877600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.591 [2024-04-25 03:28:50.877859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.591 [2024-04-25 03:28:50.877886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.591 qpair failed and we were unable to recover it. 00:28:16.591 [2024-04-25 03:28:50.878322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.591 [2024-04-25 03:28:50.878729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.591 [2024-04-25 03:28:50.878757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.591 qpair failed and we were unable to recover it. 00:28:16.591 [2024-04-25 03:28:50.878981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.591 [2024-04-25 03:28:50.879198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.591 [2024-04-25 03:28:50.879226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.591 qpair failed and we were unable to recover it. 00:28:16.591 [2024-04-25 03:28:50.879476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.591 [2024-04-25 03:28:50.879691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.591 [2024-04-25 03:28:50.879721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.591 qpair failed and we were unable to recover it. 
00:28:16.591 [2024-04-25 03:28:50.879907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.591 [2024-04-25 03:28:50.880191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.591 [2024-04-25 03:28:50.880242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.591 qpair failed and we were unable to recover it. 00:28:16.591 [2024-04-25 03:28:50.880567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.591 [2024-04-25 03:28:50.880781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.591 [2024-04-25 03:28:50.880809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.591 qpair failed and we were unable to recover it. 00:28:16.591 [2024-04-25 03:28:50.881051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.592 [2024-04-25 03:28:50.881423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.592 [2024-04-25 03:28:50.881480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.592 qpair failed and we were unable to recover it. 00:28:16.592 [2024-04-25 03:28:50.881704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.592 [2024-04-25 03:28:50.881919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.592 [2024-04-25 03:28:50.881947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.592 qpair failed and we were unable to recover it. 
00:28:16.592 [2024-04-25 03:28:50.882179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.592 [2024-04-25 03:28:50.882400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.592 [2024-04-25 03:28:50.882425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.592 qpair failed and we were unable to recover it. 00:28:16.592 [2024-04-25 03:28:50.882691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.592 [2024-04-25 03:28:50.882892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.592 [2024-04-25 03:28:50.882919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.592 qpair failed and we were unable to recover it. 00:28:16.592 [2024-04-25 03:28:50.883139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.592 [2024-04-25 03:28:50.883388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.592 [2024-04-25 03:28:50.883428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.592 qpair failed and we were unable to recover it. 00:28:16.592 [2024-04-25 03:28:50.883638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.592 [2024-04-25 03:28:50.883841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.592 [2024-04-25 03:28:50.883870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.592 qpair failed and we were unable to recover it. 
00:28:16.592 [2024-04-25 03:28:50.884163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.592 [2024-04-25 03:28:50.884390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.592 [2024-04-25 03:28:50.884414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.592 qpair failed and we were unable to recover it. 00:28:16.592 [2024-04-25 03:28:50.884644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.592 [2024-04-25 03:28:50.884862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.592 [2024-04-25 03:28:50.884890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.592 qpair failed and we were unable to recover it. 00:28:16.592 [2024-04-25 03:28:50.885106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.592 [2024-04-25 03:28:50.885340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.592 [2024-04-25 03:28:50.885386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.592 qpair failed and we were unable to recover it. 00:28:16.592 [2024-04-25 03:28:50.885625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.592 [2024-04-25 03:28:50.885850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.592 [2024-04-25 03:28:50.885877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.592 qpair failed and we were unable to recover it. 
00:28:16.592 [2024-04-25 03:28:50.886139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.592 [2024-04-25 03:28:50.886412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.592 [2024-04-25 03:28:50.886456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.592 qpair failed and we were unable to recover it. 00:28:16.592 [2024-04-25 03:28:50.886677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.592 [2024-04-25 03:28:50.886905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.592 [2024-04-25 03:28:50.886929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.592 qpair failed and we were unable to recover it. 00:28:16.592 [2024-04-25 03:28:50.887207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.592 [2024-04-25 03:28:50.887455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.592 [2024-04-25 03:28:50.887479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.592 qpair failed and we were unable to recover it. 00:28:16.592 [2024-04-25 03:28:50.887713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.592 [2024-04-25 03:28:50.887956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.592 [2024-04-25 03:28:50.887984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.592 qpair failed and we were unable to recover it. 
00:28:16.592 [2024-04-25 03:28:50.888240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.592 [2024-04-25 03:28:50.888684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.592 [2024-04-25 03:28:50.888712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.592 qpair failed and we were unable to recover it. 00:28:16.592 [2024-04-25 03:28:50.888932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.592 [2024-04-25 03:28:50.889281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.592 [2024-04-25 03:28:50.889309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.592 qpair failed and we were unable to recover it. 00:28:16.592 [2024-04-25 03:28:50.889525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.592 [2024-04-25 03:28:50.889738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.592 [2024-04-25 03:28:50.889767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.592 qpair failed and we were unable to recover it. 00:28:16.592 [2024-04-25 03:28:50.889979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.592 [2024-04-25 03:28:50.890342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.592 [2024-04-25 03:28:50.890398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.592 qpair failed and we were unable to recover it. 
00:28:16.592 [2024-04-25 03:28:50.890619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.592 [2024-04-25 03:28:50.890802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.592 [2024-04-25 03:28:50.890831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.592 qpair failed and we were unable to recover it. 00:28:16.592 [2024-04-25 03:28:50.891194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.592 [2024-04-25 03:28:50.891653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.592 [2024-04-25 03:28:50.891698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.592 qpair failed and we were unable to recover it. 00:28:16.592 [2024-04-25 03:28:50.891939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.592 [2024-04-25 03:28:50.892318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.592 [2024-04-25 03:28:50.892346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.592 qpair failed and we were unable to recover it. 00:28:16.592 [2024-04-25 03:28:50.892563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.592 [2024-04-25 03:28:50.892816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.592 [2024-04-25 03:28:50.892844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.592 qpair failed and we were unable to recover it. 
00:28:16.592 [2024-04-25 03:28:50.893195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.592 [2024-04-25 03:28:50.893528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.592 [2024-04-25 03:28:50.893556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.592 qpair failed and we were unable to recover it.
00:28:16.592 [2024-04-25 03:28:50.893799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.592 [2024-04-25 03:28:50.894168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.592 [2024-04-25 03:28:50.894196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.592 qpair failed and we were unable to recover it.
00:28:16.592 [2024-04-25 03:28:50.894411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.592 [2024-04-25 03:28:50.894650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.592 [2024-04-25 03:28:50.894675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.592 qpair failed and we were unable to recover it.
00:28:16.592 [2024-04-25 03:28:50.894916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.592 [2024-04-25 03:28:50.895153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.592 [2024-04-25 03:28:50.895177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.592 qpair failed and we were unable to recover it.
00:28:16.592 [2024-04-25 03:28:50.895370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.592 [2024-04-25 03:28:50.895544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.592 [2024-04-25 03:28:50.895583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.592 qpair failed and we were unable to recover it.
00:28:16.592 [2024-04-25 03:28:50.895789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.592 [2024-04-25 03:28:50.895954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.592 [2024-04-25 03:28:50.895992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.592 qpair failed and we were unable to recover it.
00:28:16.592 [2024-04-25 03:28:50.896209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.592 [2024-04-25 03:28:50.896424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.592 [2024-04-25 03:28:50.896453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.593 qpair failed and we were unable to recover it.
00:28:16.593 [2024-04-25 03:28:50.896679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.593 [2024-04-25 03:28:50.896929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.593 [2024-04-25 03:28:50.896957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.593 qpair failed and we were unable to recover it.
00:28:16.593 [2024-04-25 03:28:50.897172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.593 [2024-04-25 03:28:50.897518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.593 [2024-04-25 03:28:50.897545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.593 qpair failed and we were unable to recover it.
00:28:16.593 [2024-04-25 03:28:50.897820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.593 [2024-04-25 03:28:50.898089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.593 [2024-04-25 03:28:50.898133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.593 qpair failed and we were unable to recover it.
00:28:16.593 [2024-04-25 03:28:50.898318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.593 [2024-04-25 03:28:50.898568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.593 [2024-04-25 03:28:50.898608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.593 qpair failed and we were unable to recover it.
00:28:16.593 [2024-04-25 03:28:50.898812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.593 [2024-04-25 03:28:50.899159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.593 [2024-04-25 03:28:50.899222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.593 qpair failed and we were unable to recover it.
00:28:16.593 [2024-04-25 03:28:50.899438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.593 [2024-04-25 03:28:50.899668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.593 [2024-04-25 03:28:50.899693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.593 qpair failed and we were unable to recover it.
00:28:16.593 [2024-04-25 03:28:50.899928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.593 [2024-04-25 03:28:50.900143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.593 [2024-04-25 03:28:50.900171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.593 qpair failed and we were unable to recover it.
00:28:16.593 [2024-04-25 03:28:50.900385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.593 [2024-04-25 03:28:50.900605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.593 [2024-04-25 03:28:50.900647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.593 qpair failed and we were unable to recover it.
00:28:16.593 [2024-04-25 03:28:50.900863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.593 [2024-04-25 03:28:50.901207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.593 [2024-04-25 03:28:50.901275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.593 qpair failed and we were unable to recover it.
00:28:16.593 [2024-04-25 03:28:50.901550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.593 [2024-04-25 03:28:50.901739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.593 [2024-04-25 03:28:50.901766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.593 qpair failed and we were unable to recover it.
00:28:16.593 [2024-04-25 03:28:50.902002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.593 [2024-04-25 03:28:50.902218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.593 [2024-04-25 03:28:50.902243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.593 qpair failed and we were unable to recover it.
00:28:16.593 [2024-04-25 03:28:50.902500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.593 [2024-04-25 03:28:50.902711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.593 [2024-04-25 03:28:50.902738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.593 qpair failed and we were unable to recover it.
00:28:16.593 [2024-04-25 03:28:50.902955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.593 [2024-04-25 03:28:50.903163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.593 [2024-04-25 03:28:50.903190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.593 qpair failed and we were unable to recover it.
00:28:16.593 [2024-04-25 03:28:50.903422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.593 [2024-04-25 03:28:50.903663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.593 [2024-04-25 03:28:50.903703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.593 qpair failed and we were unable to recover it.
00:28:16.593 [2024-04-25 03:28:50.904162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.593 [2024-04-25 03:28:50.904513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.593 [2024-04-25 03:28:50.904557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.593 qpair failed and we were unable to recover it.
00:28:16.593 [2024-04-25 03:28:50.904793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.593 [2024-04-25 03:28:50.904983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.593 [2024-04-25 03:28:50.905012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.593 qpair failed and we were unable to recover it.
00:28:16.593 [2024-04-25 03:28:50.905252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.593 [2024-04-25 03:28:50.905651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.593 [2024-04-25 03:28:50.905702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.593 qpair failed and we were unable to recover it.
00:28:16.593 [2024-04-25 03:28:50.905922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.593 [2024-04-25 03:28:50.906341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.593 [2024-04-25 03:28:50.906368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.593 qpair failed and we were unable to recover it.
00:28:16.593 [2024-04-25 03:28:50.906587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.593 [2024-04-25 03:28:50.906856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.593 [2024-04-25 03:28:50.906884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.593 qpair failed and we were unable to recover it.
00:28:16.593 [2024-04-25 03:28:50.907101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.593 [2024-04-25 03:28:50.907488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.593 [2024-04-25 03:28:50.907543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.593 qpair failed and we were unable to recover it.
00:28:16.593 [2024-04-25 03:28:50.907798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.593 [2024-04-25 03:28:50.908038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.593 [2024-04-25 03:28:50.908066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.593 qpair failed and we were unable to recover it.
00:28:16.593 [2024-04-25 03:28:50.908283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.593 [2024-04-25 03:28:50.908474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.593 [2024-04-25 03:28:50.908497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.593 qpair failed and we were unable to recover it.
00:28:16.593 [2024-04-25 03:28:50.908804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.593 [2024-04-25 03:28:50.909252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.593 [2024-04-25 03:28:50.909299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.593 qpair failed and we were unable to recover it.
00:28:16.593 [2024-04-25 03:28:50.909527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.593 [2024-04-25 03:28:50.909743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.593 [2024-04-25 03:28:50.909772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.593 qpair failed and we were unable to recover it.
00:28:16.593 [2024-04-25 03:28:50.909992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.593 [2024-04-25 03:28:50.910260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.593 [2024-04-25 03:28:50.910288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.593 qpair failed and we were unable to recover it.
00:28:16.593 [2024-04-25 03:28:50.910540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.593 [2024-04-25 03:28:50.910763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.593 [2024-04-25 03:28:50.910793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.593 qpair failed and we were unable to recover it.
00:28:16.593 [2024-04-25 03:28:50.911040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.593 [2024-04-25 03:28:50.911289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.593 [2024-04-25 03:28:50.911324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.593 qpair failed and we were unable to recover it.
00:28:16.593 [2024-04-25 03:28:50.911540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.593 [2024-04-25 03:28:50.911784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.593 [2024-04-25 03:28:50.911812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.593 qpair failed and we were unable to recover it.
00:28:16.593 [2024-04-25 03:28:50.912060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.593 [2024-04-25 03:28:50.912355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.593 [2024-04-25 03:28:50.912383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.593 qpair failed and we were unable to recover it.
00:28:16.594 [2024-04-25 03:28:50.912619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.594 [2024-04-25 03:28:50.912845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.594 [2024-04-25 03:28:50.912871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.594 qpair failed and we were unable to recover it.
00:28:16.594 [2024-04-25 03:28:50.913090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.594 [2024-04-25 03:28:50.913355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.594 [2024-04-25 03:28:50.913380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.594 qpair failed and we were unable to recover it.
00:28:16.594 [2024-04-25 03:28:50.913647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.594 [2024-04-25 03:28:50.913889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.594 [2024-04-25 03:28:50.913927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.594 qpair failed and we were unable to recover it.
00:28:16.594 [2024-04-25 03:28:50.914145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.594 [2024-04-25 03:28:50.914478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.594 [2024-04-25 03:28:50.914528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.594 qpair failed and we were unable to recover it.
00:28:16.594 [2024-04-25 03:28:50.914802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.594 [2024-04-25 03:28:50.915029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.594 [2024-04-25 03:28:50.915057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.594 qpair failed and we were unable to recover it.
00:28:16.594 [2024-04-25 03:28:50.915325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.594 [2024-04-25 03:28:50.915550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.594 [2024-04-25 03:28:50.915573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.594 qpair failed and we were unable to recover it.
00:28:16.594 [2024-04-25 03:28:50.915813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.594 [2024-04-25 03:28:50.916161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.594 [2024-04-25 03:28:50.916221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.594 qpair failed and we were unable to recover it.
00:28:16.594 [2024-04-25 03:28:50.916468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.594 [2024-04-25 03:28:50.916658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.594 [2024-04-25 03:28:50.916686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.594 qpair failed and we were unable to recover it.
00:28:16.594 [2024-04-25 03:28:50.916885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.594 [2024-04-25 03:28:50.917117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.594 [2024-04-25 03:28:50.917142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.594 qpair failed and we were unable to recover it.
00:28:16.594 [2024-04-25 03:28:50.917590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.594 [2024-04-25 03:28:50.917858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.594 [2024-04-25 03:28:50.917886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.594 qpair failed and we were unable to recover it.
00:28:16.594 [2024-04-25 03:28:50.918137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.594 [2024-04-25 03:28:50.918372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.594 [2024-04-25 03:28:50.918400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.594 qpair failed and we were unable to recover it.
00:28:16.594 [2024-04-25 03:28:50.918654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.594 [2024-04-25 03:28:50.918875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.594 [2024-04-25 03:28:50.918902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.594 qpair failed and we were unable to recover it.
00:28:16.594 [2024-04-25 03:28:50.919137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.594 [2024-04-25 03:28:50.919495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.594 [2024-04-25 03:28:50.919549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.594 qpair failed and we were unable to recover it.
00:28:16.594 [2024-04-25 03:28:50.919772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.594 [2024-04-25 03:28:50.919991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.594 [2024-04-25 03:28:50.920019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.594 qpair failed and we were unable to recover it.
00:28:16.594 [2024-04-25 03:28:50.920268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.594 [2024-04-25 03:28:50.920591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.594 [2024-04-25 03:28:50.920655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.594 qpair failed and we were unable to recover it.
00:28:16.594 [2024-04-25 03:28:50.920873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.594 [2024-04-25 03:28:50.921299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.594 [2024-04-25 03:28:50.921349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.594 qpair failed and we were unable to recover it.
00:28:16.594 [2024-04-25 03:28:50.921568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.594 [2024-04-25 03:28:50.921790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.594 [2024-04-25 03:28:50.921819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.594 qpair failed and we were unable to recover it.
00:28:16.594 [2024-04-25 03:28:50.922209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.594 [2024-04-25 03:28:50.922600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.594 [2024-04-25 03:28:50.922661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.594 qpair failed and we were unable to recover it.
00:28:16.594 [2024-04-25 03:28:50.922908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.594 [2024-04-25 03:28:50.923128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.594 [2024-04-25 03:28:50.923153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.594 qpair failed and we were unable to recover it.
00:28:16.594 [2024-04-25 03:28:50.923391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.594 [2024-04-25 03:28:50.923609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.594 [2024-04-25 03:28:50.923643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.594 qpair failed and we were unable to recover it.
00:28:16.594 [2024-04-25 03:28:50.923894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.594 [2024-04-25 03:28:50.924279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.594 [2024-04-25 03:28:50.924340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.594 qpair failed and we were unable to recover it.
00:28:16.594 [2024-04-25 03:28:50.924556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.594 [2024-04-25 03:28:50.924808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.594 [2024-04-25 03:28:50.924837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.594 qpair failed and we were unable to recover it.
00:28:16.594 [2024-04-25 03:28:50.925093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.594 [2024-04-25 03:28:50.925265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.594 [2024-04-25 03:28:50.925290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.594 qpair failed and we were unable to recover it.
00:28:16.594 [2024-04-25 03:28:50.925510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.594 [2024-04-25 03:28:50.925719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.594 [2024-04-25 03:28:50.925748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.594 qpair failed and we were unable to recover it.
00:28:16.594 [2024-04-25 03:28:50.925960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.594 [2024-04-25 03:28:50.926174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.594 [2024-04-25 03:28:50.926198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.594 qpair failed and we were unable to recover it.
00:28:16.594 [2024-04-25 03:28:50.926550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.594 [2024-04-25 03:28:50.926833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.594 [2024-04-25 03:28:50.926861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.594 qpair failed and we were unable to recover it.
00:28:16.594 [2024-04-25 03:28:50.927104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.594 [2024-04-25 03:28:50.927423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.594 [2024-04-25 03:28:50.927450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.594 qpair failed and we were unable to recover it.
00:28:16.594 [2024-04-25 03:28:50.927702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.594 [2024-04-25 03:28:50.927943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.594 [2024-04-25 03:28:50.927971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.594 qpair failed and we were unable to recover it.
00:28:16.594 [2024-04-25 03:28:50.928214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.594 [2024-04-25 03:28:50.928499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.594 [2024-04-25 03:28:50.928522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.594 qpair failed and we were unable to recover it.
00:28:16.595 [2024-04-25 03:28:50.928761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.595 [2024-04-25 03:28:50.928971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.595 [2024-04-25 03:28:50.928997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.595 qpair failed and we were unable to recover it.
00:28:16.595 [2024-04-25 03:28:50.929265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.595 [2024-04-25 03:28:50.929662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.595 [2024-04-25 03:28:50.929714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.595 qpair failed and we were unable to recover it.
00:28:16.595 [2024-04-25 03:28:50.929973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.595 [2024-04-25 03:28:50.930280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.595 [2024-04-25 03:28:50.930340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.595 qpair failed and we were unable to recover it.
00:28:16.595 [2024-04-25 03:28:50.930552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.595 [2024-04-25 03:28:50.930809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.595 [2024-04-25 03:28:50.930837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.595 qpair failed and we were unable to recover it.
00:28:16.595 [2024-04-25 03:28:50.931206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.595 [2024-04-25 03:28:50.931697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.595 [2024-04-25 03:28:50.931725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.595 qpair failed and we were unable to recover it.
00:28:16.595 [2024-04-25 03:28:50.931944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.595 [2024-04-25 03:28:50.932135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.595 [2024-04-25 03:28:50.932162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.595 qpair failed and we were unable to recover it.
00:28:16.595 [2024-04-25 03:28:50.932370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.595 [2024-04-25 03:28:50.932611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.595 [2024-04-25 03:28:50.932651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.595 qpair failed and we were unable to recover it.
00:28:16.595 [2024-04-25 03:28:50.932854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.595 [2024-04-25 03:28:50.933076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.595 [2024-04-25 03:28:50.933100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.595 qpair failed and we were unable to recover it.
00:28:16.595 [2024-04-25 03:28:50.933311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.595 [2024-04-25 03:28:50.933701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.595 [2024-04-25 03:28:50.933729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.595 qpair failed and we were unable to recover it.
00:28:16.595 [2024-04-25 03:28:50.933970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.595 [2024-04-25 03:28:50.934170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.595 [2024-04-25 03:28:50.934197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.595 qpair failed and we were unable to recover it.
00:28:16.595 [2024-04-25 03:28:50.934416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.595 [2024-04-25 03:28:50.934635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.595 [2024-04-25 03:28:50.934660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.595 qpair failed and we were unable to recover it.
00:28:16.595 [2024-04-25 03:28:50.934865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.595 [2024-04-25 03:28:50.935420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.595 [2024-04-25 03:28:50.935471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.595 qpair failed and we were unable to recover it.
00:28:16.595 [2024-04-25 03:28:50.935711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.595 [2024-04-25 03:28:50.935933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.595 [2024-04-25 03:28:50.935961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.595 qpair failed and we were unable to recover it.
00:28:16.595 [2024-04-25 03:28:50.936206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.595 [2024-04-25 03:28:50.936423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.595 [2024-04-25 03:28:50.936448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.595 qpair failed and we were unable to recover it.
00:28:16.595 [2024-04-25 03:28:50.936669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.595 [2024-04-25 03:28:50.936860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.595 [2024-04-25 03:28:50.936887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.595 qpair failed and we were unable to recover it.
00:28:16.595 [2024-04-25 03:28:50.937173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.595 [2024-04-25 03:28:50.937411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.595 [2024-04-25 03:28:50.937463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.595 qpair failed and we were unable to recover it.
00:28:16.595 [2024-04-25 03:28:50.937707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.595 [2024-04-25 03:28:50.937923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.595 [2024-04-25 03:28:50.937961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.595 qpair failed and we were unable to recover it.
00:28:16.595 [2024-04-25 03:28:50.938182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.595 [2024-04-25 03:28:50.938403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.595 [2024-04-25 03:28:50.938432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.595 qpair failed and we were unable to recover it.
00:28:16.595 [2024-04-25 03:28:50.938642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.595 [2024-04-25 03:28:50.938829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.595 [2024-04-25 03:28:50.938857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.595 qpair failed and we were unable to recover it.
00:28:16.595 [2024-04-25 03:28:50.939069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.595 [2024-04-25 03:28:50.939314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.595 [2024-04-25 03:28:50.939342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.595 qpair failed and we were unable to recover it.
00:28:16.595 [2024-04-25 03:28:50.939727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.595 [2024-04-25 03:28:50.939978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.595 [2024-04-25 03:28:50.940006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.595 qpair failed and we were unable to recover it.
00:28:16.595 [2024-04-25 03:28:50.940246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.595 [2024-04-25 03:28:50.940567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.595 [2024-04-25 03:28:50.940604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.595 qpair failed and we were unable to recover it.
00:28:16.595 [2024-04-25 03:28:50.940830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.595 [2024-04-25 03:28:50.941002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.595 [2024-04-25 03:28:50.941034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.595 qpair failed and we were unable to recover it.
00:28:16.595 [2024-04-25 03:28:50.941359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.595 [2024-04-25 03:28:50.941652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.595 [2024-04-25 03:28:50.941680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.595 qpair failed and we were unable to recover it.
00:28:16.595 [2024-04-25 03:28:50.941901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.595 [2024-04-25 03:28:50.942244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.595 [2024-04-25 03:28:50.942290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.595 qpair failed and we were unable to recover it.
00:28:16.595 [2024-04-25 03:28:50.942547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.595 [2024-04-25 03:28:50.942788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.595 [2024-04-25 03:28:50.942816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.596 qpair failed and we were unable to recover it.
00:28:16.596 [2024-04-25 03:28:50.943046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.596 [2024-04-25 03:28:50.943262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.596 [2024-04-25 03:28:50.943301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.596 qpair failed and we were unable to recover it.
00:28:16.596 [2024-04-25 03:28:50.943569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.596 [2024-04-25 03:28:50.943829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.596 [2024-04-25 03:28:50.943854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.596 qpair failed and we were unable to recover it.
00:28:16.596 [2024-04-25 03:28:50.944055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.596 [2024-04-25 03:28:50.944451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.596 [2024-04-25 03:28:50.944500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.596 qpair failed and we were unable to recover it.
00:28:16.596 [2024-04-25 03:28:50.944726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.596 [2024-04-25 03:28:50.945043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.596 [2024-04-25 03:28:50.945103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.596 qpair failed and we were unable to recover it.
00:28:16.596 [2024-04-25 03:28:50.945321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.596 [2024-04-25 03:28:50.945536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.596 [2024-04-25 03:28:50.945564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.596 qpair failed and we were unable to recover it.
00:28:16.596 [2024-04-25 03:28:50.945791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.596 [2024-04-25 03:28:50.945959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.596 [2024-04-25 03:28:50.945984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.596 qpair failed and we were unable to recover it.
00:28:16.596 [2024-04-25 03:28:50.946279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.596 [2024-04-25 03:28:50.946562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.596 [2024-04-25 03:28:50.946590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.596 qpair failed and we were unable to recover it.
00:28:16.596 [2024-04-25 03:28:50.946851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.596 [2024-04-25 03:28:50.947250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.596 [2024-04-25 03:28:50.947300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.596 qpair failed and we were unable to recover it.
00:28:16.596 [2024-04-25 03:28:50.947539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.596 [2024-04-25 03:28:50.947777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.596 [2024-04-25 03:28:50.947802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.596 qpair failed and we were unable to recover it.
00:28:16.596 [2024-04-25 03:28:50.948054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.596 [2024-04-25 03:28:50.948493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.596 [2024-04-25 03:28:50.948551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.596 qpair failed and we were unable to recover it.
00:28:16.596 [2024-04-25 03:28:50.948774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.596 [2024-04-25 03:28:50.949018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.596 [2024-04-25 03:28:50.949046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.596 qpair failed and we were unable to recover it.
00:28:16.596 [2024-04-25 03:28:50.949262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.596 [2024-04-25 03:28:50.949621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.596 [2024-04-25 03:28:50.949698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.596 qpair failed and we were unable to recover it.
00:28:16.596 [2024-04-25 03:28:50.949956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.596 [2024-04-25 03:28:50.950263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.596 [2024-04-25 03:28:50.950290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.596 qpair failed and we were unable to recover it.
00:28:16.596 [2024-04-25 03:28:50.950602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.596 [2024-04-25 03:28:50.950867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.596 [2024-04-25 03:28:50.950895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.596 qpair failed and we were unable to recover it.
00:28:16.596 [2024-04-25 03:28:50.951321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.596 [2024-04-25 03:28:50.951726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.596 [2024-04-25 03:28:50.951754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.596 qpair failed and we were unable to recover it.
00:28:16.596 [2024-04-25 03:28:50.951991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.596 [2024-04-25 03:28:50.952202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.596 [2024-04-25 03:28:50.952226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.596 qpair failed and we were unable to recover it.
00:28:16.596 [2024-04-25 03:28:50.952454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.596 [2024-04-25 03:28:50.952690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.596 [2024-04-25 03:28:50.952716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.596 qpair failed and we were unable to recover it.
00:28:16.596 [2024-04-25 03:28:50.952940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.596 [2024-04-25 03:28:50.953135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.596 [2024-04-25 03:28:50.953160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.596 qpair failed and we were unable to recover it.
00:28:16.596 [2024-04-25 03:28:50.953373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.596 [2024-04-25 03:28:50.953574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.596 [2024-04-25 03:28:50.953600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.596 qpair failed and we were unable to recover it.
00:28:16.596 [2024-04-25 03:28:50.953789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.596 [2024-04-25 03:28:50.954035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.596 [2024-04-25 03:28:50.954062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.596 qpair failed and we were unable to recover it.
00:28:16.596 [2024-04-25 03:28:50.954310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.596 [2024-04-25 03:28:50.954538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.596 [2024-04-25 03:28:50.954563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.596 qpair failed and we were unable to recover it.
00:28:16.596 [2024-04-25 03:28:50.954747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.596 [2024-04-25 03:28:50.954942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.596 [2024-04-25 03:28:50.954983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.596 qpair failed and we were unable to recover it.
00:28:16.596 [2024-04-25 03:28:50.955355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.596 [2024-04-25 03:28:50.955691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.596 [2024-04-25 03:28:50.955716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.596 qpair failed and we were unable to recover it.
00:28:16.596 [2024-04-25 03:28:50.955921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.596 [2024-04-25 03:28:50.956117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.596 [2024-04-25 03:28:50.956141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.596 qpair failed and we were unable to recover it.
00:28:16.596 [2024-04-25 03:28:50.956428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.596 [2024-04-25 03:28:50.956638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.596 [2024-04-25 03:28:50.956665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.596 qpair failed and we were unable to recover it.
00:28:16.596 [2024-04-25 03:28:50.956926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.596 [2024-04-25 03:28:50.957143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.596 [2024-04-25 03:28:50.957167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.596 qpair failed and we were unable to recover it.
00:28:16.596 [2024-04-25 03:28:50.957530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.596 [2024-04-25 03:28:50.957737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.596 [2024-04-25 03:28:50.957761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.596 qpair failed and we were unable to recover it.
00:28:16.596 [2024-04-25 03:28:50.958015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.596 [2024-04-25 03:28:50.958341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.596 [2024-04-25 03:28:50.958379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.597 qpair failed and we were unable to recover it.
00:28:16.597 [2024-04-25 03:28:50.958568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.597 [2024-04-25 03:28:50.958845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.597 [2024-04-25 03:28:50.958871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.597 qpair failed and we were unable to recover it.
00:28:16.597 [2024-04-25 03:28:50.959182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.597 [2024-04-25 03:28:50.959371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.597 [2024-04-25 03:28:50.959407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.597 qpair failed and we were unable to recover it.
00:28:16.597 [2024-04-25 03:28:50.959641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.597 [2024-04-25 03:28:50.959868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.597 [2024-04-25 03:28:50.959893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.597 qpair failed and we were unable to recover it.
00:28:16.597 [2024-04-25 03:28:50.960077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.597 [2024-04-25 03:28:50.960257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.597 [2024-04-25 03:28:50.960281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.597 qpair failed and we were unable to recover it.
00:28:16.597 [2024-04-25 03:28:50.960494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.597 [2024-04-25 03:28:50.960723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.597 [2024-04-25 03:28:50.960749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.597 qpair failed and we were unable to recover it.
00:28:16.597 [2024-04-25 03:28:50.960947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.597 [2024-04-25 03:28:50.961140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.597 [2024-04-25 03:28:50.961165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.597 qpair failed and we were unable to recover it.
00:28:16.597 [2024-04-25 03:28:50.961364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.597 [2024-04-25 03:28:50.961586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.597 [2024-04-25 03:28:50.961613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.597 qpair failed and we were unable to recover it.
00:28:16.597 [2024-04-25 03:28:50.961846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.597 [2024-04-25 03:28:50.962063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.597 [2024-04-25 03:28:50.962088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.597 qpair failed and we were unable to recover it.
00:28:16.597 [2024-04-25 03:28:50.962269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.597 [2024-04-25 03:28:50.962466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.597 [2024-04-25 03:28:50.962491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.597 qpair failed and we were unable to recover it.
00:28:16.597 [2024-04-25 03:28:50.962730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.597 [2024-04-25 03:28:50.962928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.597 [2024-04-25 03:28:50.962953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.597 qpair failed and we were unable to recover it.
00:28:16.597 [2024-04-25 03:28:50.963175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.597 [2024-04-25 03:28:50.963404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.597 [2024-04-25 03:28:50.963429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.597 qpair failed and we were unable to recover it.
00:28:16.597 [2024-04-25 03:28:50.963625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.597 [2024-04-25 03:28:50.963862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.597 [2024-04-25 03:28:50.963891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.597 qpair failed and we were unable to recover it.
00:28:16.597 [2024-04-25 03:28:50.964082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.597 [2024-04-25 03:28:50.964326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.597 [2024-04-25 03:28:50.964351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.597 qpair failed and we were unable to recover it.
00:28:16.597 [2024-04-25 03:28:50.964552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.597 [2024-04-25 03:28:50.964750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.597 [2024-04-25 03:28:50.964784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.597 qpair failed and we were unable to recover it.
00:28:16.597 [2024-04-25 03:28:50.965079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.597 [2024-04-25 03:28:50.965358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.597 [2024-04-25 03:28:50.965384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.597 qpair failed and we were unable to recover it.
00:28:16.597 [2024-04-25 03:28:50.965560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.597 [2024-04-25 03:28:50.965753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.597 [2024-04-25 03:28:50.965779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.597 qpair failed and we were unable to recover it.
00:28:16.597 [2024-04-25 03:28:50.966005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.597 [2024-04-25 03:28:50.966291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.597 [2024-04-25 03:28:50.966316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.597 qpair failed and we were unable to recover it.
00:28:16.597 [2024-04-25 03:28:50.966522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.597 [2024-04-25 03:28:50.966758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.597 [2024-04-25 03:28:50.966783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.597 qpair failed and we were unable to recover it.
00:28:16.597 [2024-04-25 03:28:50.966982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.597 [2024-04-25 03:28:50.967268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.597 [2024-04-25 03:28:50.967291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.597 qpair failed and we were unable to recover it.
00:28:16.597 [2024-04-25 03:28:50.967519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.597 [2024-04-25 03:28:50.967692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.597 [2024-04-25 03:28:50.967721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.597 qpair failed and we were unable to recover it.
00:28:16.597 [2024-04-25 03:28:50.967899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.597 [2024-04-25 03:28:50.968236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.597 [2024-04-25 03:28:50.968291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.597 qpair failed and we were unable to recover it.
00:28:16.597 [2024-04-25 03:28:50.968521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.597 [2024-04-25 03:28:50.968728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.597 [2024-04-25 03:28:50.968753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.597 qpair failed and we were unable to recover it.
00:28:16.597 [2024-04-25 03:28:50.968961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.597 [2024-04-25 03:28:50.969219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.597 [2024-04-25 03:28:50.969244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.597 qpair failed and we were unable to recover it.
00:28:16.597 [2024-04-25 03:28:50.969445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.597 [2024-04-25 03:28:50.969617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.597 [2024-04-25 03:28:50.969656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.597 qpair failed and we were unable to recover it.
00:28:16.597 [2024-04-25 03:28:50.969875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.597 [2024-04-25 03:28:50.970099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.597 [2024-04-25 03:28:50.970127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.597 qpair failed and we were unable to recover it.
00:28:16.597 [2024-04-25 03:28:50.970334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.597 [2024-04-25 03:28:50.970509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.597 [2024-04-25 03:28:50.970536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.597 qpair failed and we were unable to recover it.
00:28:16.597 [2024-04-25 03:28:50.970742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.597 [2024-04-25 03:28:50.970945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.597 [2024-04-25 03:28:50.970970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.597 qpair failed and we were unable to recover it.
00:28:16.597 [2024-04-25 03:28:50.971184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.597 [2024-04-25 03:28:50.971493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.597 [2024-04-25 03:28:50.971519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.597 qpair failed and we were unable to recover it.
00:28:16.597 [2024-04-25 03:28:50.971728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.597 [2024-04-25 03:28:50.971953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.597 [2024-04-25 03:28:50.971987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.597 qpair failed and we were unable to recover it.
00:28:16.598 [2024-04-25 03:28:50.972221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.598 [2024-04-25 03:28:50.972398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.598 [2024-04-25 03:28:50.972423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.598 qpair failed and we were unable to recover it.
00:28:16.598 [2024-04-25 03:28:50.972654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.598 [2024-04-25 03:28:50.972860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.598 [2024-04-25 03:28:50.972900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.598 qpair failed and we were unable to recover it.
00:28:16.598 [2024-04-25 03:28:50.973098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.598 [2024-04-25 03:28:50.973331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.598 [2024-04-25 03:28:50.973356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.598 qpair failed and we were unable to recover it.
00:28:16.598 [2024-04-25 03:28:50.973624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.598 [2024-04-25 03:28:50.973847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.598 [2024-04-25 03:28:50.973872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.598 qpair failed and we were unable to recover it.
00:28:16.598 [2024-04-25 03:28:50.974096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.598 [2024-04-25 03:28:50.974304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.598 [2024-04-25 03:28:50.974330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.598 qpair failed and we were unable to recover it.
00:28:16.598 [2024-04-25 03:28:50.974524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.598 [2024-04-25 03:28:50.974691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.598 [2024-04-25 03:28:50.974717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.598 qpair failed and we were unable to recover it.
00:28:16.598 [2024-04-25 03:28:50.974914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.598 [2024-04-25 03:28:50.975140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.598 [2024-04-25 03:28:50.975165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.598 qpair failed and we were unable to recover it.
00:28:16.598 [2024-04-25 03:28:50.975360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.598 [2024-04-25 03:28:50.975545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.598 [2024-04-25 03:28:50.975572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.598 qpair failed and we were unable to recover it.
00:28:16.598 [2024-04-25 03:28:50.975796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.598 [2024-04-25 03:28:50.976063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.598 [2024-04-25 03:28:50.976088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.598 qpair failed and we were unable to recover it.
00:28:16.598 [2024-04-25 03:28:50.976279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.598 [2024-04-25 03:28:50.976523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.598 [2024-04-25 03:28:50.976582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.598 qpair failed and we were unable to recover it.
00:28:16.598 [2024-04-25 03:28:50.976816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.598 [2024-04-25 03:28:50.976993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.598 [2024-04-25 03:28:50.977029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.598 qpair failed and we were unable to recover it.
00:28:16.598 [2024-04-25 03:28:50.977284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.598 [2024-04-25 03:28:50.977510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.598 [2024-04-25 03:28:50.977535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.598 qpair failed and we were unable to recover it.
00:28:16.598 [2024-04-25 03:28:50.977780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.598 [2024-04-25 03:28:50.978035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.598 [2024-04-25 03:28:50.978096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.598 qpair failed and we were unable to recover it.
00:28:16.598 [2024-04-25 03:28:50.978295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.598 [2024-04-25 03:28:50.978520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.598 [2024-04-25 03:28:50.978548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.598 qpair failed and we were unable to recover it.
00:28:16.598 [2024-04-25 03:28:50.978754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.598 [2024-04-25 03:28:50.978956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.598 [2024-04-25 03:28:50.978992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.598 qpair failed and we were unable to recover it.
00:28:16.598 [2024-04-25 03:28:50.979186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.598 [2024-04-25 03:28:50.979488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.598 [2024-04-25 03:28:50.979513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.598 qpair failed and we were unable to recover it.
00:28:16.598 [2024-04-25 03:28:50.979732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.598 [2024-04-25 03:28:50.979903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.598 [2024-04-25 03:28:50.979931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.598 qpair failed and we were unable to recover it.
00:28:16.598 [2024-04-25 03:28:50.980155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.598 [2024-04-25 03:28:50.980317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.598 [2024-04-25 03:28:50.980342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.598 qpair failed and we were unable to recover it.
00:28:16.598 [2024-04-25 03:28:50.980537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.598 [2024-04-25 03:28:50.980787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.598 [2024-04-25 03:28:50.980812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.598 qpair failed and we were unable to recover it.
00:28:16.598 [2024-04-25 03:28:50.980978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.598 [2024-04-25 03:28:50.981178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.598 [2024-04-25 03:28:50.981203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.598 qpair failed and we were unable to recover it.
00:28:16.598 [2024-04-25 03:28:50.981393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.598 [2024-04-25 03:28:50.981562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.598 [2024-04-25 03:28:50.981587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.598 qpair failed and we were unable to recover it.
00:28:16.598 [2024-04-25 03:28:50.981793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.598 [2024-04-25 03:28:50.981971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.598 [2024-04-25 03:28:50.981998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.598 qpair failed and we were unable to recover it.
00:28:16.598 [2024-04-25 03:28:50.982192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.598 [2024-04-25 03:28:50.982414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.598 [2024-04-25 03:28:50.982439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.598 qpair failed and we were unable to recover it.
00:28:16.598 [2024-04-25 03:28:50.982635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.598 [2024-04-25 03:28:50.982805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.598 [2024-04-25 03:28:50.982829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.598 qpair failed and we were unable to recover it.
00:28:16.598 [2024-04-25 03:28:50.983026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.598 [2024-04-25 03:28:50.983275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.598 [2024-04-25 03:28:50.983301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.598 qpair failed and we were unable to recover it.
00:28:16.598 [2024-04-25 03:28:50.983601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.598 [2024-04-25 03:28:50.983815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.598 [2024-04-25 03:28:50.983841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.598 qpair failed and we were unable to recover it.
00:28:16.598 [2024-04-25 03:28:50.984036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.598 [2024-04-25 03:28:50.984351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.598 [2024-04-25 03:28:50.984375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.598 qpair failed and we were unable to recover it.
00:28:16.598 [2024-04-25 03:28:50.984609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.598 [2024-04-25 03:28:50.984861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.598 [2024-04-25 03:28:50.984888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.598 qpair failed and we were unable to recover it.
00:28:16.598 [2024-04-25 03:28:50.985082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.598 [2024-04-25 03:28:50.985304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.598 [2024-04-25 03:28:50.985328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.598 qpair failed and we were unable to recover it.
00:28:16.598 [2024-04-25 03:28:50.985562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.599 [2024-04-25 03:28:50.985781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.599 [2024-04-25 03:28:50.985806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.599 qpair failed and we were unable to recover it.
00:28:16.599 [2024-04-25 03:28:50.986041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.599 [2024-04-25 03:28:50.986268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.599 [2024-04-25 03:28:50.986293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.599 qpair failed and we were unable to recover it.
00:28:16.599 [2024-04-25 03:28:50.986489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.599 [2024-04-25 03:28:50.986669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.599 [2024-04-25 03:28:50.986694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.599 qpair failed and we were unable to recover it.
00:28:16.599 [2024-04-25 03:28:50.986871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.599 [2024-04-25 03:28:50.987052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.599 [2024-04-25 03:28:50.987077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.599 qpair failed and we were unable to recover it.
00:28:16.599 [2024-04-25 03:28:50.987285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.599 [2024-04-25 03:28:50.987483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.599 [2024-04-25 03:28:50.987517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.599 qpair failed and we were unable to recover it.
00:28:16.599 [2024-04-25 03:28:50.987733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.599 [2024-04-25 03:28:50.987924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.599 [2024-04-25 03:28:50.987953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.599 qpair failed and we were unable to recover it.
00:28:16.599 [2024-04-25 03:28:50.988181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.599 [2024-04-25 03:28:50.988400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.599 [2024-04-25 03:28:50.988442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.599 qpair failed and we were unable to recover it.
00:28:16.599 [2024-04-25 03:28:50.988638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.599 [2024-04-25 03:28:50.988889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.599 [2024-04-25 03:28:50.988916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.599 qpair failed and we were unable to recover it.
00:28:16.599 [2024-04-25 03:28:50.989231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.599 [2024-04-25 03:28:50.989444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.599 [2024-04-25 03:28:50.989469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.599 qpair failed and we were unable to recover it.
00:28:16.599 [2024-04-25 03:28:50.989667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.599 [2024-04-25 03:28:50.989859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.599 [2024-04-25 03:28:50.989887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.599 qpair failed and we were unable to recover it.
00:28:16.599 [2024-04-25 03:28:50.990105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.599 [2024-04-25 03:28:50.990273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.599 [2024-04-25 03:28:50.990298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.599 qpair failed and we were unable to recover it.
00:28:16.599 [2024-04-25 03:28:50.990532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.599 [2024-04-25 03:28:50.990771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.599 [2024-04-25 03:28:50.990799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.599 qpair failed and we were unable to recover it.
00:28:16.599 [2024-04-25 03:28:50.990999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.599 [2024-04-25 03:28:50.991192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.599 [2024-04-25 03:28:50.991221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.599 qpair failed and we were unable to recover it.
00:28:16.599 [2024-04-25 03:28:50.991442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.599 [2024-04-25 03:28:50.991682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.599 [2024-04-25 03:28:50.991707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.599 qpair failed and we were unable to recover it.
00:28:16.599 [2024-04-25 03:28:50.991910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.599 [2024-04-25 03:28:50.992105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.599 [2024-04-25 03:28:50.992130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.599 qpair failed and we were unable to recover it.
00:28:16.599 [2024-04-25 03:28:50.992330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.599 [2024-04-25 03:28:50.992526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.599 [2024-04-25 03:28:50.992551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.599 qpair failed and we were unable to recover it.
00:28:16.599 [2024-04-25 03:28:50.992779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.599 [2024-04-25 03:28:50.993003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.599 [2024-04-25 03:28:50.993028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.599 qpair failed and we were unable to recover it.
00:28:16.599 [2024-04-25 03:28:50.993202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.599 [2024-04-25 03:28:50.993425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.599 [2024-04-25 03:28:50.993450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.599 qpair failed and we were unable to recover it.
00:28:16.599 [2024-04-25 03:28:50.993653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.599 [2024-04-25 03:28:50.993894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.599 [2024-04-25 03:28:50.993921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.599 qpair failed and we were unable to recover it.
00:28:16.599 [2024-04-25 03:28:50.994138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.599 [2024-04-25 03:28:50.994433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.599 [2024-04-25 03:28:50.994458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.599 qpair failed and we were unable to recover it.
00:28:16.599 [2024-04-25 03:28:50.994690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.599 [2024-04-25 03:28:50.994933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.599 [2024-04-25 03:28:50.994958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.599 qpair failed and we were unable to recover it.
00:28:16.599 [2024-04-25 03:28:50.995161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.599 [2024-04-25 03:28:50.995358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.599 [2024-04-25 03:28:50.995383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.599 qpair failed and we were unable to recover it.
00:28:16.599 [2024-04-25 03:28:50.995555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.599 [2024-04-25 03:28:50.995748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.599 [2024-04-25 03:28:50.995773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.599 qpair failed and we were unable to recover it.
00:28:16.599 [2024-04-25 03:28:50.995986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.599 [2024-04-25 03:28:50.996156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.599 [2024-04-25 03:28:50.996180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.599 qpair failed and we were unable to recover it.
00:28:16.599 [2024-04-25 03:28:50.996371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.599 [2024-04-25 03:28:50.996565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.599 [2024-04-25 03:28:50.996590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.599 qpair failed and we were unable to recover it.
00:28:16.599 [2024-04-25 03:28:50.996812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.599 [2024-04-25 03:28:50.997016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.599 [2024-04-25 03:28:50.997041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.599 qpair failed and we were unable to recover it.
00:28:16.599 [2024-04-25 03:28:50.997283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.599 [2024-04-25 03:28:50.997514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.599 [2024-04-25 03:28:50.997538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.599 qpair failed and we were unable to recover it.
00:28:16.599 [2024-04-25 03:28:50.997749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.599 [2024-04-25 03:28:50.997966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.599 [2024-04-25 03:28:50.997991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.599 qpair failed and we were unable to recover it.
00:28:16.599 [2024-04-25 03:28:50.998167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.599 [2024-04-25 03:28:50.998339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.599 [2024-04-25 03:28:50.998365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.599 qpair failed and we were unable to recover it.
00:28:16.599 [2024-04-25 03:28:50.998561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.600 [2024-04-25 03:28:50.998731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.600 [2024-04-25 03:28:50.998756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.600 qpair failed and we were unable to recover it.
00:28:16.600 [2024-04-25 03:28:50.998974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.600 [2024-04-25 03:28:50.999186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.600 [2024-04-25 03:28:50.999229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.600 qpair failed and we were unable to recover it.
00:28:16.600 [2024-04-25 03:28:50.999408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.600 [2024-04-25 03:28:50.999611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.600 [2024-04-25 03:28:50.999649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.600 qpair failed and we were unable to recover it.
00:28:16.600 [2024-04-25 03:28:50.999820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.600 [2024-04-25 03:28:51.000005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.600 [2024-04-25 03:28:51.000030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.600 qpair failed and we were unable to recover it.
00:28:16.600 [2024-04-25 03:28:51.000266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.600 [2024-04-25 03:28:51.000470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.600 [2024-04-25 03:28:51.000495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.600 qpair failed and we were unable to recover it.
00:28:16.600 [2024-04-25 03:28:51.000737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.600 [2024-04-25 03:28:51.000925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.600 [2024-04-25 03:28:51.000950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.600 qpair failed and we were unable to recover it.
00:28:16.600 [2024-04-25 03:28:51.001150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.600 [2024-04-25 03:28:51.001368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.600 [2024-04-25 03:28:51.001393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.600 qpair failed and we were unable to recover it.
00:28:16.600 [2024-04-25 03:28:51.001673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.600 [2024-04-25 03:28:51.001873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.600 [2024-04-25 03:28:51.001898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.600 qpair failed and we were unable to recover it.
00:28:16.600 [2024-04-25 03:28:51.002104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.600 [2024-04-25 03:28:51.002323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.600 [2024-04-25 03:28:51.002349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.600 qpair failed and we were unable to recover it.
00:28:16.600 [2024-04-25 03:28:51.002569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.600 [2024-04-25 03:28:51.002760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.600 [2024-04-25 03:28:51.002785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:16.600 qpair failed and we were unable to recover it.
00:28:16.600 [2024-04-25 03:28:51.003018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.600 [2024-04-25 03:28:51.003293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.600 [2024-04-25 03:28:51.003334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.600 qpair failed and we were unable to recover it. 00:28:16.600 [2024-04-25 03:28:51.003531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.600 [2024-04-25 03:28:51.003721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.600 [2024-04-25 03:28:51.003763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.600 qpair failed and we were unable to recover it. 00:28:16.600 [2024-04-25 03:28:51.004020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.600 [2024-04-25 03:28:51.004247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.600 [2024-04-25 03:28:51.004279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.600 qpair failed and we were unable to recover it. 00:28:16.600 [2024-04-25 03:28:51.004528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.600 [2024-04-25 03:28:51.004733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.600 [2024-04-25 03:28:51.004760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.600 qpair failed and we were unable to recover it. 
00:28:16.600 [2024-04-25 03:28:51.004989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.600 [2024-04-25 03:28:51.005188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.600 [2024-04-25 03:28:51.005213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.600 qpair failed and we were unable to recover it. 00:28:16.600 [2024-04-25 03:28:51.005434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.600 [2024-04-25 03:28:51.005676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.600 [2024-04-25 03:28:51.005702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.600 qpair failed and we were unable to recover it. 00:28:16.600 [2024-04-25 03:28:51.005921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.600 [2024-04-25 03:28:51.006094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.600 [2024-04-25 03:28:51.006119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.600 qpair failed and we were unable to recover it. 00:28:16.600 [2024-04-25 03:28:51.006343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.600 [2024-04-25 03:28:51.006565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.600 [2024-04-25 03:28:51.006593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.600 qpair failed and we were unable to recover it. 
00:28:16.600 [2024-04-25 03:28:51.006888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.600 [2024-04-25 03:28:51.007269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.600 [2024-04-25 03:28:51.007324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.600 qpair failed and we were unable to recover it. 00:28:16.600 [2024-04-25 03:28:51.007580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.600 [2024-04-25 03:28:51.007757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.600 [2024-04-25 03:28:51.007782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.600 qpair failed and we were unable to recover it. 00:28:16.600 [2024-04-25 03:28:51.007959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.600 [2024-04-25 03:28:51.008219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.600 [2024-04-25 03:28:51.008243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.600 qpair failed and we were unable to recover it. 00:28:16.600 [2024-04-25 03:28:51.008447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.600 [2024-04-25 03:28:51.008675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.600 [2024-04-25 03:28:51.008703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.600 qpair failed and we were unable to recover it. 
00:28:16.600 [2024-04-25 03:28:51.008917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.600 [2024-04-25 03:28:51.009138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.600 [2024-04-25 03:28:51.009163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.600 qpair failed and we were unable to recover it. 00:28:16.600 [2024-04-25 03:28:51.009339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.600 [2024-04-25 03:28:51.009544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.600 [2024-04-25 03:28:51.009584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.600 qpair failed and we were unable to recover it. 00:28:16.600 [2024-04-25 03:28:51.009807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.600 [2024-04-25 03:28:51.010018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.600 [2024-04-25 03:28:51.010047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.600 qpair failed and we were unable to recover it. 00:28:16.600 [2024-04-25 03:28:51.010272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.601 [2024-04-25 03:28:51.010442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.601 [2024-04-25 03:28:51.010466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.601 qpair failed and we were unable to recover it. 
00:28:16.601 [2024-04-25 03:28:51.010669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.601 [2024-04-25 03:28:51.010842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.601 [2024-04-25 03:28:51.010867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.601 qpair failed and we were unable to recover it. 00:28:16.601 [2024-04-25 03:28:51.011091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.601 [2024-04-25 03:28:51.011290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.601 [2024-04-25 03:28:51.011315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.601 qpair failed and we were unable to recover it. 00:28:16.601 [2024-04-25 03:28:51.011505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.601 [2024-04-25 03:28:51.011746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.601 [2024-04-25 03:28:51.011774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.601 qpair failed and we were unable to recover it. 00:28:16.601 [2024-04-25 03:28:51.011977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.601 [2024-04-25 03:28:51.012179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.601 [2024-04-25 03:28:51.012204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.601 qpair failed and we were unable to recover it. 
00:28:16.601 [2024-04-25 03:28:51.012396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.601 [2024-04-25 03:28:51.012587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.601 [2024-04-25 03:28:51.012611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.601 qpair failed and we were unable to recover it. 00:28:16.601 [2024-04-25 03:28:51.012870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.601 [2024-04-25 03:28:51.013264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.601 [2024-04-25 03:28:51.013323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.601 qpair failed and we were unable to recover it. 00:28:16.601 [2024-04-25 03:28:51.013531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.601 [2024-04-25 03:28:51.013753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.601 [2024-04-25 03:28:51.013779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.601 qpair failed and we were unable to recover it. 00:28:16.601 [2024-04-25 03:28:51.014019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.601 [2024-04-25 03:28:51.014212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.601 [2024-04-25 03:28:51.014241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.601 qpair failed and we were unable to recover it. 
00:28:16.601 [2024-04-25 03:28:51.014539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.601 [2024-04-25 03:28:51.014751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.601 [2024-04-25 03:28:51.014780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.601 qpair failed and we were unable to recover it. 00:28:16.601 [2024-04-25 03:28:51.015086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.601 [2024-04-25 03:28:51.015310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.601 [2024-04-25 03:28:51.015335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.601 qpair failed and we were unable to recover it. 00:28:16.601 [2024-04-25 03:28:51.015556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.601 [2024-04-25 03:28:51.015788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.601 [2024-04-25 03:28:51.015816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.601 qpair failed and we were unable to recover it. 00:28:16.601 [2024-04-25 03:28:51.016064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.601 [2024-04-25 03:28:51.016257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.601 [2024-04-25 03:28:51.016284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.601 qpair failed and we were unable to recover it. 
00:28:16.601 [2024-04-25 03:28:51.016480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.601 [2024-04-25 03:28:51.016708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.601 [2024-04-25 03:28:51.016734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.601 qpair failed and we were unable to recover it. 00:28:16.601 [2024-04-25 03:28:51.016904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.601 [2024-04-25 03:28:51.017122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.601 [2024-04-25 03:28:51.017149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.601 qpair failed and we were unable to recover it. 00:28:16.601 [2024-04-25 03:28:51.017364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.601 [2024-04-25 03:28:51.017603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.601 [2024-04-25 03:28:51.017695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.601 qpair failed and we were unable to recover it. 00:28:16.601 [2024-04-25 03:28:51.017922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.601 [2024-04-25 03:28:51.018218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.601 [2024-04-25 03:28:51.018242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.601 qpair failed and we were unable to recover it. 
00:28:16.601 [2024-04-25 03:28:51.018473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.601 [2024-04-25 03:28:51.018790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.601 [2024-04-25 03:28:51.018819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.601 qpair failed and we were unable to recover it. 00:28:16.601 [2024-04-25 03:28:51.019046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.601 [2024-04-25 03:28:51.019243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.601 [2024-04-25 03:28:51.019268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.601 qpair failed and we were unable to recover it. 00:28:16.601 [2024-04-25 03:28:51.019498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.601 [2024-04-25 03:28:51.019753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.601 [2024-04-25 03:28:51.019781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.601 qpair failed and we were unable to recover it. 00:28:16.601 [2024-04-25 03:28:51.020009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.601 [2024-04-25 03:28:51.020219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.601 [2024-04-25 03:28:51.020246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.601 qpair failed and we were unable to recover it. 
00:28:16.601 [2024-04-25 03:28:51.020444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.601 [2024-04-25 03:28:51.020613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.601 [2024-04-25 03:28:51.020647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.601 qpair failed and we were unable to recover it. 00:28:16.601 [2024-04-25 03:28:51.020845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.601 [2024-04-25 03:28:51.021225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.601 [2024-04-25 03:28:51.021280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.601 qpair failed and we were unable to recover it. 00:28:16.601 [2024-04-25 03:28:51.021475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.601 [2024-04-25 03:28:51.021692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.601 [2024-04-25 03:28:51.021720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.601 qpair failed and we were unable to recover it. 00:28:16.601 [2024-04-25 03:28:51.021962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.601 [2024-04-25 03:28:51.022332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.601 [2024-04-25 03:28:51.022388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.601 qpair failed and we were unable to recover it. 
00:28:16.601 [2024-04-25 03:28:51.022617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.601 [2024-04-25 03:28:51.022836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.601 [2024-04-25 03:28:51.022860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.601 qpair failed and we were unable to recover it. 00:28:16.601 [2024-04-25 03:28:51.023121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.601 [2024-04-25 03:28:51.023341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.601 [2024-04-25 03:28:51.023366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.601 qpair failed and we were unable to recover it. 00:28:16.601 [2024-04-25 03:28:51.023599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.601 [2024-04-25 03:28:51.023832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.601 [2024-04-25 03:28:51.023861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.601 qpair failed and we were unable to recover it. 00:28:16.601 [2024-04-25 03:28:51.024121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.601 [2024-04-25 03:28:51.024473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.601 [2024-04-25 03:28:51.024502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.601 qpair failed and we were unable to recover it. 
00:28:16.601 [2024-04-25 03:28:51.024761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.601 [2024-04-25 03:28:51.025078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.602 [2024-04-25 03:28:51.025130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.602 qpair failed and we were unable to recover it. 00:28:16.602 [2024-04-25 03:28:51.025523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.602 [2024-04-25 03:28:51.025793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.602 [2024-04-25 03:28:51.025820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.602 qpair failed and we were unable to recover it. 00:28:16.602 [2024-04-25 03:28:51.026036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.602 [2024-04-25 03:28:51.026439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.602 [2024-04-25 03:28:51.026489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.602 qpair failed and we were unable to recover it. 00:28:16.602 [2024-04-25 03:28:51.026735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.602 [2024-04-25 03:28:51.026952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.602 [2024-04-25 03:28:51.026980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.602 qpair failed and we were unable to recover it. 
00:28:16.602 [2024-04-25 03:28:51.027198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.602 [2024-04-25 03:28:51.027396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.602 [2024-04-25 03:28:51.027420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.602 qpair failed and we were unable to recover it. 00:28:16.602 [2024-04-25 03:28:51.027683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.602 [2024-04-25 03:28:51.027931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.602 [2024-04-25 03:28:51.027958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.602 qpair failed and we were unable to recover it. 00:28:16.602 [2024-04-25 03:28:51.028185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.602 [2024-04-25 03:28:51.028403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.602 [2024-04-25 03:28:51.028430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.602 qpair failed and we were unable to recover it. 00:28:16.602 [2024-04-25 03:28:51.028613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.602 [2024-04-25 03:28:51.028832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.602 [2024-04-25 03:28:51.028860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.602 qpair failed and we were unable to recover it. 
00:28:16.602 [2024-04-25 03:28:51.029155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.602 [2024-04-25 03:28:51.029486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.602 [2024-04-25 03:28:51.029510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.602 qpair failed and we were unable to recover it. 00:28:16.602 [2024-04-25 03:28:51.029743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.602 [2024-04-25 03:28:51.030133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.602 [2024-04-25 03:28:51.030189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.602 qpair failed and we were unable to recover it. 00:28:16.602 [2024-04-25 03:28:51.030405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.602 [2024-04-25 03:28:51.030649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.602 [2024-04-25 03:28:51.030677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.602 qpair failed and we were unable to recover it. 00:28:16.602 [2024-04-25 03:28:51.030892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.602 [2024-04-25 03:28:51.031118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.602 [2024-04-25 03:28:51.031146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.602 qpair failed and we were unable to recover it. 
00:28:16.602 [2024-04-25 03:28:51.031335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.602 [2024-04-25 03:28:51.031551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.602 [2024-04-25 03:28:51.031575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.602 qpair failed and we were unable to recover it. 00:28:16.602 [2024-04-25 03:28:51.031823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.602 [2024-04-25 03:28:51.032032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.602 [2024-04-25 03:28:51.032063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.602 qpair failed and we were unable to recover it. 00:28:16.602 [2024-04-25 03:28:51.032305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.602 [2024-04-25 03:28:51.032673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.602 [2024-04-25 03:28:51.032726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.602 qpair failed and we were unable to recover it. 00:28:16.602 [2024-04-25 03:28:51.032965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.602 [2024-04-25 03:28:51.033336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.602 [2024-04-25 03:28:51.033386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.602 qpair failed and we were unable to recover it. 
00:28:16.602 [2024-04-25 03:28:51.033610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.602 [2024-04-25 03:28:51.033841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.602 [2024-04-25 03:28:51.033868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.602 qpair failed and we were unable to recover it. 00:28:16.602 [2024-04-25 03:28:51.034296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.602 [2024-04-25 03:28:51.034704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.602 [2024-04-25 03:28:51.034739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.602 qpair failed and we were unable to recover it. 00:28:16.602 [2024-04-25 03:28:51.034993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.602 [2024-04-25 03:28:51.035344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.602 [2024-04-25 03:28:51.035395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.602 qpair failed and we were unable to recover it. 00:28:16.602 [2024-04-25 03:28:51.035618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.602 [2024-04-25 03:28:51.035842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.602 [2024-04-25 03:28:51.035870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.602 qpair failed and we were unable to recover it. 
00:28:16.602 [2024-04-25 03:28:51.036259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.602 [2024-04-25 03:28:51.036718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.602 [2024-04-25 03:28:51.036745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.602 qpair failed and we were unable to recover it. 00:28:16.602 [2024-04-25 03:28:51.036965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.602 [2024-04-25 03:28:51.037201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.602 [2024-04-25 03:28:51.037234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.602 qpair failed and we were unable to recover it. 00:28:16.602 [2024-04-25 03:28:51.037492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.602 [2024-04-25 03:28:51.037718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.602 [2024-04-25 03:28:51.037747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.602 qpair failed and we were unable to recover it. 00:28:16.602 [2024-04-25 03:28:51.038006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.602 [2024-04-25 03:28:51.038355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.602 [2024-04-25 03:28:51.038409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.602 qpair failed and we were unable to recover it. 
00:28:16.876 [2024-04-25 03:28:51.083403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.876 [2024-04-25 03:28:51.083622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.876 [2024-04-25 03:28:51.083665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.876 qpair failed and we were unable to recover it. 00:28:16.876 [2024-04-25 03:28:51.083860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.876 [2024-04-25 03:28:51.084307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.876 [2024-04-25 03:28:51.084366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.876 qpair failed and we were unable to recover it. 00:28:16.876 [2024-04-25 03:28:51.084619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.876 [2024-04-25 03:28:51.084839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.876 [2024-04-25 03:28:51.084867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.876 qpair failed and we were unable to recover it. 00:28:16.876 [2024-04-25 03:28:51.085086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.876 [2024-04-25 03:28:51.085451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.876 [2024-04-25 03:28:51.085512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.876 qpair failed and we were unable to recover it. 
00:28:16.876 [2024-04-25 03:28:51.085798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.876 [2024-04-25 03:28:51.086219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.876 [2024-04-25 03:28:51.086271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.876 qpair failed and we were unable to recover it. 00:28:16.876 [2024-04-25 03:28:51.086513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.876 [2024-04-25 03:28:51.086741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.876 [2024-04-25 03:28:51.086774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.876 qpair failed and we were unable to recover it. 00:28:16.876 [2024-04-25 03:28:51.086985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.876 [2024-04-25 03:28:51.087207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.876 [2024-04-25 03:28:51.087234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.876 qpair failed and we were unable to recover it. 00:28:16.876 [2024-04-25 03:28:51.087446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.876 [2024-04-25 03:28:51.087670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.876 [2024-04-25 03:28:51.087698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.876 qpair failed and we were unable to recover it. 
00:28:16.876 [2024-04-25 03:28:51.087962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.876 [2024-04-25 03:28:51.088376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.876 [2024-04-25 03:28:51.088435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.876 qpair failed and we were unable to recover it. 00:28:16.876 [2024-04-25 03:28:51.088677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.876 [2024-04-25 03:28:51.088935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.876 [2024-04-25 03:28:51.088963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.876 qpair failed and we were unable to recover it. 00:28:16.876 [2024-04-25 03:28:51.089197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.876 [2024-04-25 03:28:51.089387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.876 [2024-04-25 03:28:51.089414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.876 qpair failed and we were unable to recover it. 00:28:16.876 [2024-04-25 03:28:51.089657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.876 [2024-04-25 03:28:51.089872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.876 [2024-04-25 03:28:51.089900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.876 qpair failed and we were unable to recover it. 
00:28:16.876 [2024-04-25 03:28:51.090113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.876 [2024-04-25 03:28:51.090530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.876 [2024-04-25 03:28:51.090579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.876 qpair failed and we were unable to recover it. 00:28:16.876 [2024-04-25 03:28:51.090777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.876 [2024-04-25 03:28:51.091149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.876 [2024-04-25 03:28:51.091207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.876 qpair failed and we were unable to recover it. 00:28:16.876 [2024-04-25 03:28:51.091450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.876 [2024-04-25 03:28:51.091625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.876 [2024-04-25 03:28:51.091658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.876 qpair failed and we were unable to recover it. 00:28:16.876 [2024-04-25 03:28:51.091904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.876 [2024-04-25 03:28:51.092246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.876 [2024-04-25 03:28:51.092300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.876 qpair failed and we were unable to recover it. 
00:28:16.876 [2024-04-25 03:28:51.092624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.876 [2024-04-25 03:28:51.092876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.876 [2024-04-25 03:28:51.092904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.876 qpair failed and we were unable to recover it. 00:28:16.876 [2024-04-25 03:28:51.093333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.876 [2024-04-25 03:28:51.093733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.876 [2024-04-25 03:28:51.093761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.876 qpair failed and we were unable to recover it. 00:28:16.876 [2024-04-25 03:28:51.093989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.876 [2024-04-25 03:28:51.094184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.876 [2024-04-25 03:28:51.094212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.876 qpair failed and we were unable to recover it. 00:28:16.876 [2024-04-25 03:28:51.094406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.876 [2024-04-25 03:28:51.094637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.876 [2024-04-25 03:28:51.094665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.876 qpair failed and we were unable to recover it. 
00:28:16.876 [2024-04-25 03:28:51.094861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.876 [2024-04-25 03:28:51.095146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.876 [2024-04-25 03:28:51.095173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.876 qpair failed and we were unable to recover it. 00:28:16.876 [2024-04-25 03:28:51.095390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.876 [2024-04-25 03:28:51.095595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.876 [2024-04-25 03:28:51.095623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.876 qpair failed and we were unable to recover it. 00:28:16.876 [2024-04-25 03:28:51.095889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.876 [2024-04-25 03:28:51.096338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.876 [2024-04-25 03:28:51.096399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.876 qpair failed and we were unable to recover it. 00:28:16.876 [2024-04-25 03:28:51.096642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.876 [2024-04-25 03:28:51.096872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.876 [2024-04-25 03:28:51.096900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.876 qpair failed and we were unable to recover it. 
00:28:16.876 [2024-04-25 03:28:51.097125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.876 [2024-04-25 03:28:51.097491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.876 [2024-04-25 03:28:51.097542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.876 qpair failed and we were unable to recover it. 00:28:16.876 [2024-04-25 03:28:51.097769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.876 [2024-04-25 03:28:51.097988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.876 [2024-04-25 03:28:51.098016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.876 qpair failed and we were unable to recover it. 00:28:16.876 [2024-04-25 03:28:51.098244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.876 [2024-04-25 03:28:51.098435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.876 [2024-04-25 03:28:51.098462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.876 qpair failed and we were unable to recover it. 00:28:16.876 [2024-04-25 03:28:51.098715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.877 [2024-04-25 03:28:51.098930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.877 [2024-04-25 03:28:51.098957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.877 qpair failed and we were unable to recover it. 
00:28:16.877 [2024-04-25 03:28:51.099144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.877 [2024-04-25 03:28:51.099384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.877 [2024-04-25 03:28:51.099408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.877 qpair failed and we were unable to recover it. 00:28:16.877 [2024-04-25 03:28:51.099611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.877 [2024-04-25 03:28:51.099829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.877 [2024-04-25 03:28:51.099857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.877 qpair failed and we were unable to recover it. 00:28:16.877 [2024-04-25 03:28:51.100076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.877 [2024-04-25 03:28:51.100266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.877 [2024-04-25 03:28:51.100294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.877 qpair failed and we were unable to recover it. 00:28:16.877 [2024-04-25 03:28:51.100516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.877 [2024-04-25 03:28:51.100773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.877 [2024-04-25 03:28:51.100801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.877 qpair failed and we were unable to recover it. 
00:28:16.877 [2024-04-25 03:28:51.101040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.877 [2024-04-25 03:28:51.101252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.877 [2024-04-25 03:28:51.101277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.877 qpair failed and we were unable to recover it. 00:28:16.877 [2024-04-25 03:28:51.101615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.877 [2024-04-25 03:28:51.101879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.877 [2024-04-25 03:28:51.101907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.877 qpair failed and we were unable to recover it. 00:28:16.877 [2024-04-25 03:28:51.102128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.877 [2024-04-25 03:28:51.102523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.877 [2024-04-25 03:28:51.102575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.877 qpair failed and we were unable to recover it. 00:28:16.877 [2024-04-25 03:28:51.102796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.877 [2024-04-25 03:28:51.103171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.877 [2024-04-25 03:28:51.103216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.877 qpair failed and we were unable to recover it. 
00:28:16.877 [2024-04-25 03:28:51.103459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.877 [2024-04-25 03:28:51.103679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.877 [2024-04-25 03:28:51.103704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.877 qpair failed and we were unable to recover it. 00:28:16.877 [2024-04-25 03:28:51.103936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.877 [2024-04-25 03:28:51.104196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.877 [2024-04-25 03:28:51.104234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.877 qpair failed and we were unable to recover it. 00:28:16.877 [2024-04-25 03:28:51.104488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.877 [2024-04-25 03:28:51.104795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.877 [2024-04-25 03:28:51.104823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.877 qpair failed and we were unable to recover it. 00:28:16.877 [2024-04-25 03:28:51.105074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.877 [2024-04-25 03:28:51.105359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.877 [2024-04-25 03:28:51.105382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.877 qpair failed and we were unable to recover it. 
00:28:16.877 [2024-04-25 03:28:51.105637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.877 [2024-04-25 03:28:51.105870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.877 [2024-04-25 03:28:51.105897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.877 qpair failed and we were unable to recover it. 00:28:16.877 [2024-04-25 03:28:51.106158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.877 [2024-04-25 03:28:51.106357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.877 [2024-04-25 03:28:51.106381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.877 qpair failed and we were unable to recover it. 00:28:16.877 [2024-04-25 03:28:51.106644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.877 [2024-04-25 03:28:51.106849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.877 [2024-04-25 03:28:51.106877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.877 qpair failed and we were unable to recover it. 00:28:16.877 [2024-04-25 03:28:51.107102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.877 [2024-04-25 03:28:51.107365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.877 [2024-04-25 03:28:51.107392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.877 qpair failed and we were unable to recover it. 
00:28:16.877 [2024-04-25 03:28:51.107599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.877 [2024-04-25 03:28:51.107849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.877 [2024-04-25 03:28:51.107877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.877 qpair failed and we were unable to recover it. 00:28:16.877 [2024-04-25 03:28:51.108105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.877 [2024-04-25 03:28:51.108360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.877 [2024-04-25 03:28:51.108408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.877 qpair failed and we were unable to recover it. 00:28:16.877 [2024-04-25 03:28:51.108619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.877 [2024-04-25 03:28:51.108850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.877 [2024-04-25 03:28:51.108879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.877 qpair failed and we were unable to recover it. 00:28:16.877 [2024-04-25 03:28:51.109156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.877 [2024-04-25 03:28:51.109334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.877 [2024-04-25 03:28:51.109361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.877 qpair failed and we were unable to recover it. 
00:28:16.877 [2024-04-25 03:28:51.109582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.877 [2024-04-25 03:28:51.109807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.877 [2024-04-25 03:28:51.109834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.877 qpair failed and we were unable to recover it. 00:28:16.877 [2024-04-25 03:28:51.110181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.877 [2024-04-25 03:28:51.110602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.877 [2024-04-25 03:28:51.110661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.877 qpair failed and we were unable to recover it. 00:28:16.877 [2024-04-25 03:28:51.110884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.877 [2024-04-25 03:28:51.111096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.877 [2024-04-25 03:28:51.111124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.877 qpair failed and we were unable to recover it. 00:28:16.877 [2024-04-25 03:28:51.111365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.877 [2024-04-25 03:28:51.111606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.877 [2024-04-25 03:28:51.111649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.877 qpair failed and we were unable to recover it. 
00:28:16.877 [2024-04-25 03:28:51.111868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.877 [2024-04-25 03:28:51.112115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.877 [2024-04-25 03:28:51.112142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.877 qpair failed and we were unable to recover it. 00:28:16.877 [2024-04-25 03:28:51.112469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.877 [2024-04-25 03:28:51.112682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.877 [2024-04-25 03:28:51.112710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.877 qpair failed and we were unable to recover it. 00:28:16.877 [2024-04-25 03:28:51.112906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.877 [2024-04-25 03:28:51.113473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.877 [2024-04-25 03:28:51.113529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.877 qpair failed and we were unable to recover it. 00:28:16.877 [2024-04-25 03:28:51.113763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.877 [2024-04-25 03:28:51.113963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.877 [2024-04-25 03:28:51.114008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.877 qpair failed and we were unable to recover it. 
00:28:16.877 [2024-04-25 03:28:51.114263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.878 [2024-04-25 03:28:51.114697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.878 [2024-04-25 03:28:51.114729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.878 qpair failed and we were unable to recover it. 00:28:16.878 [2024-04-25 03:28:51.114959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.878 [2024-04-25 03:28:51.115191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.878 [2024-04-25 03:28:51.115239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.878 qpair failed and we were unable to recover it. 00:28:16.878 [2024-04-25 03:28:51.115467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.878 [2024-04-25 03:28:51.115712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.878 [2024-04-25 03:28:51.115740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.878 qpair failed and we were unable to recover it. 00:28:16.878 [2024-04-25 03:28:51.115962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.878 [2024-04-25 03:28:51.116177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.878 [2024-04-25 03:28:51.116205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.878 qpair failed and we were unable to recover it. 
00:28:16.878 [... the same failure cycle (posix_sock_create connect() errno = 111, then nvme_tcp_qpair_connect_sock error for tqpair=0xd4cf30 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats continuously from [2024-04-25 03:28:51.116477] through [2024-04-25 03:28:51.163184]; duplicate records elided ...]
00:28:16.881 [2024-04-25 03:28:51.163400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.881 [2024-04-25 03:28:51.163593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.881 [2024-04-25 03:28:51.163617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.881 qpair failed and we were unable to recover it. 00:28:16.881 [2024-04-25 03:28:51.163879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.881 [2024-04-25 03:28:51.164126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.881 [2024-04-25 03:28:51.164172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.881 qpair failed and we were unable to recover it. 00:28:16.881 [2024-04-25 03:28:51.164367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.881 [2024-04-25 03:28:51.164610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.881 [2024-04-25 03:28:51.164642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.881 qpair failed and we were unable to recover it. 00:28:16.881 [2024-04-25 03:28:51.164888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.881 [2024-04-25 03:28:51.165205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.881 [2024-04-25 03:28:51.165264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.881 qpair failed and we were unable to recover it. 
00:28:16.881 [2024-04-25 03:28:51.165540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.881 [2024-04-25 03:28:51.165799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.881 [2024-04-25 03:28:51.165831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.881 qpair failed and we were unable to recover it. 00:28:16.881 [2024-04-25 03:28:51.166266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.881 [2024-04-25 03:28:51.166545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.881 [2024-04-25 03:28:51.166573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.881 qpair failed and we were unable to recover it. 00:28:16.881 [2024-04-25 03:28:51.166826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.881 [2024-04-25 03:28:51.167227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.881 [2024-04-25 03:28:51.167273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.881 qpair failed and we were unable to recover it. 00:28:16.881 [2024-04-25 03:28:51.167496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.881 [2024-04-25 03:28:51.167706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.881 [2024-04-25 03:28:51.167734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.881 qpair failed and we were unable to recover it. 
00:28:16.881 [2024-04-25 03:28:51.167965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.881 [2024-04-25 03:28:51.168219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.881 [2024-04-25 03:28:51.168247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.881 qpair failed and we were unable to recover it. 00:28:16.881 [2024-04-25 03:28:51.168462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.881 [2024-04-25 03:28:51.168769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.881 [2024-04-25 03:28:51.168809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.881 qpair failed and we were unable to recover it. 00:28:16.881 [2024-04-25 03:28:51.169038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.881 [2024-04-25 03:28:51.169321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.881 [2024-04-25 03:28:51.169374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.881 qpair failed and we were unable to recover it. 00:28:16.881 [2024-04-25 03:28:51.169596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.881 [2024-04-25 03:28:51.169854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.881 [2024-04-25 03:28:51.169882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.881 qpair failed and we were unable to recover it. 
00:28:16.881 [2024-04-25 03:28:51.170117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.881 [2024-04-25 03:28:51.170553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.881 [2024-04-25 03:28:51.170603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.881 qpair failed and we were unable to recover it. 00:28:16.881 [2024-04-25 03:28:51.170806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.881 [2024-04-25 03:28:51.171025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.881 [2024-04-25 03:28:51.171050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.881 qpair failed and we were unable to recover it. 00:28:16.881 [2024-04-25 03:28:51.171280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.881 [2024-04-25 03:28:51.171493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.881 [2024-04-25 03:28:51.171517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.881 qpair failed and we were unable to recover it. 00:28:16.881 [2024-04-25 03:28:51.171730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.881 [2024-04-25 03:28:51.171972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.881 [2024-04-25 03:28:51.172014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.881 qpair failed and we were unable to recover it. 
00:28:16.881 [2024-04-25 03:28:51.172255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.881 [2024-04-25 03:28:51.172478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.881 [2024-04-25 03:28:51.172517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.881 qpair failed and we were unable to recover it. 00:28:16.881 [2024-04-25 03:28:51.172762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.881 [2024-04-25 03:28:51.172984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.881 [2024-04-25 03:28:51.173012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.881 qpair failed and we were unable to recover it. 00:28:16.881 [2024-04-25 03:28:51.173210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.881 [2024-04-25 03:28:51.173420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.881 [2024-04-25 03:28:51.173447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.881 qpair failed and we were unable to recover it. 00:28:16.881 [2024-04-25 03:28:51.173667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.881 [2024-04-25 03:28:51.173913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.881 [2024-04-25 03:28:51.173964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.881 qpair failed and we were unable to recover it. 
00:28:16.881 [2024-04-25 03:28:51.174250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.881 [2024-04-25 03:28:51.174661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.881 [2024-04-25 03:28:51.174710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.881 qpair failed and we were unable to recover it. 00:28:16.881 [2024-04-25 03:28:51.174934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.881 [2024-04-25 03:28:51.175153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.881 [2024-04-25 03:28:51.175181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.881 qpair failed and we were unable to recover it. 00:28:16.881 [2024-04-25 03:28:51.175391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.881 [2024-04-25 03:28:51.175650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.881 [2024-04-25 03:28:51.175690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.881 qpair failed and we were unable to recover it. 00:28:16.882 [2024-04-25 03:28:51.175938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.882 [2024-04-25 03:28:51.176227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.882 [2024-04-25 03:28:51.176273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.882 qpair failed and we were unable to recover it. 
00:28:16.882 [2024-04-25 03:28:51.176485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.882 [2024-04-25 03:28:51.176689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.882 [2024-04-25 03:28:51.176714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.882 qpair failed and we were unable to recover it. 00:28:16.882 [2024-04-25 03:28:51.176945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.882 [2024-04-25 03:28:51.177123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.882 [2024-04-25 03:28:51.177155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.882 qpair failed and we were unable to recover it. 00:28:16.882 [2024-04-25 03:28:51.177345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.882 [2024-04-25 03:28:51.177566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.882 [2024-04-25 03:28:51.177593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.882 qpair failed and we were unable to recover it. 00:28:16.882 [2024-04-25 03:28:51.177851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.882 [2024-04-25 03:28:51.178105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.882 [2024-04-25 03:28:51.178153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.882 qpair failed and we were unable to recover it. 
00:28:16.882 [2024-04-25 03:28:51.178476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.882 [2024-04-25 03:28:51.178739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.882 [2024-04-25 03:28:51.178767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.882 qpair failed and we were unable to recover it. 00:28:16.882 [2024-04-25 03:28:51.178953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.882 [2024-04-25 03:28:51.179437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.882 [2024-04-25 03:28:51.179484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.882 qpair failed and we were unable to recover it. 00:28:16.882 [2024-04-25 03:28:51.179714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.882 [2024-04-25 03:28:51.179930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.882 [2024-04-25 03:28:51.179958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.882 qpair failed and we were unable to recover it. 00:28:16.882 [2024-04-25 03:28:51.180176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.882 [2024-04-25 03:28:51.180449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.882 [2024-04-25 03:28:51.180497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.882 qpair failed and we were unable to recover it. 
00:28:16.882 [2024-04-25 03:28:51.180788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.882 [2024-04-25 03:28:51.181021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.882 [2024-04-25 03:28:51.181048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.882 qpair failed and we were unable to recover it. 00:28:16.882 [2024-04-25 03:28:51.181312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.882 [2024-04-25 03:28:51.181589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.882 [2024-04-25 03:28:51.181626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.882 qpair failed and we were unable to recover it. 00:28:16.882 [2024-04-25 03:28:51.181890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.882 [2024-04-25 03:28:51.182166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.882 [2024-04-25 03:28:51.182194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.882 qpair failed and we were unable to recover it. 00:28:16.882 [2024-04-25 03:28:51.182446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.882 [2024-04-25 03:28:51.182724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.882 [2024-04-25 03:28:51.182752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.882 qpair failed and we were unable to recover it. 
00:28:16.882 [2024-04-25 03:28:51.182975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.882 [2024-04-25 03:28:51.183193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.882 [2024-04-25 03:28:51.183226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.882 qpair failed and we were unable to recover it. 00:28:16.882 [2024-04-25 03:28:51.183495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.882 [2024-04-25 03:28:51.183780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.882 [2024-04-25 03:28:51.183820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.882 qpair failed and we were unable to recover it. 00:28:16.882 [2024-04-25 03:28:51.184101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.882 [2024-04-25 03:28:51.184354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.882 [2024-04-25 03:28:51.184400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.882 qpair failed and we were unable to recover it. 00:28:16.882 [2024-04-25 03:28:51.184632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.882 [2024-04-25 03:28:51.184866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.882 [2024-04-25 03:28:51.184894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.882 qpair failed and we were unable to recover it. 
00:28:16.882 [2024-04-25 03:28:51.185104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.882 [2024-04-25 03:28:51.185305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.882 [2024-04-25 03:28:51.185330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.882 qpair failed and we were unable to recover it. 00:28:16.882 [2024-04-25 03:28:51.185578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.882 [2024-04-25 03:28:51.185812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.882 [2024-04-25 03:28:51.185840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.882 qpair failed and we were unable to recover it. 00:28:16.882 [2024-04-25 03:28:51.186061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.882 [2024-04-25 03:28:51.186275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.882 [2024-04-25 03:28:51.186302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.882 qpair failed and we were unable to recover it. 00:28:16.882 [2024-04-25 03:28:51.186549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.882 [2024-04-25 03:28:51.186760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.882 [2024-04-25 03:28:51.186789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.882 qpair failed and we were unable to recover it. 
00:28:16.882 [2024-04-25 03:28:51.186997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.882 [2024-04-25 03:28:51.187241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.882 [2024-04-25 03:28:51.187288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.882 qpair failed and we were unable to recover it. 00:28:16.882 [2024-04-25 03:28:51.187532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.882 [2024-04-25 03:28:51.187790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.882 [2024-04-25 03:28:51.187819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.882 qpair failed and we were unable to recover it. 00:28:16.882 [2024-04-25 03:28:51.187997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.882 [2024-04-25 03:28:51.188246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.882 [2024-04-25 03:28:51.188286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.882 qpair failed and we were unable to recover it. 00:28:16.882 [2024-04-25 03:28:51.188488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.882 [2024-04-25 03:28:51.188747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.882 [2024-04-25 03:28:51.188775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.882 qpair failed and we were unable to recover it. 
00:28:16.882 [2024-04-25 03:28:51.188988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.882 [2024-04-25 03:28:51.189253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.882 [2024-04-25 03:28:51.189281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.882 qpair failed and we were unable to recover it. 00:28:16.882 [2024-04-25 03:28:51.189556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.882 [2024-04-25 03:28:51.189824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.882 [2024-04-25 03:28:51.189852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.882 qpair failed and we were unable to recover it. 00:28:16.882 [2024-04-25 03:28:51.190084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.882 [2024-04-25 03:28:51.190258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.882 [2024-04-25 03:28:51.190286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.882 qpair failed and we were unable to recover it. 00:28:16.882 [2024-04-25 03:28:51.190509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.882 [2024-04-25 03:28:51.190761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.882 [2024-04-25 03:28:51.190786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.882 qpair failed and we were unable to recover it. 
00:28:16.883 [2024-04-25 03:28:51.190987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.883 [2024-04-25 03:28:51.191196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.883 [2024-04-25 03:28:51.191224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.883 qpair failed and we were unable to recover it. 00:28:16.883 [2024-04-25 03:28:51.191499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.883 [2024-04-25 03:28:51.191760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.883 [2024-04-25 03:28:51.191789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.883 qpair failed and we were unable to recover it. 00:28:16.883 [2024-04-25 03:28:51.191977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.883 [2024-04-25 03:28:51.192210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.883 [2024-04-25 03:28:51.192253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.883 qpair failed and we were unable to recover it. 00:28:16.883 [2024-04-25 03:28:51.192482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.883 [2024-04-25 03:28:51.192736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.883 [2024-04-25 03:28:51.192769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.883 qpair failed and we were unable to recover it. 
00:28:16.883 [2024-04-25 03:28:51.193020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.883 [2024-04-25 03:28:51.193228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.883 [2024-04-25 03:28:51.193281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:16.883 qpair failed and we were unable to recover it.
00:28:16.885 [2024-04-25 03:28:51.215810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.885 [2024-04-25 03:28:51.216055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.885 [2024-04-25 03:28:51.216082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.885 qpair failed and we were unable to recover it.
00:28:16.886 [2024-04-25 03:28:51.231587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.886 [2024-04-25 03:28:51.231759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.886 [2024-04-25 03:28:51.231786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.886 qpair failed and we were unable to recover it. 00:28:16.886 [2024-04-25 03:28:51.231955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.886 [2024-04-25 03:28:51.232140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.886 [2024-04-25 03:28:51.232165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.886 qpair failed and we were unable to recover it. 00:28:16.886 [2024-04-25 03:28:51.232362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.886 [2024-04-25 03:28:51.232536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.886 [2024-04-25 03:28:51.232561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.886 qpair failed and we were unable to recover it. 00:28:16.886 [2024-04-25 03:28:51.232780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.886 [2024-04-25 03:28:51.232956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.886 [2024-04-25 03:28:51.232982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.886 qpair failed and we were unable to recover it. 
00:28:16.886 [2024-04-25 03:28:51.233217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.886 [2024-04-25 03:28:51.233439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.886 [2024-04-25 03:28:51.233465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.886 qpair failed and we were unable to recover it. 00:28:16.886 [2024-04-25 03:28:51.233642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.886 [2024-04-25 03:28:51.233809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.886 [2024-04-25 03:28:51.233835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.886 qpair failed and we were unable to recover it. 00:28:16.886 [2024-04-25 03:28:51.234017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.886 [2024-04-25 03:28:51.234216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.886 [2024-04-25 03:28:51.234241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.886 qpair failed and we were unable to recover it. 00:28:16.886 [2024-04-25 03:28:51.234487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.886 [2024-04-25 03:28:51.234677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.886 [2024-04-25 03:28:51.234714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.886 qpair failed and we were unable to recover it. 
00:28:16.886 [2024-04-25 03:28:51.234927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.886 [2024-04-25 03:28:51.235159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.886 [2024-04-25 03:28:51.235184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.886 qpair failed and we were unable to recover it. 00:28:16.886 [2024-04-25 03:28:51.235405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.886 [2024-04-25 03:28:51.235625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.886 [2024-04-25 03:28:51.235662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.886 qpair failed and we were unable to recover it. 00:28:16.886 [2024-04-25 03:28:51.235892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.886 [2024-04-25 03:28:51.236089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.886 [2024-04-25 03:28:51.236115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.886 qpair failed and we were unable to recover it. 00:28:16.886 [2024-04-25 03:28:51.236310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.886 [2024-04-25 03:28:51.236534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.886 [2024-04-25 03:28:51.236560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.886 qpair failed and we were unable to recover it. 
00:28:16.886 [2024-04-25 03:28:51.236731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.886 [2024-04-25 03:28:51.236907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.886 [2024-04-25 03:28:51.236933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.886 qpair failed and we were unable to recover it. 00:28:16.886 [2024-04-25 03:28:51.237137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.886 [2024-04-25 03:28:51.237350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.886 [2024-04-25 03:28:51.237375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.886 qpair failed and we were unable to recover it. 00:28:16.886 [2024-04-25 03:28:51.237596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.886 [2024-04-25 03:28:51.237773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.886 [2024-04-25 03:28:51.237798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.886 qpair failed and we were unable to recover it. 00:28:16.886 [2024-04-25 03:28:51.237998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.886 [2024-04-25 03:28:51.238191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.886 [2024-04-25 03:28:51.238217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.886 qpair failed and we were unable to recover it. 
00:28:16.886 [2024-04-25 03:28:51.238399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.886 [2024-04-25 03:28:51.238602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.886 [2024-04-25 03:28:51.238633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.886 qpair failed and we were unable to recover it. 00:28:16.886 [2024-04-25 03:28:51.238834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.886 [2024-04-25 03:28:51.239073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.886 [2024-04-25 03:28:51.239130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.886 qpair failed and we were unable to recover it. 00:28:16.886 [2024-04-25 03:28:51.239383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.886 [2024-04-25 03:28:51.239554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.886 [2024-04-25 03:28:51.239580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.886 qpair failed and we were unable to recover it. 00:28:16.886 [2024-04-25 03:28:51.239803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.886 [2024-04-25 03:28:51.239975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.886 [2024-04-25 03:28:51.240000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.887 qpair failed and we were unable to recover it. 
00:28:16.887 [2024-04-25 03:28:51.240217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.887 [2024-04-25 03:28:51.240595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.887 [2024-04-25 03:28:51.240621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.887 qpair failed and we were unable to recover it. 00:28:16.887 [2024-04-25 03:28:51.240821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.887 [2024-04-25 03:28:51.240988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.887 [2024-04-25 03:28:51.241014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.887 qpair failed and we were unable to recover it. 00:28:16.887 [2024-04-25 03:28:51.241203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.887 [2024-04-25 03:28:51.241407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.887 [2024-04-25 03:28:51.241434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.887 qpair failed and we were unable to recover it. 00:28:16.887 [2024-04-25 03:28:51.241658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.887 [2024-04-25 03:28:51.241892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.887 [2024-04-25 03:28:51.241927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.887 qpair failed and we were unable to recover it. 
00:28:16.887 [2024-04-25 03:28:51.242165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.887 [2024-04-25 03:28:51.242397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.887 [2024-04-25 03:28:51.242422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.887 qpair failed and we were unable to recover it. 00:28:16.887 [2024-04-25 03:28:51.243748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.887 [2024-04-25 03:28:51.243984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.887 [2024-04-25 03:28:51.244015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.887 qpair failed and we were unable to recover it. 00:28:16.887 [2024-04-25 03:28:51.244244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.887 [2024-04-25 03:28:51.244414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.887 [2024-04-25 03:28:51.244440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.887 qpair failed and we were unable to recover it. 00:28:16.887 [2024-04-25 03:28:51.244675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.887 [2024-04-25 03:28:51.244872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.887 [2024-04-25 03:28:51.244902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.887 qpair failed and we were unable to recover it. 
00:28:16.887 [2024-04-25 03:28:51.245145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.887 [2024-04-25 03:28:51.245374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.887 [2024-04-25 03:28:51.245422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.887 qpair failed and we were unable to recover it. 00:28:16.887 [2024-04-25 03:28:51.245640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.887 [2024-04-25 03:28:51.245839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.887 [2024-04-25 03:28:51.245865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.887 qpair failed and we were unable to recover it. 00:28:16.887 [2024-04-25 03:28:51.246039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.887 [2024-04-25 03:28:51.246237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.887 [2024-04-25 03:28:51.246263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.887 qpair failed and we were unable to recover it. 00:28:16.887 [2024-04-25 03:28:51.246518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.887 [2024-04-25 03:28:51.246743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.887 [2024-04-25 03:28:51.246773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.887 qpair failed and we were unable to recover it. 
00:28:16.887 [2024-04-25 03:28:51.246994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.887 [2024-04-25 03:28:51.247162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.887 [2024-04-25 03:28:51.247189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.887 qpair failed and we were unable to recover it. 00:28:16.887 [2024-04-25 03:28:51.247369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.887 [2024-04-25 03:28:51.247593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.887 [2024-04-25 03:28:51.247619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.887 qpair failed and we were unable to recover it. 00:28:16.887 [2024-04-25 03:28:51.247874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.887 [2024-04-25 03:28:51.248096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.887 [2024-04-25 03:28:51.248122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.887 qpair failed and we were unable to recover it. 00:28:16.887 [2024-04-25 03:28:51.248341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.887 [2024-04-25 03:28:51.248581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.887 [2024-04-25 03:28:51.248606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.887 qpair failed and we were unable to recover it. 
00:28:16.887 [2024-04-25 03:28:51.248810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.887 [2024-04-25 03:28:51.249051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.887 [2024-04-25 03:28:51.249096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.887 qpair failed and we were unable to recover it. 00:28:16.887 [2024-04-25 03:28:51.250187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.887 [2024-04-25 03:28:51.250606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.887 [2024-04-25 03:28:51.250642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.887 qpair failed and we were unable to recover it. 00:28:16.887 [2024-04-25 03:28:51.250828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.887 [2024-04-25 03:28:51.250999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.887 [2024-04-25 03:28:51.251024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.887 qpair failed and we were unable to recover it. 00:28:16.887 [2024-04-25 03:28:51.251245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.887 [2024-04-25 03:28:51.251542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.887 [2024-04-25 03:28:51.251612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.887 qpair failed and we were unable to recover it. 
00:28:16.887 [2024-04-25 03:28:51.251827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.887 [2024-04-25 03:28:51.252024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.887 [2024-04-25 03:28:51.252050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.887 qpair failed and we were unable to recover it. 00:28:16.887 [2024-04-25 03:28:51.252253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.887 [2024-04-25 03:28:51.252451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.887 [2024-04-25 03:28:51.252477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.887 qpair failed and we were unable to recover it. 00:28:16.887 [2024-04-25 03:28:51.252683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.887 [2024-04-25 03:28:51.252910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.887 [2024-04-25 03:28:51.252962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.887 qpair failed and we were unable to recover it. 00:28:16.887 [2024-04-25 03:28:51.253237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.887 [2024-04-25 03:28:51.253477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.887 [2024-04-25 03:28:51.253522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.887 qpair failed and we were unable to recover it. 
00:28:16.887 [2024-04-25 03:28:51.253727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.887 [2024-04-25 03:28:51.253929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.887 [2024-04-25 03:28:51.253955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.887 qpair failed and we were unable to recover it. 00:28:16.887 [2024-04-25 03:28:51.254195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.887 [2024-04-25 03:28:51.254992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.887 [2024-04-25 03:28:51.255023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.887 qpair failed and we were unable to recover it. 00:28:16.887 [2024-04-25 03:28:51.255240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.887 [2024-04-25 03:28:51.255448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.887 [2024-04-25 03:28:51.255473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.887 qpair failed and we were unable to recover it. 00:28:16.887 [2024-04-25 03:28:51.255687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.887 [2024-04-25 03:28:51.255896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.887 [2024-04-25 03:28:51.255940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.887 qpair failed and we were unable to recover it. 
00:28:16.887 [2024-04-25 03:28:51.256195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.887 [2024-04-25 03:28:51.256424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.887 [2024-04-25 03:28:51.256456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.888 qpair failed and we were unable to recover it. 00:28:16.888 [2024-04-25 03:28:51.256673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.888 [2024-04-25 03:28:51.256870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.888 [2024-04-25 03:28:51.256896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.888 qpair failed and we were unable to recover it. 00:28:16.888 [2024-04-25 03:28:51.257160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.888 [2024-04-25 03:28:51.257376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.888 [2024-04-25 03:28:51.257406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.888 qpair failed and we were unable to recover it. 00:28:16.888 [2024-04-25 03:28:51.257625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.888 [2024-04-25 03:28:51.257871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.888 [2024-04-25 03:28:51.257898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.888 qpair failed and we were unable to recover it. 
00:28:16.888 [2024-04-25 03:28:51.258127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.888 [2024-04-25 03:28:51.258298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.888 [2024-04-25 03:28:51.258334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.888 qpair failed and we were unable to recover it. 00:28:16.888 [2024-04-25 03:28:51.258547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.888 [2024-04-25 03:28:51.258747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.888 [2024-04-25 03:28:51.258774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.888 qpair failed and we were unable to recover it. 00:28:16.888 [2024-04-25 03:28:51.258950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.888 [2024-04-25 03:28:51.259143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.888 [2024-04-25 03:28:51.259186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.888 qpair failed and we were unable to recover it. 00:28:16.888 [2024-04-25 03:28:51.259425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.888 [2024-04-25 03:28:51.259604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.888 [2024-04-25 03:28:51.259658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.888 qpair failed and we were unable to recover it. 
00:28:16.888 [2024-04-25 03:28:51.259861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.888 [2024-04-25 03:28:51.260069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.888 [2024-04-25 03:28:51.260094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.888 qpair failed and we were unable to recover it. 00:28:16.888 [2024-04-25 03:28:51.260358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.888 [2024-04-25 03:28:51.260593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.888 [2024-04-25 03:28:51.260620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.888 qpair failed and we were unable to recover it. 00:28:16.888 [2024-04-25 03:28:51.260827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.888 [2024-04-25 03:28:51.260991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.888 [2024-04-25 03:28:51.261049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.888 qpair failed and we were unable to recover it. 00:28:16.888 [2024-04-25 03:28:51.261259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.888 [2024-04-25 03:28:51.261450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.888 [2024-04-25 03:28:51.261479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.888 qpair failed and we were unable to recover it. 
00:28:16.888 [2024-04-25 03:28:51.261710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.888 [2024-04-25 03:28:51.261929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.888 [2024-04-25 03:28:51.261957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.888 qpair failed and we were unable to recover it. 00:28:16.888 [2024-04-25 03:28:51.262195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.888 [2024-04-25 03:28:51.262406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.888 [2024-04-25 03:28:51.262442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.888 qpair failed and we were unable to recover it. 00:28:16.888 [2024-04-25 03:28:51.262666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.888 [2024-04-25 03:28:51.262840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.888 [2024-04-25 03:28:51.262866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.888 qpair failed and we were unable to recover it. 00:28:16.888 [2024-04-25 03:28:51.263088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.888 [2024-04-25 03:28:51.263306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.888 [2024-04-25 03:28:51.263335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.888 qpair failed and we were unable to recover it. 
00:28:16.888 [2024-04-25 03:28:51.263549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.888 [2024-04-25 03:28:51.263747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.888 [2024-04-25 03:28:51.263774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:16.888 qpair failed and we were unable to recover it.
00:28:16.888 [2024-04-25 03:28:51.263993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.888 [2024-04-25 03:28:51.264232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.888 [2024-04-25 03:28:51.264276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:16.888 qpair failed and we were unable to recover it.
00:28:16.888 [2024-04-25 03:28:51.264563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.888 [2024-04-25 03:28:51.264752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.888 [2024-04-25 03:28:51.264778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:16.888 qpair failed and we were unable to recover it.
00:28:16.888 [2024-04-25 03:28:51.265004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.888 [2024-04-25 03:28:51.265220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.888 [2024-04-25 03:28:51.265249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:16.888 qpair failed and we were unable to recover it.
00:28:16.888 [2024-04-25 03:28:51.265466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.888 [2024-04-25 03:28:51.265713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.888 [2024-04-25 03:28:51.265739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:16.888 qpair failed and we were unable to recover it.
00:28:16.888 [2024-04-25 03:28:51.265912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.888 [2024-04-25 03:28:51.266235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.888 [2024-04-25 03:28:51.266280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:16.888 qpair failed and we were unable to recover it.
00:28:16.888 [2024-04-25 03:28:51.266569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.888 [2024-04-25 03:28:51.266743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.888 [2024-04-25 03:28:51.266771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:16.888 qpair failed and we were unable to recover it.
00:28:16.888 [2024-04-25 03:28:51.266959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.888 [2024-04-25 03:28:51.267137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.888 [2024-04-25 03:28:51.267164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:16.888 qpair failed and we were unable to recover it.
00:28:16.888 [2024-04-25 03:28:51.267507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.888 [2024-04-25 03:28:51.267706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.888 [2024-04-25 03:28:51.267733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:16.888 qpair failed and we were unable to recover it.
00:28:16.888 [2024-04-25 03:28:51.267908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.888 [2024-04-25 03:28:51.268183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.888 [2024-04-25 03:28:51.268230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:16.888 qpair failed and we were unable to recover it.
00:28:16.888 [2024-04-25 03:28:51.268447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.888 [2024-04-25 03:28:51.268722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.888 [2024-04-25 03:28:51.268749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:16.888 qpair failed and we were unable to recover it.
00:28:16.888 [2024-04-25 03:28:51.268933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.888 [2024-04-25 03:28:51.269184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.888 [2024-04-25 03:28:51.269212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:16.888 qpair failed and we were unable to recover it.
00:28:16.888 [2024-04-25 03:28:51.269543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.888 [2024-04-25 03:28:51.269789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.888 [2024-04-25 03:28:51.269815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:16.888 qpair failed and we were unable to recover it.
00:28:16.888 [2024-04-25 03:28:51.270033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.888 [2024-04-25 03:28:51.270483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.889 [2024-04-25 03:28:51.270535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:16.889 qpair failed and we were unable to recover it.
00:28:16.889 [2024-04-25 03:28:51.270773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.889 [2024-04-25 03:28:51.270987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.889 [2024-04-25 03:28:51.271014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:16.889 qpair failed and we were unable to recover it.
00:28:16.889 [2024-04-25 03:28:51.271189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.889 [2024-04-25 03:28:51.271393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.889 [2024-04-25 03:28:51.271422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:16.889 qpair failed and we were unable to recover it.
00:28:16.889 [2024-04-25 03:28:51.271641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.889 [2024-04-25 03:28:51.271844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.889 [2024-04-25 03:28:51.271869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:16.889 qpair failed and we were unable to recover it.
00:28:16.889 [2024-04-25 03:28:51.272086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.889 [2024-04-25 03:28:51.272280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.889 [2024-04-25 03:28:51.272326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:16.889 qpair failed and we were unable to recover it.
00:28:16.889 [2024-04-25 03:28:51.272570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.889 [2024-04-25 03:28:51.272790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.889 [2024-04-25 03:28:51.272816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:16.889 qpair failed and we were unable to recover it.
00:28:16.889 [2024-04-25 03:28:51.273089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.889 [2024-04-25 03:28:51.273365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.889 [2024-04-25 03:28:51.273393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:16.889 qpair failed and we were unable to recover it.
00:28:16.889 [2024-04-25 03:28:51.273663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.889 [2024-04-25 03:28:51.273849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.889 [2024-04-25 03:28:51.273874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:16.889 qpair failed and we were unable to recover it.
00:28:16.889 [2024-04-25 03:28:51.274090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.889 [2024-04-25 03:28:51.274320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.889 [2024-04-25 03:28:51.274350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:16.889 qpair failed and we were unable to recover it.
00:28:16.889 [2024-04-25 03:28:51.274571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.889 [2024-04-25 03:28:51.274783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.889 [2024-04-25 03:28:51.274810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:16.889 qpair failed and we were unable to recover it.
00:28:16.889 [2024-04-25 03:28:51.275259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.889 [2024-04-25 03:28:51.275486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.889 [2024-04-25 03:28:51.275513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:16.889 qpair failed and we were unable to recover it.
00:28:16.889 [2024-04-25 03:28:51.275742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.889 [2024-04-25 03:28:51.275915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.889 [2024-04-25 03:28:51.275945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:16.889 qpair failed and we were unable to recover it.
00:28:16.889 [2024-04-25 03:28:51.276146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.889 [2024-04-25 03:28:51.276310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.889 [2024-04-25 03:28:51.276336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:16.889 qpair failed and we were unable to recover it.
00:28:16.889 [2024-04-25 03:28:51.276538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.889 [2024-04-25 03:28:51.276744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.889 [2024-04-25 03:28:51.276771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:16.889 qpair failed and we were unable to recover it.
00:28:16.889 [2024-04-25 03:28:51.276942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.889 [2024-04-25 03:28:51.277125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.889 [2024-04-25 03:28:51.277151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:16.889 qpair failed and we were unable to recover it.
00:28:16.889 [2024-04-25 03:28:51.277377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.889 [2024-04-25 03:28:51.277572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.889 [2024-04-25 03:28:51.277598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:16.889 qpair failed and we were unable to recover it.
00:28:16.889 [2024-04-25 03:28:51.277782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.889 [2024-04-25 03:28:51.277976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.889 [2024-04-25 03:28:51.278030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:16.889 qpair failed and we were unable to recover it.
00:28:16.889 [2024-04-25 03:28:51.278304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.889 [2024-04-25 03:28:51.278543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.889 [2024-04-25 03:28:51.278584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:16.889 qpair failed and we were unable to recover it.
00:28:16.889 [2024-04-25 03:28:51.278778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.889 [2024-04-25 03:28:51.279001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.889 [2024-04-25 03:28:51.279031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:16.889 qpair failed and we were unable to recover it.
00:28:16.889 [2024-04-25 03:28:51.279285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.889 [2024-04-25 03:28:51.279553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.889 [2024-04-25 03:28:51.279579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:16.889 qpair failed and we were unable to recover it.
00:28:16.889 [2024-04-25 03:28:51.279766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.889 [2024-04-25 03:28:51.279979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.889 [2024-04-25 03:28:51.280021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:16.889 qpair failed and we were unable to recover it.
00:28:16.889 [2024-04-25 03:28:51.280291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.889 [2024-04-25 03:28:51.280528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.889 [2024-04-25 03:28:51.280554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:16.889 qpair failed and we were unable to recover it.
00:28:16.889 [2024-04-25 03:28:51.280724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.889 [2024-04-25 03:28:51.280887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.889 [2024-04-25 03:28:51.280914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:16.889 qpair failed and we were unable to recover it.
00:28:16.889 [2024-04-25 03:28:51.281142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.889 [2024-04-25 03:28:51.281367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.889 [2024-04-25 03:28:51.281393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:16.889 qpair failed and we were unable to recover it.
00:28:16.889 [2024-04-25 03:28:51.281595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.889 [2024-04-25 03:28:51.281774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.889 [2024-04-25 03:28:51.281800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:16.889 qpair failed and we were unable to recover it.
00:28:16.889 [2024-04-25 03:28:51.281984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.889 [2024-04-25 03:28:51.282223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.890 [2024-04-25 03:28:51.282248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:16.890 qpair failed and we were unable to recover it.
00:28:16.890 [2024-04-25 03:28:51.282471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.890 [2024-04-25 03:28:51.282672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.890 [2024-04-25 03:28:51.282703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:16.890 qpair failed and we were unable to recover it.
00:28:16.890 [2024-04-25 03:28:51.282878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.890 [2024-04-25 03:28:51.283052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.890 [2024-04-25 03:28:51.283078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:16.890 qpair failed and we were unable to recover it.
00:28:16.890 [2024-04-25 03:28:51.283300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.890 [2024-04-25 03:28:51.283556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.890 [2024-04-25 03:28:51.283581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:16.890 qpair failed and we were unable to recover it.
00:28:16.890 [2024-04-25 03:28:51.283776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.890 [2024-04-25 03:28:51.283955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.890 [2024-04-25 03:28:51.283994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:16.890 qpair failed and we were unable to recover it.
00:28:16.890 [2024-04-25 03:28:51.284208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.890 [2024-04-25 03:28:51.284484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.890 [2024-04-25 03:28:51.284509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:16.890 qpair failed and we were unable to recover it.
00:28:16.890 [2024-04-25 03:28:51.284743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.890 [2024-04-25 03:28:51.284920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.890 [2024-04-25 03:28:51.284944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:16.890 qpair failed and we were unable to recover it.
00:28:16.890 [2024-04-25 03:28:51.285160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.890 [2024-04-25 03:28:51.285489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.890 [2024-04-25 03:28:51.285535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:16.890 qpair failed and we were unable to recover it.
00:28:16.890 [2024-04-25 03:28:51.285745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.890 [2024-04-25 03:28:51.285954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.890 [2024-04-25 03:28:51.285999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:16.890 qpair failed and we were unable to recover it.
00:28:16.890 [2024-04-25 03:28:51.286258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.890 [2024-04-25 03:28:51.286521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.890 [2024-04-25 03:28:51.286549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:16.890 qpair failed and we were unable to recover it.
00:28:16.890 [2024-04-25 03:28:51.286755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.890 [2024-04-25 03:28:51.286999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.890 [2024-04-25 03:28:51.287044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:16.890 qpair failed and we were unable to recover it.
00:28:16.890 [2024-04-25 03:28:51.287330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.890 [2024-04-25 03:28:51.287575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.890 [2024-04-25 03:28:51.287611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:16.890 qpair failed and we were unable to recover it.
00:28:16.890 [2024-04-25 03:28:51.287811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.890 [2024-04-25 03:28:51.288004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.890 [2024-04-25 03:28:51.288031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:16.890 qpair failed and we were unable to recover it.
00:28:16.890 [2024-04-25 03:28:51.288313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.890 [2024-04-25 03:28:51.288505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.890 [2024-04-25 03:28:51.288529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:16.890 qpair failed and we were unable to recover it.
00:28:16.890 [2024-04-25 03:28:51.288738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.890 [2024-04-25 03:28:51.288986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.890 [2024-04-25 03:28:51.289029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:16.890 qpair failed and we were unable to recover it.
00:28:16.890 [2024-04-25 03:28:51.289291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.890 [2024-04-25 03:28:51.289530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.890 [2024-04-25 03:28:51.289555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:16.890 qpair failed and we were unable to recover it.
00:28:16.890 [2024-04-25 03:28:51.289752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.890 [2024-04-25 03:28:51.289974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.890 [2024-04-25 03:28:51.290002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:16.890 qpair failed and we were unable to recover it.
00:28:16.890 [2024-04-25 03:28:51.290213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.890 [2024-04-25 03:28:51.290480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.890 [2024-04-25 03:28:51.290525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:16.890 qpair failed and we were unable to recover it.
00:28:16.890 [2024-04-25 03:28:51.290754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.890 [2024-04-25 03:28:51.291001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.890 [2024-04-25 03:28:51.291047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:16.890 qpair failed and we were unable to recover it.
00:28:16.890 [2024-04-25 03:28:51.291303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.890 [2024-04-25 03:28:51.291511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.890 [2024-04-25 03:28:51.291535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:16.890 qpair failed and we were unable to recover it.
00:28:16.890 [2024-04-25 03:28:51.291733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.890 [2024-04-25 03:28:51.291932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.890 [2024-04-25 03:28:51.291961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:16.890 qpair failed and we were unable to recover it.
00:28:16.890 [2024-04-25 03:28:51.292217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.890 [2024-04-25 03:28:51.292445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.890 [2024-04-25 03:28:51.292474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:16.890 qpair failed and we were unable to recover it.
00:28:16.890 [2024-04-25 03:28:51.292698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.890 [2024-04-25 03:28:51.292909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.890 [2024-04-25 03:28:51.292936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:16.890 qpair failed and we were unable to recover it.
00:28:16.890 [2024-04-25 03:28:51.293198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.890 [2024-04-25 03:28:51.293436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.890 [2024-04-25 03:28:51.293461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:16.890 qpair failed and we were unable to recover it.
00:28:16.890 [2024-04-25 03:28:51.293717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.890 [2024-04-25 03:28:51.293940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.890 [2024-04-25 03:28:51.293967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:16.890 qpair failed and we were unable to recover it.
00:28:16.890 [2024-04-25 03:28:51.294205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.890 [2024-04-25 03:28:51.294479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.890 [2024-04-25 03:28:51.294530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:16.890 qpair failed and we were unable to recover it.
00:28:16.890 [2024-04-25 03:28:51.294777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.890 [2024-04-25 03:28:51.295012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.890 [2024-04-25 03:28:51.295039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:16.890 qpair failed and we were unable to recover it.
00:28:16.890 [2024-04-25 03:28:51.295313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.890 [2024-04-25 03:28:51.295586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.890 [2024-04-25 03:28:51.295611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:16.890 qpair failed and we were unable to recover it.
00:28:16.890 [2024-04-25 03:28:51.295788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.890 [2024-04-25 03:28:51.296058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.890 [2024-04-25 03:28:51.296112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:16.890 qpair failed and we were unable to recover it.
00:28:16.890 [2024-04-25 03:28:51.296322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.891 [2024-04-25 03:28:51.296524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.891 [2024-04-25 03:28:51.296549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:16.891 qpair failed and we were unable to recover it.
00:28:16.891 [2024-04-25 03:28:51.296734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.891 [2024-04-25 03:28:51.296938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.891 [2024-04-25 03:28:51.296971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:16.891 qpair failed and we were unable to recover it.
00:28:16.891 [2024-04-25 03:28:51.297249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.891 [2024-04-25 03:28:51.297561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.891 [2024-04-25 03:28:51.297604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:16.891 qpair failed and we were unable to recover it.
00:28:16.891 [2024-04-25 03:28:51.297819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.891 [2024-04-25 03:28:51.298019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.891 [2024-04-25 03:28:51.298048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:16.891 qpair failed and we were unable to recover it.
00:28:16.891 [2024-04-25 03:28:51.298292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.891 [2024-04-25 03:28:51.298548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.891 [2024-04-25 03:28:51.298586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:16.891 qpair failed and we were unable to recover it.
00:28:16.891 [2024-04-25 03:28:51.298791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.891 [2024-04-25 03:28:51.299005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.891 [2024-04-25 03:28:51.299050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.891 qpair failed and we were unable to recover it. 00:28:16.891 [2024-04-25 03:28:51.299339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.891 [2024-04-25 03:28:51.299587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.891 [2024-04-25 03:28:51.299619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.891 qpair failed and we were unable to recover it. 00:28:16.891 [2024-04-25 03:28:51.299817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.891 [2024-04-25 03:28:51.299986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.891 [2024-04-25 03:28:51.300026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.891 qpair failed and we were unable to recover it. 00:28:16.891 [2024-04-25 03:28:51.300280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.891 [2024-04-25 03:28:51.300509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.891 [2024-04-25 03:28:51.300533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.891 qpair failed and we were unable to recover it. 
00:28:16.891 [2024-04-25 03:28:51.300799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.891 [2024-04-25 03:28:51.301052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.891 [2024-04-25 03:28:51.301080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.891 qpair failed and we were unable to recover it. 00:28:16.891 [2024-04-25 03:28:51.301335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.891 [2024-04-25 03:28:51.301597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.891 [2024-04-25 03:28:51.301623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.891 qpair failed and we were unable to recover it. 00:28:16.891 [2024-04-25 03:28:51.301818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.891 [2024-04-25 03:28:51.302062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.891 [2024-04-25 03:28:51.302109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.891 qpair failed and we were unable to recover it. 00:28:16.891 [2024-04-25 03:28:51.302328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.891 [2024-04-25 03:28:51.302564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.891 [2024-04-25 03:28:51.302589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.891 qpair failed and we were unable to recover it. 
00:28:16.891 [2024-04-25 03:28:51.302779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.891 [2024-04-25 03:28:51.303045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.891 [2024-04-25 03:28:51.303089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.891 qpair failed and we were unable to recover it. 00:28:16.891 [2024-04-25 03:28:51.303343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.891 [2024-04-25 03:28:51.303595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.891 [2024-04-25 03:28:51.303620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.891 qpair failed and we were unable to recover it. 00:28:16.891 [2024-04-25 03:28:51.303798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.891 [2024-04-25 03:28:51.304043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.891 [2024-04-25 03:28:51.304089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.891 qpair failed and we were unable to recover it. 00:28:16.891 [2024-04-25 03:28:51.304348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.891 [2024-04-25 03:28:51.304584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.891 [2024-04-25 03:28:51.304609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.891 qpair failed and we were unable to recover it. 
00:28:16.891 [2024-04-25 03:28:51.304829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.891 [2024-04-25 03:28:51.305060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.891 [2024-04-25 03:28:51.305087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.891 qpair failed and we were unable to recover it. 00:28:16.891 [2024-04-25 03:28:51.305309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.891 [2024-04-25 03:28:51.305540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.891 [2024-04-25 03:28:51.305568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.891 qpair failed and we were unable to recover it. 00:28:16.891 [2024-04-25 03:28:51.305801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.891 [2024-04-25 03:28:51.306031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.891 [2024-04-25 03:28:51.306061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.891 qpair failed and we were unable to recover it. 00:28:16.891 [2024-04-25 03:28:51.306327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.891 [2024-04-25 03:28:51.306513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.891 [2024-04-25 03:28:51.306539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.891 qpair failed and we were unable to recover it. 
00:28:16.891 [2024-04-25 03:28:51.306787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.891 [2024-04-25 03:28:51.307043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.891 [2024-04-25 03:28:51.307088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.891 qpair failed and we were unable to recover it. 00:28:16.891 [2024-04-25 03:28:51.307508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.891 [2024-04-25 03:28:51.307744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.891 [2024-04-25 03:28:51.307768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.891 qpair failed and we were unable to recover it. 00:28:16.891 [2024-04-25 03:28:51.307997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.891 [2024-04-25 03:28:51.308278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.891 [2024-04-25 03:28:51.308315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.891 qpair failed and we were unable to recover it. 00:28:16.891 [2024-04-25 03:28:51.308565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.891 [2024-04-25 03:28:51.308768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.891 [2024-04-25 03:28:51.308792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.891 qpair failed and we were unable to recover it. 
00:28:16.891 [2024-04-25 03:28:51.309078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.891 [2024-04-25 03:28:51.309361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.891 [2024-04-25 03:28:51.309406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.891 qpair failed and we were unable to recover it. 00:28:16.891 [2024-04-25 03:28:51.309634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.891 [2024-04-25 03:28:51.309808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.891 [2024-04-25 03:28:51.309833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.891 qpair failed and we were unable to recover it. 00:28:16.891 [2024-04-25 03:28:51.310083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.891 [2024-04-25 03:28:51.310390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.891 [2024-04-25 03:28:51.310436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.891 qpair failed and we were unable to recover it. 00:28:16.891 [2024-04-25 03:28:51.310686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.891 [2024-04-25 03:28:51.310860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.891 [2024-04-25 03:28:51.310884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.891 qpair failed and we were unable to recover it. 
00:28:16.892 [2024-04-25 03:28:51.311127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.892 [2024-04-25 03:28:51.311392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.892 [2024-04-25 03:28:51.311438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.892 qpair failed and we were unable to recover it. 00:28:16.892 [2024-04-25 03:28:51.311700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.892 [2024-04-25 03:28:51.311922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.892 [2024-04-25 03:28:51.311949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.892 qpair failed and we were unable to recover it. 00:28:16.892 [2024-04-25 03:28:51.312216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.892 [2024-04-25 03:28:51.312476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.892 [2024-04-25 03:28:51.312521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.892 qpair failed and we were unable to recover it. 00:28:16.892 [2024-04-25 03:28:51.312724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.892 [2024-04-25 03:28:51.312957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.892 [2024-04-25 03:28:51.313002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.892 qpair failed and we were unable to recover it. 
00:28:16.892 [2024-04-25 03:28:51.313289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.892 [2024-04-25 03:28:51.313525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.892 [2024-04-25 03:28:51.313549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.892 qpair failed and we were unable to recover it. 00:28:16.892 [2024-04-25 03:28:51.313778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.892 [2024-04-25 03:28:51.314041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.892 [2024-04-25 03:28:51.314085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.892 qpair failed and we were unable to recover it. 00:28:16.892 [2024-04-25 03:28:51.314362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.892 [2024-04-25 03:28:51.314601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.892 [2024-04-25 03:28:51.314626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.892 qpair failed and we were unable to recover it. 00:28:16.892 [2024-04-25 03:28:51.314843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.892 [2024-04-25 03:28:51.315120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.892 [2024-04-25 03:28:51.315165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.892 qpair failed and we were unable to recover it. 
00:28:16.892 [2024-04-25 03:28:51.315427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.892 [2024-04-25 03:28:51.315700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.892 [2024-04-25 03:28:51.315725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.892 qpair failed and we were unable to recover it. 00:28:16.892 [2024-04-25 03:28:51.315932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.892 [2024-04-25 03:28:51.316156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.892 [2024-04-25 03:28:51.316201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.892 qpair failed and we were unable to recover it. 00:28:16.892 [2024-04-25 03:28:51.316468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.892 [2024-04-25 03:28:51.316689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.892 [2024-04-25 03:28:51.316715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.892 qpair failed and we were unable to recover it. 00:28:16.892 [2024-04-25 03:28:51.316916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.892 [2024-04-25 03:28:51.317121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.892 [2024-04-25 03:28:51.317146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.892 qpair failed and we were unable to recover it. 
00:28:16.892 [2024-04-25 03:28:51.317335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.892 [2024-04-25 03:28:51.317529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.892 [2024-04-25 03:28:51.317554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.892 qpair failed and we were unable to recover it. 00:28:16.892 [2024-04-25 03:28:51.317776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.892 [2024-04-25 03:28:51.317940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.892 [2024-04-25 03:28:51.317965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.892 qpair failed and we were unable to recover it. 00:28:16.892 [2024-04-25 03:28:51.318194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.892 [2024-04-25 03:28:51.318365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.892 [2024-04-25 03:28:51.318390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.892 qpair failed and we were unable to recover it. 00:28:16.892 [2024-04-25 03:28:51.318589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.892 [2024-04-25 03:28:51.318771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.892 [2024-04-25 03:28:51.318796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.892 qpair failed and we were unable to recover it. 
00:28:16.892 [2024-04-25 03:28:51.318997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.892 [2024-04-25 03:28:51.319168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.892 [2024-04-25 03:28:51.319194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.892 qpair failed and we were unable to recover it. 00:28:16.892 [2024-04-25 03:28:51.319394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.892 [2024-04-25 03:28:51.319592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.892 [2024-04-25 03:28:51.319619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.892 qpair failed and we were unable to recover it. 00:28:16.892 [2024-04-25 03:28:51.319828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.892 [2024-04-25 03:28:51.320025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.892 [2024-04-25 03:28:51.320051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.892 qpair failed and we were unable to recover it. 00:28:16.892 [2024-04-25 03:28:51.320280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.892 [2024-04-25 03:28:51.320472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.892 [2024-04-25 03:28:51.320498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.892 qpair failed and we were unable to recover it. 
00:28:16.892 [2024-04-25 03:28:51.320696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.892 [2024-04-25 03:28:51.320893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.892 [2024-04-25 03:28:51.320918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.892 qpair failed and we were unable to recover it. 00:28:16.892 [2024-04-25 03:28:51.321121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.892 [2024-04-25 03:28:51.321314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.892 [2024-04-25 03:28:51.321340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.892 qpair failed and we were unable to recover it. 00:28:16.892 [2024-04-25 03:28:51.321532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.892 [2024-04-25 03:28:51.321753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.892 [2024-04-25 03:28:51.321778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.892 qpair failed and we were unable to recover it. 00:28:16.892 [2024-04-25 03:28:51.322002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.892 [2024-04-25 03:28:51.322196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.892 [2024-04-25 03:28:51.322220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.892 qpair failed and we were unable to recover it. 
00:28:16.892 [2024-04-25 03:28:51.322490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.892 [2024-04-25 03:28:51.322679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.892 [2024-04-25 03:28:51.322706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.892 qpair failed and we were unable to recover it. 00:28:16.892 [2024-04-25 03:28:51.322896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.892 [2024-04-25 03:28:51.323062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.892 [2024-04-25 03:28:51.323087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.892 qpair failed and we were unable to recover it. 00:28:16.892 [2024-04-25 03:28:51.323301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.892 [2024-04-25 03:28:51.323540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.892 [2024-04-25 03:28:51.323565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.892 qpair failed and we were unable to recover it. 00:28:16.892 [2024-04-25 03:28:51.323765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.892 [2024-04-25 03:28:51.324055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.892 [2024-04-25 03:28:51.324079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.892 qpair failed and we were unable to recover it. 
00:28:16.892 [2024-04-25 03:28:51.324286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.893 [2024-04-25 03:28:51.324520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.893 [2024-04-25 03:28:51.324545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.893 qpair failed and we were unable to recover it. 00:28:16.893 [2024-04-25 03:28:51.324736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.893 [2024-04-25 03:28:51.324931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.893 [2024-04-25 03:28:51.324955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.893 qpair failed and we were unable to recover it. 00:28:16.893 [2024-04-25 03:28:51.325243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.893 [2024-04-25 03:28:51.325490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.893 [2024-04-25 03:28:51.325514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.893 qpair failed and we were unable to recover it. 00:28:16.893 [2024-04-25 03:28:51.325845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.893 [2024-04-25 03:28:51.326091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.893 [2024-04-25 03:28:51.326130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.893 qpair failed and we were unable to recover it. 
00:28:16.893 [2024-04-25 03:28:51.326390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.893 [2024-04-25 03:28:51.326609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.893 [2024-04-25 03:28:51.326668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.893 qpair failed and we were unable to recover it. 00:28:16.893 [2024-04-25 03:28:51.326925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.893 [2024-04-25 03:28:51.327158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.893 [2024-04-25 03:28:51.327182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.893 qpair failed and we were unable to recover it. 00:28:16.893 [2024-04-25 03:28:51.327352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.893 [2024-04-25 03:28:51.327654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.893 [2024-04-25 03:28:51.327680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.893 qpair failed and we were unable to recover it. 00:28:16.893 [2024-04-25 03:28:51.327921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.893 [2024-04-25 03:28:51.328091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.893 [2024-04-25 03:28:51.328130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.893 qpair failed and we were unable to recover it. 
00:28:16.893 [2024-04-25 03:28:51.328305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.893 [2024-04-25 03:28:51.328549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.893 [2024-04-25 03:28:51.328574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.893 qpair failed and we were unable to recover it. 00:28:16.893 [2024-04-25 03:28:51.328843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.893 [2024-04-25 03:28:51.329056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.893 [2024-04-25 03:28:51.329080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.893 qpair failed and we were unable to recover it. 00:28:16.893 [2024-04-25 03:28:51.329288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.893 [2024-04-25 03:28:51.329491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.893 [2024-04-25 03:28:51.329515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.893 qpair failed and we were unable to recover it. 00:28:16.893 [2024-04-25 03:28:51.329723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.893 [2024-04-25 03:28:51.329946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.893 [2024-04-25 03:28:51.329970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.893 qpair failed and we were unable to recover it. 
00:28:16.893 [2024-04-25 03:28:51.330234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.893 [2024-04-25 03:28:51.330533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.893 [2024-04-25 03:28:51.330578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.893 qpair failed and we were unable to recover it. 00:28:16.893 [2024-04-25 03:28:51.330863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.893 [2024-04-25 03:28:51.331063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.893 [2024-04-25 03:28:51.331087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.893 qpair failed and we were unable to recover it. 00:28:16.893 [2024-04-25 03:28:51.331283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.893 [2024-04-25 03:28:51.331479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.893 [2024-04-25 03:28:51.331504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.893 qpair failed and we were unable to recover it. 00:28:16.893 [2024-04-25 03:28:51.331731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.893 [2024-04-25 03:28:51.331938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.893 [2024-04-25 03:28:51.331962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.893 qpair failed and we were unable to recover it. 
00:28:16.893 [2024-04-25 03:28:51.332163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.893 [2024-04-25 03:28:51.332350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.893 [2024-04-25 03:28:51.332374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.893 qpair failed and we were unable to recover it. 00:28:16.893 [2024-04-25 03:28:51.332586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.893 [2024-04-25 03:28:51.332821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.893 [2024-04-25 03:28:51.332846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.893 qpair failed and we were unable to recover it. 00:28:16.893 [2024-04-25 03:28:51.333074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.893 [2024-04-25 03:28:51.333264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.893 [2024-04-25 03:28:51.333288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.893 qpair failed and we were unable to recover it. 00:28:16.893 [2024-04-25 03:28:51.333514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.893 [2024-04-25 03:28:51.333735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.893 [2024-04-25 03:28:51.333760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.893 qpair failed and we were unable to recover it. 
00:28:16.893 [2024-04-25 03:28:51.333928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.893 [2024-04-25 03:28:51.334118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.893 [2024-04-25 03:28:51.334143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.893 qpair failed and we were unable to recover it. 00:28:16.893 [2024-04-25 03:28:51.334348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.893 [2024-04-25 03:28:51.334570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.893 [2024-04-25 03:28:51.334595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.893 qpair failed and we were unable to recover it. 00:28:16.893 [2024-04-25 03:28:51.334880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.893 [2024-04-25 03:28:51.335082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.893 [2024-04-25 03:28:51.335106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.893 qpair failed and we were unable to recover it. 00:28:16.893 [2024-04-25 03:28:51.335370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.893 [2024-04-25 03:28:51.335595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.893 [2024-04-25 03:28:51.335631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.893 qpair failed and we were unable to recover it. 
00:28:16.893 [2024-04-25 03:28:51.335834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.893 [2024-04-25 03:28:51.336008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.894 [2024-04-25 03:28:51.336031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.894 qpair failed and we were unable to recover it. 00:28:16.894 [2024-04-25 03:28:51.336269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.894 [2024-04-25 03:28:51.336463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.894 [2024-04-25 03:28:51.336488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.894 qpair failed and we were unable to recover it. 00:28:16.894 [2024-04-25 03:28:51.336653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.894 [2024-04-25 03:28:51.336856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.894 [2024-04-25 03:28:51.336880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.894 qpair failed and we were unable to recover it. 00:28:16.894 [2024-04-25 03:28:51.337096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.894 [2024-04-25 03:28:51.337298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.894 [2024-04-25 03:28:51.337322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.894 qpair failed and we were unable to recover it. 
00:28:16.894 [2024-04-25 03:28:51.337555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.894 [2024-04-25 03:28:51.337773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.894 [2024-04-25 03:28:51.337799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.894 qpair failed and we were unable to recover it. 00:28:16.894 [2024-04-25 03:28:51.337995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.894 [2024-04-25 03:28:51.338205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.894 [2024-04-25 03:28:51.338230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.894 qpair failed and we were unable to recover it. 00:28:16.894 [2024-04-25 03:28:51.338463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.894 [2024-04-25 03:28:51.338639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.894 [2024-04-25 03:28:51.338665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.894 qpair failed and we were unable to recover it. 00:28:16.894 [2024-04-25 03:28:51.338840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.894 [2024-04-25 03:28:51.339053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.894 [2024-04-25 03:28:51.339076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.894 qpair failed and we were unable to recover it. 
00:28:16.894 [2024-04-25 03:28:51.339293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.894 [2024-04-25 03:28:51.339517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.894 [2024-04-25 03:28:51.339541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.894 qpair failed and we were unable to recover it. 00:28:16.894 [2024-04-25 03:28:51.339741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.894 [2024-04-25 03:28:51.339939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.894 [2024-04-25 03:28:51.339964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.894 qpair failed and we were unable to recover it. 00:28:16.894 [2024-04-25 03:28:51.340170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.894 [2024-04-25 03:28:51.340455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.894 [2024-04-25 03:28:51.340479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.894 qpair failed and we were unable to recover it. 00:28:16.894 [2024-04-25 03:28:51.340679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.894 [2024-04-25 03:28:51.340859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.894 [2024-04-25 03:28:51.340883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.894 qpair failed and we were unable to recover it. 
00:28:16.894 [2024-04-25 03:28:51.341064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.894 [2024-04-25 03:28:51.341293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.894 [2024-04-25 03:28:51.341318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.894 qpair failed and we were unable to recover it. 00:28:16.894 [2024-04-25 03:28:51.341496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.894 [2024-04-25 03:28:51.341724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.894 [2024-04-25 03:28:51.341750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.894 qpair failed and we were unable to recover it. 00:28:16.894 [2024-04-25 03:28:51.341985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.894 [2024-04-25 03:28:51.342180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.894 [2024-04-25 03:28:51.342204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.894 qpair failed and we were unable to recover it. 00:28:16.894 [2024-04-25 03:28:51.342404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.894 [2024-04-25 03:28:51.342590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.894 [2024-04-25 03:28:51.342620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.894 qpair failed and we were unable to recover it. 
00:28:16.894 [2024-04-25 03:28:51.342838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.894 [2024-04-25 03:28:51.343035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.894 [2024-04-25 03:28:51.343060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.894 qpair failed and we were unable to recover it. 00:28:16.894 [2024-04-25 03:28:51.343259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.894 [2024-04-25 03:28:51.343510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.894 [2024-04-25 03:28:51.343535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.894 qpair failed and we were unable to recover it. 00:28:16.894 [2024-04-25 03:28:51.343750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.894 [2024-04-25 03:28:51.343928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.894 [2024-04-25 03:28:51.343953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.894 qpair failed and we were unable to recover it. 00:28:16.894 [2024-04-25 03:28:51.344149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.894 [2024-04-25 03:28:51.344359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.894 [2024-04-25 03:28:51.344387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.894 qpair failed and we were unable to recover it. 
00:28:16.894 [2024-04-25 03:28:51.344625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.894 [2024-04-25 03:28:51.344831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.894 [2024-04-25 03:28:51.344856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.894 qpair failed and we were unable to recover it. 00:28:16.894 [2024-04-25 03:28:51.345071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.894 [2024-04-25 03:28:51.345279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.894 [2024-04-25 03:28:51.345304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.894 qpair failed and we were unable to recover it. 00:28:16.894 [2024-04-25 03:28:51.345514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.894 [2024-04-25 03:28:51.345722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.894 [2024-04-25 03:28:51.345748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.894 qpair failed and we were unable to recover it. 00:28:16.894 [2024-04-25 03:28:51.345945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.894 [2024-04-25 03:28:51.346167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.894 [2024-04-25 03:28:51.346192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.894 qpair failed and we were unable to recover it. 
00:28:16.894 [2024-04-25 03:28:51.346435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.894 [2024-04-25 03:28:51.346691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.894 [2024-04-25 03:28:51.346717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.894 qpair failed and we were unable to recover it. 00:28:16.894 [2024-04-25 03:28:51.346953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.894 [2024-04-25 03:28:51.347140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.894 [2024-04-25 03:28:51.347164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.894 qpair failed and we were unable to recover it. 00:28:16.894 [2024-04-25 03:28:51.347381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.894 [2024-04-25 03:28:51.347605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.894 [2024-04-25 03:28:51.347636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.894 qpair failed and we were unable to recover it. 00:28:16.894 [2024-04-25 03:28:51.347837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.894 [2024-04-25 03:28:51.348050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.894 [2024-04-25 03:28:51.348076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.894 qpair failed and we were unable to recover it. 
00:28:16.894 [2024-04-25 03:28:51.348305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.894 [2024-04-25 03:28:51.348547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.894 [2024-04-25 03:28:51.348592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.894 qpair failed and we were unable to recover it. 00:28:16.894 [2024-04-25 03:28:51.348890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.894 [2024-04-25 03:28:51.349166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.895 [2024-04-25 03:28:51.349191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.895 qpair failed and we were unable to recover it. 00:28:16.895 [2024-04-25 03:28:51.349413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.895 [2024-04-25 03:28:51.349641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.895 [2024-04-25 03:28:51.349666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.895 qpair failed and we were unable to recover it. 00:28:16.895 [2024-04-25 03:28:51.349854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.895 [2024-04-25 03:28:51.350041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.895 [2024-04-25 03:28:51.350065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.895 qpair failed and we were unable to recover it. 
00:28:16.895 [2024-04-25 03:28:51.350261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.895 [2024-04-25 03:28:51.350488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.895 [2024-04-25 03:28:51.350513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.895 qpair failed and we were unable to recover it. 00:28:16.895 [2024-04-25 03:28:51.350736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.895 [2024-04-25 03:28:51.350922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.895 [2024-04-25 03:28:51.350962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.895 qpair failed and we were unable to recover it. 00:28:16.895 [2024-04-25 03:28:51.351236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.895 [2024-04-25 03:28:51.351434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.895 [2024-04-25 03:28:51.351458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.895 qpair failed and we were unable to recover it. 00:28:16.895 [2024-04-25 03:28:51.351648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.895 [2024-04-25 03:28:51.351880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.895 [2024-04-25 03:28:51.351904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.895 qpair failed and we were unable to recover it. 
00:28:16.895 [2024-04-25 03:28:51.352115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.895 [2024-04-25 03:28:51.352351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.895 [2024-04-25 03:28:51.352375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.895 qpair failed and we were unable to recover it. 00:28:16.895 [2024-04-25 03:28:51.352570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.895 [2024-04-25 03:28:51.352834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.895 [2024-04-25 03:28:51.352860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.895 qpair failed and we were unable to recover it. 00:28:16.895 [2024-04-25 03:28:51.353061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.895 [2024-04-25 03:28:51.353531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.895 [2024-04-25 03:28:51.353584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.895 qpair failed and we were unable to recover it. 00:28:16.895 [2024-04-25 03:28:51.353858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.895 [2024-04-25 03:28:51.354086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.895 [2024-04-25 03:28:51.354111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.895 qpair failed and we were unable to recover it. 
00:28:16.895 [2024-04-25 03:28:51.354348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.895 [2024-04-25 03:28:51.354512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.895 [2024-04-25 03:28:51.354535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.895 qpair failed and we were unable to recover it. 00:28:16.895 [2024-04-25 03:28:51.354743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.895 [2024-04-25 03:28:51.354947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.895 [2024-04-25 03:28:51.354971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.895 qpair failed and we were unable to recover it. 00:28:16.895 [2024-04-25 03:28:51.355168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.895 [2024-04-25 03:28:51.355374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.895 [2024-04-25 03:28:51.355402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.895 qpair failed and we were unable to recover it. 00:28:16.895 [2024-04-25 03:28:51.355608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.895 [2024-04-25 03:28:51.355820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.895 [2024-04-25 03:28:51.355845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.895 qpair failed and we were unable to recover it. 
00:28:16.895 [2024-04-25 03:28:51.356044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.895 [2024-04-25 03:28:51.356244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.895 [2024-04-25 03:28:51.356275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.895 qpair failed and we were unable to recover it. 00:28:16.895 [2024-04-25 03:28:51.356471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.895 [2024-04-25 03:28:51.356668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.895 [2024-04-25 03:28:51.356693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.895 qpair failed and we were unable to recover it. 00:28:16.895 [2024-04-25 03:28:51.356866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.895 [2024-04-25 03:28:51.357051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.895 [2024-04-25 03:28:51.357075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.895 qpair failed and we were unable to recover it. 00:28:16.895 [2024-04-25 03:28:51.357242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.895 [2024-04-25 03:28:51.357451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.895 [2024-04-25 03:28:51.357476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.895 qpair failed and we were unable to recover it. 
00:28:16.895 [2024-04-25 03:28:51.357710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.895 [2024-04-25 03:28:51.357905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.895 [2024-04-25 03:28:51.357931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.895 qpair failed and we were unable to recover it. 00:28:16.895 [2024-04-25 03:28:51.358257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.895 [2024-04-25 03:28:51.358521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.895 [2024-04-25 03:28:51.358544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.895 qpair failed and we were unable to recover it. 00:28:16.895 [2024-04-25 03:28:51.358757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.895 [2024-04-25 03:28:51.358965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.895 [2024-04-25 03:28:51.358991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.895 qpair failed and we were unable to recover it. 00:28:16.895 [2024-04-25 03:28:51.359213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.895 [2024-04-25 03:28:51.359448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.895 [2024-04-25 03:28:51.359473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.895 qpair failed and we were unable to recover it. 
00:28:16.895 [2024-04-25 03:28:51.359707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.895 [2024-04-25 03:28:51.359907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.895 [2024-04-25 03:28:51.359935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.895 qpair failed and we were unable to recover it. 00:28:16.895 [2024-04-25 03:28:51.360123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.895 [2024-04-25 03:28:51.360307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.895 [2024-04-25 03:28:51.360331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.895 qpair failed and we were unable to recover it. 00:28:16.895 [2024-04-25 03:28:51.360545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.895 [2024-04-25 03:28:51.360742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.895 [2024-04-25 03:28:51.360768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.895 qpair failed and we were unable to recover it. 00:28:16.895 [2024-04-25 03:28:51.360989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.895 [2024-04-25 03:28:51.361178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.895 [2024-04-25 03:28:51.361203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.895 qpair failed and we were unable to recover it. 
00:28:16.895 [2024-04-25 03:28:51.361437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.895 [2024-04-25 03:28:51.361619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.895 [2024-04-25 03:28:51.361649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.895 qpair failed and we were unable to recover it. 00:28:16.895 [2024-04-25 03:28:51.361847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.895 [2024-04-25 03:28:51.362043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.895 [2024-04-25 03:28:51.362069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.895 qpair failed and we were unable to recover it. 00:28:16.895 [2024-04-25 03:28:51.362269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.895 [2024-04-25 03:28:51.362484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.896 [2024-04-25 03:28:51.362511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.896 qpair failed and we were unable to recover it. 00:28:16.896 [2024-04-25 03:28:51.362762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.896 [2024-04-25 03:28:51.362926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.896 [2024-04-25 03:28:51.362951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.896 qpair failed and we were unable to recover it. 
00:28:16.896 [2024-04-25 03:28:51.363163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.896 [2024-04-25 03:28:51.363357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.896 [2024-04-25 03:28:51.363382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.896 qpair failed and we were unable to recover it. 00:28:16.896 [2024-04-25 03:28:51.363585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.896 [2024-04-25 03:28:51.363798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.896 [2024-04-25 03:28:51.363829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.896 qpair failed and we were unable to recover it. 00:28:16.896 [2024-04-25 03:28:51.364028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.896 [2024-04-25 03:28:51.364238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.896 [2024-04-25 03:28:51.364266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:16.896 qpair failed and we were unable to recover it. 00:28:17.168 [2024-04-25 03:28:51.364463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.168 [2024-04-25 03:28:51.364652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.168 [2024-04-25 03:28:51.364688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.168 qpair failed and we were unable to recover it. 
00:28:17.168 [2024-04-25 03:28:51.364919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.168 [2024-04-25 03:28:51.365175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.168 [2024-04-25 03:28:51.365202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.168 qpair failed and we were unable to recover it. 00:28:17.168 [2024-04-25 03:28:51.365403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.168 [2024-04-25 03:28:51.365606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.168 [2024-04-25 03:28:51.365643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.168 qpair failed and we were unable to recover it. 00:28:17.168 [2024-04-25 03:28:51.365842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.168 [2024-04-25 03:28:51.366008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.168 [2024-04-25 03:28:51.366049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.168 qpair failed and we were unable to recover it. 00:28:17.168 [2024-04-25 03:28:51.366236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.168 [2024-04-25 03:28:51.366535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.168 [2024-04-25 03:28:51.366561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.168 qpair failed and we were unable to recover it. 
00:28:17.168 [2024-04-25 03:28:51.366811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.168 [2024-04-25 03:28:51.367020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.168 [2024-04-25 03:28:51.367045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.168 qpair failed and we were unable to recover it.
00:28:17.168 [2024-04-25 03:28:51.367244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.168 [2024-04-25 03:28:51.367437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.168 [2024-04-25 03:28:51.367461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.168 qpair failed and we were unable to recover it.
00:28:17.168 [2024-04-25 03:28:51.367666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.168 [2024-04-25 03:28:51.367873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.168 [2024-04-25 03:28:51.367899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.168 qpair failed and we were unable to recover it.
00:28:17.168 [2024-04-25 03:28:51.368098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.168 [2024-04-25 03:28:51.368326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.168 [2024-04-25 03:28:51.368351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.168 qpair failed and we were unable to recover it.
00:28:17.168 [2024-04-25 03:28:51.368536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.168 [2024-04-25 03:28:51.368778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.168 [2024-04-25 03:28:51.368823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.168 qpair failed and we were unable to recover it.
00:28:17.168 [2024-04-25 03:28:51.369063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.168 [2024-04-25 03:28:51.369272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.168 [2024-04-25 03:28:51.369297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.168 qpair failed and we were unable to recover it.
00:28:17.168 [2024-04-25 03:28:51.369502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.168 [2024-04-25 03:28:51.369810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.168 [2024-04-25 03:28:51.369838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.168 qpair failed and we were unable to recover it.
00:28:17.168 [2024-04-25 03:28:51.370088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.168 [2024-04-25 03:28:51.370309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.168 [2024-04-25 03:28:51.370333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.168 qpair failed and we were unable to recover it.
00:28:17.168 [2024-04-25 03:28:51.370540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.168 [2024-04-25 03:28:51.370775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.168 [2024-04-25 03:28:51.370800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.168 qpair failed and we were unable to recover it.
00:28:17.168 [2024-04-25 03:28:51.371029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.168 [2024-04-25 03:28:51.371226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.168 [2024-04-25 03:28:51.371251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.168 qpair failed and we were unable to recover it.
00:28:17.168 [2024-04-25 03:28:51.371437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.168 [2024-04-25 03:28:51.371638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.168 [2024-04-25 03:28:51.371663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.168 qpair failed and we were unable to recover it.
00:28:17.168 [2024-04-25 03:28:51.371862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.168 [2024-04-25 03:28:51.372091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.168 [2024-04-25 03:28:51.372115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.168 qpair failed and we were unable to recover it.
00:28:17.168 [2024-04-25 03:28:51.372306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.168 [2024-04-25 03:28:51.372583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.168 [2024-04-25 03:28:51.372607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.168 qpair failed and we were unable to recover it.
00:28:17.168 [2024-04-25 03:28:51.372827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.168 [2024-04-25 03:28:51.373008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.168 [2024-04-25 03:28:51.373032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.168 qpair failed and we were unable to recover it.
00:28:17.168 [2024-04-25 03:28:51.373244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.168 [2024-04-25 03:28:51.373477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.168 [2024-04-25 03:28:51.373502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.168 qpair failed and we were unable to recover it.
00:28:17.168 [2024-04-25 03:28:51.373686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.168 [2024-04-25 03:28:51.373914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.168 [2024-04-25 03:28:51.373937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.168 qpair failed and we were unable to recover it.
00:28:17.168 [2024-04-25 03:28:51.374158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.168 [2024-04-25 03:28:51.374388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.168 [2024-04-25 03:28:51.374412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.168 qpair failed and we were unable to recover it.
00:28:17.168 [2024-04-25 03:28:51.374659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.168 [2024-04-25 03:28:51.374877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.168 [2024-04-25 03:28:51.374901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.168 qpair failed and we were unable to recover it.
00:28:17.168 [2024-04-25 03:28:51.375096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.168 [2024-04-25 03:28:51.375312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.168 [2024-04-25 03:28:51.375336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.168 qpair failed and we were unable to recover it.
00:28:17.168 [2024-04-25 03:28:51.375553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.168 [2024-04-25 03:28:51.375765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.168 [2024-04-25 03:28:51.375790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.168 qpair failed and we were unable to recover it.
00:28:17.168 [2024-04-25 03:28:51.375959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.168 [2024-04-25 03:28:51.376184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.168 [2024-04-25 03:28:51.376209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.168 qpair failed and we were unable to recover it.
00:28:17.169 [2024-04-25 03:28:51.376401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.169 [2024-04-25 03:28:51.376575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.169 [2024-04-25 03:28:51.376599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.169 qpair failed and we were unable to recover it.
00:28:17.169 [2024-04-25 03:28:51.376801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.169 [2024-04-25 03:28:51.377032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.169 [2024-04-25 03:28:51.377056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.169 qpair failed and we were unable to recover it.
00:28:17.169 [2024-04-25 03:28:51.377268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.169 [2024-04-25 03:28:51.377438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.169 [2024-04-25 03:28:51.377462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.169 qpair failed and we were unable to recover it.
00:28:17.169 [2024-04-25 03:28:51.377708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.169 [2024-04-25 03:28:51.377890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.169 [2024-04-25 03:28:51.377914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.169 qpair failed and we were unable to recover it.
00:28:17.169 [2024-04-25 03:28:51.378135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.169 [2024-04-25 03:28:51.378388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.169 [2024-04-25 03:28:51.378427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.169 qpair failed and we were unable to recover it.
00:28:17.169 [2024-04-25 03:28:51.378637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.169 [2024-04-25 03:28:51.378849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.169 [2024-04-25 03:28:51.378874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.169 qpair failed and we were unable to recover it.
00:28:17.169 [2024-04-25 03:28:51.379074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.169 [2024-04-25 03:28:51.379270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.169 [2024-04-25 03:28:51.379296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.169 qpair failed and we were unable to recover it.
00:28:17.169 [2024-04-25 03:28:51.379563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.169 [2024-04-25 03:28:51.379802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.169 [2024-04-25 03:28:51.379827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.169 qpair failed and we were unable to recover it.
00:28:17.169 [2024-04-25 03:28:51.380031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.169 [2024-04-25 03:28:51.380281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.169 [2024-04-25 03:28:51.380321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.169 qpair failed and we were unable to recover it.
00:28:17.169 [2024-04-25 03:28:51.380562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.169 [2024-04-25 03:28:51.380778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.169 [2024-04-25 03:28:51.380806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.169 qpair failed and we were unable to recover it.
00:28:17.169 [2024-04-25 03:28:51.381050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.169 [2024-04-25 03:28:51.381400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.169 [2024-04-25 03:28:51.381445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.169 qpair failed and we were unable to recover it.
00:28:17.169 [2024-04-25 03:28:51.381705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.169 [2024-04-25 03:28:51.381893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.169 [2024-04-25 03:28:51.381918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.169 qpair failed and we were unable to recover it.
00:28:17.169 [2024-04-25 03:28:51.382093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.169 [2024-04-25 03:28:51.382329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.169 [2024-04-25 03:28:51.382353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.169 qpair failed and we were unable to recover it.
00:28:17.169 [2024-04-25 03:28:51.382589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.169 [2024-04-25 03:28:51.382787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.169 [2024-04-25 03:28:51.382812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.169 qpair failed and we were unable to recover it.
00:28:17.169 [2024-04-25 03:28:51.383012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.169 [2024-04-25 03:28:51.383222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.169 [2024-04-25 03:28:51.383246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.169 qpair failed and we were unable to recover it.
00:28:17.169 [2024-04-25 03:28:51.383440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.169 [2024-04-25 03:28:51.383686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.169 [2024-04-25 03:28:51.383711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.169 qpair failed and we were unable to recover it.
00:28:17.169 [2024-04-25 03:28:51.383897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.169 [2024-04-25 03:28:51.384111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.169 [2024-04-25 03:28:51.384136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.169 qpair failed and we were unable to recover it.
00:28:17.169 [2024-04-25 03:28:51.384378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.169 [2024-04-25 03:28:51.384574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.169 [2024-04-25 03:28:51.384599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.169 qpair failed and we were unable to recover it.
00:28:17.169 [2024-04-25 03:28:51.384769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.169 [2024-04-25 03:28:51.384966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.169 [2024-04-25 03:28:51.384992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.169 qpair failed and we were unable to recover it.
00:28:17.169 [2024-04-25 03:28:51.385229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.169 [2024-04-25 03:28:51.385473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.169 [2024-04-25 03:28:51.385513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.169 qpair failed and we were unable to recover it.
00:28:17.169 [2024-04-25 03:28:51.385723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.169 [2024-04-25 03:28:51.385960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.169 [2024-04-25 03:28:51.385984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.169 qpair failed and we were unable to recover it.
00:28:17.169 [2024-04-25 03:28:51.386196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.169 [2024-04-25 03:28:51.386393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.169 [2024-04-25 03:28:51.386417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.169 qpair failed and we were unable to recover it.
00:28:17.169 [2024-04-25 03:28:51.386639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.169 [2024-04-25 03:28:51.386856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.169 [2024-04-25 03:28:51.386880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.169 qpair failed and we were unable to recover it.
00:28:17.169 [2024-04-25 03:28:51.387138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.169 [2024-04-25 03:28:51.387335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.169 [2024-04-25 03:28:51.387361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.169 qpair failed and we were unable to recover it.
00:28:17.169 [2024-04-25 03:28:51.387563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.169 [2024-04-25 03:28:51.387756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.169 [2024-04-25 03:28:51.387782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.169 qpair failed and we were unable to recover it.
00:28:17.169 [2024-04-25 03:28:51.388004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.169 [2024-04-25 03:28:51.388235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.169 [2024-04-25 03:28:51.388260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.169 qpair failed and we were unable to recover it.
00:28:17.169 [2024-04-25 03:28:51.388455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.169 [2024-04-25 03:28:51.388663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.169 [2024-04-25 03:28:51.388693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.169 qpair failed and we were unable to recover it.
00:28:17.169 [2024-04-25 03:28:51.388925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.169 [2024-04-25 03:28:51.389105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.169 [2024-04-25 03:28:51.389130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.169 qpair failed and we were unable to recover it.
00:28:17.169 [2024-04-25 03:28:51.389330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.169 [2024-04-25 03:28:51.389554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.170 [2024-04-25 03:28:51.389579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.170 qpair failed and we were unable to recover it.
00:28:17.170 [2024-04-25 03:28:51.389812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.170 [2024-04-25 03:28:51.389995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.170 [2024-04-25 03:28:51.390020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.170 qpair failed and we were unable to recover it.
00:28:17.170 [2024-04-25 03:28:51.390309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.170 [2024-04-25 03:28:51.390576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.170 [2024-04-25 03:28:51.390605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.170 qpair failed and we were unable to recover it.
00:28:17.170 [2024-04-25 03:28:51.390815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.170 [2024-04-25 03:28:51.391049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.170 [2024-04-25 03:28:51.391074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.170 qpair failed and we were unable to recover it.
00:28:17.170 [2024-04-25 03:28:51.391295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.170 [2024-04-25 03:28:51.391505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.170 [2024-04-25 03:28:51.391534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.170 qpair failed and we were unable to recover it.
00:28:17.170 [2024-04-25 03:28:51.391760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.170 [2024-04-25 03:28:51.391971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.170 [2024-04-25 03:28:51.391998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.170 qpair failed and we were unable to recover it.
00:28:17.170 [2024-04-25 03:28:51.392224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.170 [2024-04-25 03:28:51.392462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.170 [2024-04-25 03:28:51.392490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.170 qpair failed and we were unable to recover it.
00:28:17.170 [2024-04-25 03:28:51.392736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.170 [2024-04-25 03:28:51.392962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.170 [2024-04-25 03:28:51.392989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.170 qpair failed and we were unable to recover it.
00:28:17.170 [2024-04-25 03:28:51.393402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.170 [2024-04-25 03:28:51.393668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.170 [2024-04-25 03:28:51.393694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.170 qpair failed and we were unable to recover it.
00:28:17.170 [2024-04-25 03:28:51.393918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.170 [2024-04-25 03:28:51.394135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.170 [2024-04-25 03:28:51.394164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.170 qpair failed and we were unable to recover it.
00:28:17.170 [2024-04-25 03:28:51.394388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.170 [2024-04-25 03:28:51.394595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.170 [2024-04-25 03:28:51.394619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.170 qpair failed and we were unable to recover it.
00:28:17.170 [2024-04-25 03:28:51.394925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.170 [2024-04-25 03:28:51.395219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.170 [2024-04-25 03:28:51.395247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.170 qpair failed and we were unable to recover it.
00:28:17.170 [2024-04-25 03:28:51.395442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.170 [2024-04-25 03:28:51.395689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.170 [2024-04-25 03:28:51.395717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.170 qpair failed and we were unable to recover it.
00:28:17.170 [2024-04-25 03:28:51.395966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.170 [2024-04-25 03:28:51.396183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.170 [2024-04-25 03:28:51.396228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.170 qpair failed and we were unable to recover it.
00:28:17.170 [2024-04-25 03:28:51.396475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.170 [2024-04-25 03:28:51.396719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.170 [2024-04-25 03:28:51.396747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.170 qpair failed and we were unable to recover it.
00:28:17.170 [2024-04-25 03:28:51.396939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.170 [2024-04-25 03:28:51.397153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.170 [2024-04-25 03:28:51.397181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.170 qpair failed and we were unable to recover it.
00:28:17.170 [2024-04-25 03:28:51.397400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.170 [2024-04-25 03:28:51.397625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.170 [2024-04-25 03:28:51.397680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.170 qpair failed and we were unable to recover it.
00:28:17.170 [2024-04-25 03:28:51.397909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.170 [2024-04-25 03:28:51.398151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.170 [2024-04-25 03:28:51.398179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.170 qpair failed and we were unable to recover it.
00:28:17.170 [2024-04-25 03:28:51.398369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.170 [2024-04-25 03:28:51.398610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.170 [2024-04-25 03:28:51.398645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.170 qpair failed and we were unable to recover it.
00:28:17.170 [2024-04-25 03:28:51.398846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.170 [2024-04-25 03:28:51.399036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.170 [2024-04-25 03:28:51.399062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.170 qpair failed and we were unable to recover it.
00:28:17.170 [2024-04-25 03:28:51.399284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.170 [2024-04-25 03:28:51.399470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.170 [2024-04-25 03:28:51.399497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.170 qpair failed and we were unable to recover it.
00:28:17.170 [2024-04-25 03:28:51.399758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.170 [2024-04-25 03:28:51.399954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.170 [2024-04-25 03:28:51.399979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.170 qpair failed and we were unable to recover it.
00:28:17.170 [2024-04-25 03:28:51.400205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.170 [2024-04-25 03:28:51.400446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.170 [2024-04-25 03:28:51.400474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.170 qpair failed and we were unable to recover it.
00:28:17.170 [2024-04-25 03:28:51.400691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.170 [2024-04-25 03:28:51.400907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.170 [2024-04-25 03:28:51.400934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.170 qpair failed and we were unable to recover it. 00:28:17.170 [2024-04-25 03:28:51.401179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.170 [2024-04-25 03:28:51.401454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.170 [2024-04-25 03:28:51.401498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.170 qpair failed and we were unable to recover it. 00:28:17.170 [2024-04-25 03:28:51.401886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.170 [2024-04-25 03:28:51.402136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.170 [2024-04-25 03:28:51.402175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.170 qpair failed and we were unable to recover it. 00:28:17.170 [2024-04-25 03:28:51.402387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.170 [2024-04-25 03:28:51.402598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.170 [2024-04-25 03:28:51.402634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.170 qpair failed and we were unable to recover it. 
00:28:17.170 [2024-04-25 03:28:51.402862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.170 [2024-04-25 03:28:51.403080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.170 [2024-04-25 03:28:51.403108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.170 qpair failed and we were unable to recover it. 00:28:17.170 [2024-04-25 03:28:51.403300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.170 [2024-04-25 03:28:51.403516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.170 [2024-04-25 03:28:51.403543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.170 qpair failed and we were unable to recover it. 00:28:17.170 [2024-04-25 03:28:51.403730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.171 [2024-04-25 03:28:51.403915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.171 [2024-04-25 03:28:51.403945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.171 qpair failed and we were unable to recover it. 00:28:17.171 [2024-04-25 03:28:51.404198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.171 [2024-04-25 03:28:51.404422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.171 [2024-04-25 03:28:51.404449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.171 qpair failed and we were unable to recover it. 
00:28:17.171 [2024-04-25 03:28:51.404673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.171 [2024-04-25 03:28:51.404926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.171 [2024-04-25 03:28:51.404953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.171 qpair failed and we were unable to recover it. 00:28:17.171 [2024-04-25 03:28:51.405169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.171 [2024-04-25 03:28:51.405532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.171 [2024-04-25 03:28:51.405584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.171 qpair failed and we were unable to recover it. 00:28:17.171 [2024-04-25 03:28:51.405816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.171 [2024-04-25 03:28:51.406028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.171 [2024-04-25 03:28:51.406055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.171 qpair failed and we were unable to recover it. 00:28:17.171 [2024-04-25 03:28:51.406272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.171 [2024-04-25 03:28:51.406696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.171 [2024-04-25 03:28:51.406725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.171 qpair failed and we were unable to recover it. 
00:28:17.171 [2024-04-25 03:28:51.406928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.171 [2024-04-25 03:28:51.407173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.171 [2024-04-25 03:28:51.407197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.171 qpair failed and we were unable to recover it. 00:28:17.171 [2024-04-25 03:28:51.407439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.171 [2024-04-25 03:28:51.407660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.171 [2024-04-25 03:28:51.407689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.171 qpair failed and we were unable to recover it. 00:28:17.171 [2024-04-25 03:28:51.407909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.171 [2024-04-25 03:28:51.408125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.171 [2024-04-25 03:28:51.408152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.171 qpair failed and we were unable to recover it. 00:28:17.171 [2024-04-25 03:28:51.408343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.171 [2024-04-25 03:28:51.408555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.171 [2024-04-25 03:28:51.408584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.171 qpair failed and we were unable to recover it. 
00:28:17.171 [2024-04-25 03:28:51.408807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.171 [2024-04-25 03:28:51.409059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.171 [2024-04-25 03:28:51.409086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.171 qpair failed and we were unable to recover it. 00:28:17.171 [2024-04-25 03:28:51.409336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.171 [2024-04-25 03:28:51.409553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.171 [2024-04-25 03:28:51.409580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.171 qpair failed and we were unable to recover it. 00:28:17.171 [2024-04-25 03:28:51.409842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.171 [2024-04-25 03:28:51.410064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.171 [2024-04-25 03:28:51.410091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.171 qpair failed and we were unable to recover it. 00:28:17.171 [2024-04-25 03:28:51.410308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.171 [2024-04-25 03:28:51.410526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.171 [2024-04-25 03:28:51.410554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.171 qpair failed and we were unable to recover it. 
00:28:17.171 [2024-04-25 03:28:51.410785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.171 [2024-04-25 03:28:51.410956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.171 [2024-04-25 03:28:51.410981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.171 qpair failed and we were unable to recover it. 00:28:17.171 [2024-04-25 03:28:51.411225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.171 [2024-04-25 03:28:51.411441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.171 [2024-04-25 03:28:51.411466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.171 qpair failed and we were unable to recover it. 00:28:17.171 [2024-04-25 03:28:51.411696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.171 [2024-04-25 03:28:51.411914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.171 [2024-04-25 03:28:51.411941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.171 qpair failed and we were unable to recover it. 00:28:17.171 [2024-04-25 03:28:51.412131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.171 [2024-04-25 03:28:51.412348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.171 [2024-04-25 03:28:51.412375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.171 qpair failed and we were unable to recover it. 
00:28:17.171 [2024-04-25 03:28:51.412646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.171 [2024-04-25 03:28:51.412875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.171 [2024-04-25 03:28:51.412903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.171 qpair failed and we were unable to recover it. 00:28:17.171 [2024-04-25 03:28:51.413089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.171 [2024-04-25 03:28:51.413329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.171 [2024-04-25 03:28:51.413357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.171 qpair failed and we were unable to recover it. 00:28:17.171 [2024-04-25 03:28:51.413548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.171 [2024-04-25 03:28:51.413799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.171 [2024-04-25 03:28:51.413827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.171 qpair failed and we were unable to recover it. 00:28:17.171 [2024-04-25 03:28:51.414072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.171 [2024-04-25 03:28:51.414259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.171 [2024-04-25 03:28:51.414288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.171 qpair failed and we were unable to recover it. 
00:28:17.171 [2024-04-25 03:28:51.414473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.171 [2024-04-25 03:28:51.414647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.171 [2024-04-25 03:28:51.414681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.171 qpair failed and we were unable to recover it. 00:28:17.171 [2024-04-25 03:28:51.414858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.171 [2024-04-25 03:28:51.415049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.171 [2024-04-25 03:28:51.415088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.171 qpair failed and we were unable to recover it. 00:28:17.171 [2024-04-25 03:28:51.415345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.171 [2024-04-25 03:28:51.415539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.171 [2024-04-25 03:28:51.415567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.171 qpair failed and we were unable to recover it. 00:28:17.171 [2024-04-25 03:28:51.415797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.171 [2024-04-25 03:28:51.416018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.171 [2024-04-25 03:28:51.416045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.171 qpair failed and we were unable to recover it. 
00:28:17.171 [2024-04-25 03:28:51.416270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.171 [2024-04-25 03:28:51.416460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.171 [2024-04-25 03:28:51.416485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.171 qpair failed and we were unable to recover it. 00:28:17.171 [2024-04-25 03:28:51.416875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.171 [2024-04-25 03:28:51.417102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.171 [2024-04-25 03:28:51.417130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.171 qpair failed and we were unable to recover it. 00:28:17.171 [2024-04-25 03:28:51.417338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.171 [2024-04-25 03:28:51.417554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.171 [2024-04-25 03:28:51.417583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.171 qpair failed and we were unable to recover it. 00:28:17.172 [2024-04-25 03:28:51.417802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.172 [2024-04-25 03:28:51.417999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.172 [2024-04-25 03:28:51.418023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.172 qpair failed and we were unable to recover it. 
00:28:17.172 [2024-04-25 03:28:51.418202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.172 [2024-04-25 03:28:51.418416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.172 [2024-04-25 03:28:51.418445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.172 qpair failed and we were unable to recover it. 00:28:17.172 [2024-04-25 03:28:51.418668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.172 [2024-04-25 03:28:51.418858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.172 [2024-04-25 03:28:51.418888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.172 qpair failed and we were unable to recover it. 00:28:17.172 [2024-04-25 03:28:51.419133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.172 [2024-04-25 03:28:51.419348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.172 [2024-04-25 03:28:51.419375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.172 qpair failed and we were unable to recover it. 00:28:17.172 [2024-04-25 03:28:51.419600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.172 [2024-04-25 03:28:51.419852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.172 [2024-04-25 03:28:51.419880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.172 qpair failed and we were unable to recover it. 
00:28:17.172 [2024-04-25 03:28:51.420128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.172 [2024-04-25 03:28:51.420385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.172 [2024-04-25 03:28:51.420412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.172 qpair failed and we were unable to recover it. 00:28:17.172 [2024-04-25 03:28:51.420600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.172 [2024-04-25 03:28:51.420822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.172 [2024-04-25 03:28:51.420849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.172 qpair failed and we were unable to recover it. 00:28:17.172 [2024-04-25 03:28:51.421098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.172 [2024-04-25 03:28:51.421327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.172 [2024-04-25 03:28:51.421354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.172 qpair failed and we were unable to recover it. 00:28:17.172 [2024-04-25 03:28:51.421602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.172 [2024-04-25 03:28:51.421819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.172 [2024-04-25 03:28:51.421847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.172 qpair failed and we were unable to recover it. 
00:28:17.172 [2024-04-25 03:28:51.422089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.172 [2024-04-25 03:28:51.422288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.172 [2024-04-25 03:28:51.422313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.172 qpair failed and we were unable to recover it. 00:28:17.172 [2024-04-25 03:28:51.422505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.172 [2024-04-25 03:28:51.422755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.172 [2024-04-25 03:28:51.422783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.172 qpair failed and we were unable to recover it. 00:28:17.172 [2024-04-25 03:28:51.423026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.172 [2024-04-25 03:28:51.423268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.172 [2024-04-25 03:28:51.423296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.172 qpair failed and we were unable to recover it. 00:28:17.172 [2024-04-25 03:28:51.423510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.172 [2024-04-25 03:28:51.423727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.172 [2024-04-25 03:28:51.423754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.172 qpair failed and we were unable to recover it. 
00:28:17.172 [2024-04-25 03:28:51.423969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.172 [2024-04-25 03:28:51.424218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.172 [2024-04-25 03:28:51.424245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.172 qpair failed and we were unable to recover it. 00:28:17.172 [2024-04-25 03:28:51.424438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.172 [2024-04-25 03:28:51.424661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.172 [2024-04-25 03:28:51.424689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.172 qpair failed and we were unable to recover it. 00:28:17.172 [2024-04-25 03:28:51.424938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.172 [2024-04-25 03:28:51.425149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.172 [2024-04-25 03:28:51.425177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.172 qpair failed and we were unable to recover it. 00:28:17.172 [2024-04-25 03:28:51.425396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.172 [2024-04-25 03:28:51.425580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.172 [2024-04-25 03:28:51.425609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.172 qpair failed and we were unable to recover it. 
00:28:17.172 [2024-04-25 03:28:51.425809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.172 [2024-04-25 03:28:51.426061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.172 [2024-04-25 03:28:51.426090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.172 qpair failed and we were unable to recover it. 00:28:17.172 [2024-04-25 03:28:51.426299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.172 [2024-04-25 03:28:51.426514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.172 [2024-04-25 03:28:51.426547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.172 qpair failed and we were unable to recover it. 00:28:17.172 [2024-04-25 03:28:51.426779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.172 [2024-04-25 03:28:51.426998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.172 [2024-04-25 03:28:51.427025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.172 qpair failed and we were unable to recover it. 00:28:17.172 [2024-04-25 03:28:51.427242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.172 [2024-04-25 03:28:51.427456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.172 [2024-04-25 03:28:51.427483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.172 qpair failed and we were unable to recover it. 
00:28:17.172 [2024-04-25 03:28:51.427716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.172 [2024-04-25 03:28:51.427910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.172 [2024-04-25 03:28:51.427935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.172 qpair failed and we were unable to recover it. 00:28:17.172 [2024-04-25 03:28:51.428179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.172 [2024-04-25 03:28:51.428355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.172 [2024-04-25 03:28:51.428383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.172 qpair failed and we were unable to recover it. 00:28:17.172 [2024-04-25 03:28:51.428638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.172 [2024-04-25 03:28:51.428827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.173 [2024-04-25 03:28:51.428855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.173 qpair failed and we were unable to recover it. 00:28:17.173 [2024-04-25 03:28:51.429040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.173 [2024-04-25 03:28:51.429280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.173 [2024-04-25 03:28:51.429320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.173 qpair failed and we were unable to recover it. 
00:28:17.173 [2024-04-25 03:28:51.429538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.173 [2024-04-25 03:28:51.429724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.173 [2024-04-25 03:28:51.429752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.173 qpair failed and we were unable to recover it. 00:28:17.173 [2024-04-25 03:28:51.429969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.173 [2024-04-25 03:28:51.430181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.173 [2024-04-25 03:28:51.430205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.173 qpair failed and we were unable to recover it. 00:28:17.173 [2024-04-25 03:28:51.430440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.173 [2024-04-25 03:28:51.430639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.173 [2024-04-25 03:28:51.430669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.173 qpair failed and we were unable to recover it. 00:28:17.173 [2024-04-25 03:28:51.430917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.173 [2024-04-25 03:28:51.431126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.173 [2024-04-25 03:28:51.431160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.173 qpair failed and we were unable to recover it. 
00:28:17.173 [2024-04-25 03:28:51.431377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.173 [2024-04-25 03:28:51.431580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.173 [2024-04-25 03:28:51.431606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.173 qpair failed and we were unable to recover it. 00:28:17.173 [2024-04-25 03:28:51.431882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.173 [2024-04-25 03:28:51.432222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.173 [2024-04-25 03:28:51.432250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.173 qpair failed and we were unable to recover it. 00:28:17.173 [2024-04-25 03:28:51.432659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.173 [2024-04-25 03:28:51.432901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.173 [2024-04-25 03:28:51.432928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.173 qpair failed and we were unable to recover it. 00:28:17.173 [2024-04-25 03:28:51.433171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.173 [2024-04-25 03:28:51.433384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.173 [2024-04-25 03:28:51.433412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.173 qpair failed and we were unable to recover it. 
00:28:17.173 [2024-04-25 03:28:51.433658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.173 [2024-04-25 03:28:51.433909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.173 [2024-04-25 03:28:51.433936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.173 qpair failed and we were unable to recover it. 00:28:17.173 [2024-04-25 03:28:51.434164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.173 [2024-04-25 03:28:51.434384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.173 [2024-04-25 03:28:51.434407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.173 qpair failed and we were unable to recover it. 00:28:17.173 [2024-04-25 03:28:51.434640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.173 [2024-04-25 03:28:51.434878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.173 [2024-04-25 03:28:51.434905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.173 qpair failed and we were unable to recover it. 00:28:17.173 [2024-04-25 03:28:51.435089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.173 [2024-04-25 03:28:51.435307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.173 [2024-04-25 03:28:51.435334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.173 qpair failed and we were unable to recover it. 
00:28:17.173 [2024-04-25 03:28:51.435552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.173 [2024-04-25 03:28:51.435798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.173 [2024-04-25 03:28:51.435826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.173 qpair failed and we were unable to recover it. 00:28:17.173 [2024-04-25 03:28:51.436071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.173 [2024-04-25 03:28:51.436281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.173 [2024-04-25 03:28:51.436313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.173 qpair failed and we were unable to recover it. 00:28:17.173 [2024-04-25 03:28:51.436536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.173 [2024-04-25 03:28:51.436735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.173 [2024-04-25 03:28:51.436765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.173 qpair failed and we were unable to recover it. 00:28:17.173 [2024-04-25 03:28:51.437004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.173 [2024-04-25 03:28:51.437193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.173 [2024-04-25 03:28:51.437220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.173 qpair failed and we were unable to recover it. 
00:28:17.173 [2024-04-25 03:28:51.437478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.173 [2024-04-25 03:28:51.437724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.173 [2024-04-25 03:28:51.437753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.173 qpair failed and we were unable to recover it. 00:28:17.173 [2024-04-25 03:28:51.437932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.173 [2024-04-25 03:28:51.438173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.173 [2024-04-25 03:28:51.438200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.173 qpair failed and we were unable to recover it. 00:28:17.173 [2024-04-25 03:28:51.438453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.173 [2024-04-25 03:28:51.438674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.173 [2024-04-25 03:28:51.438702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.173 qpair failed and we were unable to recover it. 00:28:17.173 [2024-04-25 03:28:51.438899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.173 [2024-04-25 03:28:51.439112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.173 [2024-04-25 03:28:51.439139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.173 qpair failed and we were unable to recover it. 
00:28:17.173 [2024-04-25 03:28:51.439384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.173 [2024-04-25 03:28:51.439602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.173 [2024-04-25 03:28:51.439635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.173 qpair failed and we were unable to recover it. 00:28:17.173 [2024-04-25 03:28:51.439878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.173 [2024-04-25 03:28:51.440066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.173 [2024-04-25 03:28:51.440094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.173 qpair failed and we were unable to recover it. 00:28:17.173 [2024-04-25 03:28:51.440304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.173 [2024-04-25 03:28:51.440553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.173 [2024-04-25 03:28:51.440580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.173 qpair failed and we were unable to recover it. 00:28:17.173 [2024-04-25 03:28:51.440781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.173 [2024-04-25 03:28:51.441075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.173 [2024-04-25 03:28:51.441127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.173 qpair failed and we were unable to recover it. 
00:28:17.173 [2024-04-25 03:28:51.441353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.173 [2024-04-25 03:28:51.441579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.173 [2024-04-25 03:28:51.441606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.173 qpair failed and we were unable to recover it. 00:28:17.173 [2024-04-25 03:28:51.441808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.173 [2024-04-25 03:28:51.442015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.173 [2024-04-25 03:28:51.442042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.173 qpair failed and we were unable to recover it. 00:28:17.173 [2024-04-25 03:28:51.442359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.173 [2024-04-25 03:28:51.442666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.173 [2024-04-25 03:28:51.442692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.173 qpair failed and we were unable to recover it. 00:28:17.173 [2024-04-25 03:28:51.442918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.174 [2024-04-25 03:28:51.443103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.174 [2024-04-25 03:28:51.443130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.174 qpair failed and we were unable to recover it. 
00:28:17.174 [2024-04-25 03:28:51.443350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.174 [2024-04-25 03:28:51.443571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.174 [2024-04-25 03:28:51.443598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.174 qpair failed and we were unable to recover it. 00:28:17.174 [2024-04-25 03:28:51.443793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.174 [2024-04-25 03:28:51.444014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.174 [2024-04-25 03:28:51.444041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.174 qpair failed and we were unable to recover it. 00:28:17.174 [2024-04-25 03:28:51.444265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.174 [2024-04-25 03:28:51.444511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.174 [2024-04-25 03:28:51.444538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.174 qpair failed and we were unable to recover it. 00:28:17.174 [2024-04-25 03:28:51.444731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.174 [2024-04-25 03:28:51.444947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.174 [2024-04-25 03:28:51.444976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.174 qpair failed and we were unable to recover it. 
00:28:17.174 [2024-04-25 03:28:51.445188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.174 [2024-04-25 03:28:51.445380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.174 [2024-04-25 03:28:51.445407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.174 qpair failed and we were unable to recover it. 00:28:17.174 [2024-04-25 03:28:51.445634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.174 [2024-04-25 03:28:51.445882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.174 [2024-04-25 03:28:51.445909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.174 qpair failed and we were unable to recover it. 00:28:17.174 [2024-04-25 03:28:51.446159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.174 [2024-04-25 03:28:51.446340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.174 [2024-04-25 03:28:51.446367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.174 qpair failed and we were unable to recover it. 00:28:17.174 [2024-04-25 03:28:51.446581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.174 [2024-04-25 03:28:51.446796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.174 [2024-04-25 03:28:51.446825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.174 qpair failed and we were unable to recover it. 
00:28:17.174 [2024-04-25 03:28:51.447045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.174 [2024-04-25 03:28:51.447261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.174 [2024-04-25 03:28:51.447288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.174 qpair failed and we were unable to recover it. 00:28:17.174 [2024-04-25 03:28:51.447605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.174 [2024-04-25 03:28:51.447848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.174 [2024-04-25 03:28:51.447876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.174 qpair failed and we were unable to recover it. 00:28:17.174 [2024-04-25 03:28:51.448098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.174 [2024-04-25 03:28:51.448308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.174 [2024-04-25 03:28:51.448335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.174 qpair failed and we were unable to recover it. 00:28:17.174 [2024-04-25 03:28:51.448527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.174 [2024-04-25 03:28:51.448749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.174 [2024-04-25 03:28:51.448778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.174 qpair failed and we were unable to recover it. 
00:28:17.174 [2024-04-25 03:28:51.448973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.174 [2024-04-25 03:28:51.449191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.174 [2024-04-25 03:28:51.449220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.174 qpair failed and we were unable to recover it. 00:28:17.174 [2024-04-25 03:28:51.449438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.174 [2024-04-25 03:28:51.449661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.174 [2024-04-25 03:28:51.449690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.174 qpair failed and we were unable to recover it. 00:28:17.174 [2024-04-25 03:28:51.449916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.174 [2024-04-25 03:28:51.450097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.174 [2024-04-25 03:28:51.450126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.174 qpair failed and we were unable to recover it. 00:28:17.174 [2024-04-25 03:28:51.450346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.174 [2024-04-25 03:28:51.450558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.174 [2024-04-25 03:28:51.450585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.174 qpair failed and we were unable to recover it. 
00:28:17.174 [2024-04-25 03:28:51.450819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.174 [2024-04-25 03:28:51.451041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.174 [2024-04-25 03:28:51.451068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.174 qpair failed and we were unable to recover it. 00:28:17.174 [2024-04-25 03:28:51.451310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.174 [2024-04-25 03:28:51.451489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.174 [2024-04-25 03:28:51.451516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.174 qpair failed and we were unable to recover it. 00:28:17.174 [2024-04-25 03:28:51.451741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.174 [2024-04-25 03:28:51.451965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.174 [2024-04-25 03:28:51.451989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.174 qpair failed and we were unable to recover it. 00:28:17.174 [2024-04-25 03:28:51.452215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.174 [2024-04-25 03:28:51.452615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.174 [2024-04-25 03:28:51.452668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.174 qpair failed and we were unable to recover it. 
00:28:17.174 [2024-04-25 03:28:51.452921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.174 [2024-04-25 03:28:51.453133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.174 [2024-04-25 03:28:51.453162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.174 qpair failed and we were unable to recover it. 00:28:17.174 [2024-04-25 03:28:51.453408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.174 [2024-04-25 03:28:51.453650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.174 [2024-04-25 03:28:51.453676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.174 qpair failed and we were unable to recover it. 00:28:17.174 [2024-04-25 03:28:51.453882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.174 [2024-04-25 03:28:51.454121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.174 [2024-04-25 03:28:51.454146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.174 qpair failed and we were unable to recover it. 00:28:17.174 [2024-04-25 03:28:51.454368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.174 [2024-04-25 03:28:51.454593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.174 [2024-04-25 03:28:51.454620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.174 qpair failed and we were unable to recover it. 
00:28:17.174 [2024-04-25 03:28:51.454871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.174 [2024-04-25 03:28:51.455114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.174 [2024-04-25 03:28:51.455169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.174 qpair failed and we were unable to recover it. 00:28:17.174 [2024-04-25 03:28:51.455555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.174 [2024-04-25 03:28:51.455798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.174 [2024-04-25 03:28:51.455827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.174 qpair failed and we were unable to recover it. 00:28:17.174 [2024-04-25 03:28:51.456138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.174 [2024-04-25 03:28:51.456325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.174 [2024-04-25 03:28:51.456350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.174 qpair failed and we were unable to recover it. 00:28:17.174 [2024-04-25 03:28:51.456561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.174 [2024-04-25 03:28:51.456809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.174 [2024-04-25 03:28:51.456837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.174 qpair failed and we were unable to recover it. 
00:28:17.175 [2024-04-25 03:28:51.457037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.175 [2024-04-25 03:28:51.457205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.175 [2024-04-25 03:28:51.457229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.175 qpair failed and we were unable to recover it. 00:28:17.175 [2024-04-25 03:28:51.457480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.175 [2024-04-25 03:28:51.457728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.175 [2024-04-25 03:28:51.457756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.175 qpair failed and we were unable to recover it. 00:28:17.175 [2024-04-25 03:28:51.457977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.175 [2024-04-25 03:28:51.458159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.175 [2024-04-25 03:28:51.458186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.175 qpair failed and we were unable to recover it. 00:28:17.175 [2024-04-25 03:28:51.458428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.175 [2024-04-25 03:28:51.458675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.175 [2024-04-25 03:28:51.458704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.175 qpair failed and we were unable to recover it. 
00:28:17.175 [2024-04-25 03:28:51.458935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.175 [2024-04-25 03:28:51.459151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.175 [2024-04-25 03:28:51.459179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.175 qpair failed and we were unable to recover it. 00:28:17.175 [2024-04-25 03:28:51.459360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.175 [2024-04-25 03:28:51.459532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.175 [2024-04-25 03:28:51.459572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.175 qpair failed and we were unable to recover it. 00:28:17.175 [2024-04-25 03:28:51.459799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.175 [2024-04-25 03:28:51.460019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.175 [2024-04-25 03:28:51.460047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.175 qpair failed and we were unable to recover it. 00:28:17.175 [2024-04-25 03:28:51.460281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.175 [2024-04-25 03:28:51.460482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.175 [2024-04-25 03:28:51.460509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.175 qpair failed and we were unable to recover it. 
00:28:17.175 [2024-04-25 03:28:51.460720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.175 [2024-04-25 03:28:51.460920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.175 [2024-04-25 03:28:51.460945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.175 qpair failed and we were unable to recover it. 00:28:17.175 [2024-04-25 03:28:51.461141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.175 [2024-04-25 03:28:51.461383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.175 [2024-04-25 03:28:51.461410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.175 qpair failed and we were unable to recover it. 00:28:17.175 [2024-04-25 03:28:51.461607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.175 [2024-04-25 03:28:51.461824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.175 [2024-04-25 03:28:51.461850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.175 qpair failed and we were unable to recover it. 00:28:17.175 [2024-04-25 03:28:51.462068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.175 [2024-04-25 03:28:51.462252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.175 [2024-04-25 03:28:51.462280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.175 qpair failed and we were unable to recover it. 
00:28:17.175 [2024-04-25 03:28:51.462525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.175 [2024-04-25 03:28:51.462733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.175 [2024-04-25 03:28:51.462758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.175 qpair failed and we were unable to recover it. 00:28:17.175 [2024-04-25 03:28:51.462978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.175 [2024-04-25 03:28:51.463198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.175 [2024-04-25 03:28:51.463226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.175 qpair failed and we were unable to recover it. 00:28:17.175 [2024-04-25 03:28:51.463454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.175 [2024-04-25 03:28:51.463648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.175 [2024-04-25 03:28:51.463674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.175 qpair failed and we were unable to recover it. 00:28:17.175 [2024-04-25 03:28:51.463849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.175 [2024-04-25 03:28:51.464014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.175 [2024-04-25 03:28:51.464038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.175 qpair failed and we were unable to recover it. 
00:28:17.175 [2024-04-25 03:28:51.464203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.175 [2024-04-25 03:28:51.464362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.175 [2024-04-25 03:28:51.464386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.175 qpair failed and we were unable to recover it. 00:28:17.175 [2024-04-25 03:28:51.464604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.175 [2024-04-25 03:28:51.464841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.175 [2024-04-25 03:28:51.464870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.175 qpair failed and we were unable to recover it. 00:28:17.175 [2024-04-25 03:28:51.465097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.175 [2024-04-25 03:28:51.465345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.175 [2024-04-25 03:28:51.465369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.175 qpair failed and we were unable to recover it. 00:28:17.175 [2024-04-25 03:28:51.465596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.175 [2024-04-25 03:28:51.465805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.175 [2024-04-25 03:28:51.465833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.175 qpair failed and we were unable to recover it. 
00:28:17.175 [2024-04-25 03:28:51.466057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.175 [2024-04-25 03:28:51.466277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.175 [2024-04-25 03:28:51.466305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.175 qpair failed and we were unable to recover it. 00:28:17.175 [2024-04-25 03:28:51.466520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.175 [2024-04-25 03:28:51.466705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.175 [2024-04-25 03:28:51.466732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.175 qpair failed and we were unable to recover it. 00:28:17.175 [2024-04-25 03:28:51.466935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.175 [2024-04-25 03:28:51.467202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.175 [2024-04-25 03:28:51.467254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.175 qpair failed and we were unable to recover it. 00:28:17.175 [2024-04-25 03:28:51.467476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.175 [2024-04-25 03:28:51.467666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.175 [2024-04-25 03:28:51.467695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.175 qpair failed and we were unable to recover it. 
00:28:17.175 [2024-04-25 03:28:51.467942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.175 [2024-04-25 03:28:51.468138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.175 [2024-04-25 03:28:51.468162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.175 qpair failed and we were unable to recover it. 00:28:17.175 [2024-04-25 03:28:51.468356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.175 [2024-04-25 03:28:51.468569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.175 [2024-04-25 03:28:51.468596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.175 qpair failed and we were unable to recover it. 00:28:17.175 [2024-04-25 03:28:51.468824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.175 [2024-04-25 03:28:51.469050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.175 [2024-04-25 03:28:51.469077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.175 qpair failed and we were unable to recover it. 00:28:17.175 [2024-04-25 03:28:51.469315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.175 [2024-04-25 03:28:51.469503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.175 [2024-04-25 03:28:51.469528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.175 qpair failed and we were unable to recover it. 
00:28:17.175 [2024-04-25 03:28:51.469696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.175 [2024-04-25 03:28:51.469899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.175 [2024-04-25 03:28:51.469924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.175 qpair failed and we were unable to recover it.
00:28:17.176 [2024-04-25 03:28:51.470119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.176 [2024-04-25 03:28:51.470386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.176 [2024-04-25 03:28:51.470410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.176 qpair failed and we were unable to recover it.
00:28:17.176 [2024-04-25 03:28:51.470609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.176 [2024-04-25 03:28:51.470845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.176 [2024-04-25 03:28:51.470873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.176 qpair failed and we were unable to recover it.
00:28:17.176 [2024-04-25 03:28:51.471097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.176 [2024-04-25 03:28:51.471337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.176 [2024-04-25 03:28:51.471364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.176 qpair failed and we were unable to recover it.
00:28:17.176 [2024-04-25 03:28:51.471609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.176 [2024-04-25 03:28:51.471859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.176 [2024-04-25 03:28:51.471884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.176 qpair failed and we were unable to recover it.
00:28:17.176 [2024-04-25 03:28:51.472122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.176 [2024-04-25 03:28:51.472500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.176 [2024-04-25 03:28:51.472549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.176 qpair failed and we were unable to recover it.
00:28:17.176 [2024-04-25 03:28:51.472792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.176 [2024-04-25 03:28:51.473008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.176 [2024-04-25 03:28:51.473038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.176 qpair failed and we were unable to recover it.
00:28:17.176 [2024-04-25 03:28:51.473227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.176 [2024-04-25 03:28:51.473453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.176 [2024-04-25 03:28:51.473479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.176 qpair failed and we were unable to recover it.
00:28:17.176 [2024-04-25 03:28:51.473685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.176 [2024-04-25 03:28:51.473907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.176 [2024-04-25 03:28:51.473935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.176 qpair failed and we were unable to recover it.
00:28:17.176 [2024-04-25 03:28:51.474186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.176 [2024-04-25 03:28:51.474379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.176 [2024-04-25 03:28:51.474409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.176 qpair failed and we were unable to recover it.
00:28:17.176 [2024-04-25 03:28:51.474641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.176 [2024-04-25 03:28:51.474878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.176 [2024-04-25 03:28:51.474903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.176 qpair failed and we were unable to recover it.
00:28:17.176 [2024-04-25 03:28:51.475074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.176 [2024-04-25 03:28:51.475281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.176 [2024-04-25 03:28:51.475306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.176 qpair failed and we were unable to recover it.
00:28:17.176 [2024-04-25 03:28:51.475527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.176 [2024-04-25 03:28:51.475734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.176 [2024-04-25 03:28:51.475764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.176 qpair failed and we were unable to recover it.
00:28:17.176 [2024-04-25 03:28:51.475981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.176 [2024-04-25 03:28:51.476202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.176 [2024-04-25 03:28:51.476227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.176 qpair failed and we were unable to recover it.
00:28:17.176 [2024-04-25 03:28:51.476458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.176 [2024-04-25 03:28:51.476683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.176 [2024-04-25 03:28:51.476711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.176 qpair failed and we were unable to recover it.
00:28:17.176 [2024-04-25 03:28:51.476897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.176 [2024-04-25 03:28:51.477140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.176 [2024-04-25 03:28:51.477167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.176 qpair failed and we were unable to recover it.
00:28:17.176 [2024-04-25 03:28:51.477385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.176 [2024-04-25 03:28:51.477551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.176 [2024-04-25 03:28:51.477576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.176 qpair failed and we were unable to recover it.
00:28:17.176 [2024-04-25 03:28:51.477824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.176 [2024-04-25 03:28:51.478037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.176 [2024-04-25 03:28:51.478064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.176 qpair failed and we were unable to recover it.
00:28:17.176 [2024-04-25 03:28:51.478431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.176 [2024-04-25 03:28:51.478679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.176 [2024-04-25 03:28:51.478708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.176 qpair failed and we were unable to recover it.
00:28:17.176 [2024-04-25 03:28:51.478930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.176 [2024-04-25 03:28:51.479144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.176 [2024-04-25 03:28:51.479172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.176 qpair failed and we were unable to recover it.
00:28:17.176 [2024-04-25 03:28:51.479380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.176 [2024-04-25 03:28:51.479643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.176 [2024-04-25 03:28:51.479668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.176 qpair failed and we were unable to recover it.
00:28:17.176 [2024-04-25 03:28:51.479839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.176 [2024-04-25 03:28:51.480078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.176 [2024-04-25 03:28:51.480104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.176 qpair failed and we were unable to recover it.
00:28:17.176 [2024-04-25 03:28:51.480304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.176 [2024-04-25 03:28:51.480512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.176 [2024-04-25 03:28:51.480540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.176 qpair failed and we were unable to recover it.
00:28:17.176 [2024-04-25 03:28:51.480764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.176 [2024-04-25 03:28:51.480982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.176 [2024-04-25 03:28:51.481009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.176 qpair failed and we were unable to recover it.
00:28:17.176 [2024-04-25 03:28:51.481257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.176 [2024-04-25 03:28:51.481475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.176 [2024-04-25 03:28:51.481502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.176 qpair failed and we were unable to recover it.
00:28:17.176 [2024-04-25 03:28:51.481721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.176 [2024-04-25 03:28:51.481939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.176 [2024-04-25 03:28:51.481967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.176 qpair failed and we were unable to recover it.
00:28:17.176 [2024-04-25 03:28:51.482216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.176 [2024-04-25 03:28:51.482433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.176 [2024-04-25 03:28:51.482460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.176 qpair failed and we were unable to recover it.
00:28:17.176 [2024-04-25 03:28:51.482680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.176 [2024-04-25 03:28:51.482900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.176 [2024-04-25 03:28:51.482926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.176 qpair failed and we were unable to recover it.
00:28:17.176 [2024-04-25 03:28:51.483123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.176 [2024-04-25 03:28:51.483349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.176 [2024-04-25 03:28:51.483373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.176 qpair failed and we were unable to recover it.
00:28:17.176 [2024-04-25 03:28:51.483570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.176 [2024-04-25 03:28:51.483791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.177 [2024-04-25 03:28:51.483816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.177 qpair failed and we were unable to recover it.
00:28:17.177 [2024-04-25 03:28:51.484015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.177 [2024-04-25 03:28:51.484213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.177 [2024-04-25 03:28:51.484238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.177 qpair failed and we were unable to recover it.
00:28:17.177 [2024-04-25 03:28:51.484436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.177 [2024-04-25 03:28:51.484667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.177 [2024-04-25 03:28:51.484697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.177 qpair failed and we were unable to recover it.
00:28:17.177 [2024-04-25 03:28:51.484912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.177 [2024-04-25 03:28:51.485154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.177 [2024-04-25 03:28:51.485181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.177 qpair failed and we were unable to recover it.
00:28:17.177 [2024-04-25 03:28:51.485401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.177 [2024-04-25 03:28:51.485622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.177 [2024-04-25 03:28:51.485669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.177 qpair failed and we were unable to recover it.
00:28:17.177 [2024-04-25 03:28:51.485916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.177 [2024-04-25 03:28:51.486131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.177 [2024-04-25 03:28:51.486158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.177 qpair failed and we were unable to recover it.
00:28:17.177 [2024-04-25 03:28:51.486382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.177 [2024-04-25 03:28:51.486555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.177 [2024-04-25 03:28:51.486580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.177 qpair failed and we were unable to recover it.
00:28:17.177 [2024-04-25 03:28:51.486782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.177 [2024-04-25 03:28:51.486985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.177 [2024-04-25 03:28:51.487010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.177 qpair failed and we were unable to recover it.
00:28:17.177 [2024-04-25 03:28:51.487373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.177 [2024-04-25 03:28:51.487640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.177 [2024-04-25 03:28:51.487668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.177 qpair failed and we were unable to recover it.
00:28:17.177 [2024-04-25 03:28:51.487914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.177 [2024-04-25 03:28:51.488175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.177 [2024-04-25 03:28:51.488199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.177 qpair failed and we were unable to recover it.
00:28:17.177 [2024-04-25 03:28:51.488396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.177 [2024-04-25 03:28:51.488617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.177 [2024-04-25 03:28:51.488650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.177 qpair failed and we were unable to recover it.
00:28:17.177 [2024-04-25 03:28:51.488881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.177 [2024-04-25 03:28:51.489083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.177 [2024-04-25 03:28:51.489111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.177 qpair failed and we were unable to recover it.
00:28:17.177 [2024-04-25 03:28:51.489325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.177 [2024-04-25 03:28:51.489507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.177 [2024-04-25 03:28:51.489536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.177 qpair failed and we were unable to recover it.
00:28:17.177 [2024-04-25 03:28:51.489757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.177 [2024-04-25 03:28:51.489954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.177 [2024-04-25 03:28:51.489984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.177 qpair failed and we were unable to recover it.
00:28:17.177 [2024-04-25 03:28:51.490236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.177 [2024-04-25 03:28:51.490454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.177 [2024-04-25 03:28:51.490483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.177 qpair failed and we were unable to recover it.
00:28:17.177 [2024-04-25 03:28:51.490670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.177 [2024-04-25 03:28:51.490888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.177 [2024-04-25 03:28:51.490916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.177 qpair failed and we were unable to recover it.
00:28:17.177 [2024-04-25 03:28:51.491156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.177 [2024-04-25 03:28:51.491386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.177 [2024-04-25 03:28:51.491412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.177 qpair failed and we were unable to recover it.
00:28:17.177 [2024-04-25 03:28:51.491586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.177 [2024-04-25 03:28:51.491765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.177 [2024-04-25 03:28:51.491790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.177 qpair failed and we were unable to recover it.
00:28:17.177 [2024-04-25 03:28:51.492034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.177 [2024-04-25 03:28:51.492340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.177 [2024-04-25 03:28:51.492401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.177 qpair failed and we were unable to recover it.
00:28:17.177 [2024-04-25 03:28:51.492609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.177 [2024-04-25 03:28:51.492818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.177 [2024-04-25 03:28:51.492843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.177 qpair failed and we were unable to recover it.
00:28:17.177 [2024-04-25 03:28:51.493038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.177 [2024-04-25 03:28:51.493236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.177 [2024-04-25 03:28:51.493260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.177 qpair failed and we were unable to recover it.
00:28:17.177 [2024-04-25 03:28:51.493497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.177 [2024-04-25 03:28:51.493731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.177 [2024-04-25 03:28:51.493759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.177 qpair failed and we were unable to recover it.
00:28:17.177 [2024-04-25 03:28:51.494004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.177 [2024-04-25 03:28:51.494222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.177 [2024-04-25 03:28:51.494250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.177 qpair failed and we were unable to recover it.
00:28:17.177 [2024-04-25 03:28:51.494500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.177 [2024-04-25 03:28:51.494714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.177 [2024-04-25 03:28:51.494742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.177 qpair failed and we were unable to recover it.
00:28:17.177 [2024-04-25 03:28:51.494963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.177 [2024-04-25 03:28:51.495177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.177 [2024-04-25 03:28:51.495202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.177 qpair failed and we were unable to recover it.
00:28:17.177 [2024-04-25 03:28:51.495417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.178 [2024-04-25 03:28:51.495659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.178 [2024-04-25 03:28:51.495696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.178 qpair failed and we were unable to recover it.
00:28:17.178 [2024-04-25 03:28:51.495910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.178 [2024-04-25 03:28:51.496126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.178 [2024-04-25 03:28:51.496154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.178 qpair failed and we were unable to recover it.
00:28:17.178 [2024-04-25 03:28:51.496344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.178 [2024-04-25 03:28:51.496570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.178 [2024-04-25 03:28:51.496594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.178 qpair failed and we were unable to recover it.
00:28:17.178 [2024-04-25 03:28:51.496825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.178 [2024-04-25 03:28:51.497045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.178 [2024-04-25 03:28:51.497069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.178 qpair failed and we were unable to recover it.
00:28:17.178 [2024-04-25 03:28:51.497297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.178 [2024-04-25 03:28:51.497518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.178 [2024-04-25 03:28:51.497546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.178 qpair failed and we were unable to recover it.
00:28:17.178 [2024-04-25 03:28:51.497761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.178 [2024-04-25 03:28:51.497956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.178 [2024-04-25 03:28:51.497981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.178 qpair failed and we were unable to recover it.
00:28:17.178 [2024-04-25 03:28:51.498146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.178 [2024-04-25 03:28:51.498344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.178 [2024-04-25 03:28:51.498369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.178 qpair failed and we were unable to recover it.
00:28:17.178 [2024-04-25 03:28:51.498565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.178 [2024-04-25 03:28:51.498759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.178 [2024-04-25 03:28:51.498785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.178 qpair failed and we were unable to recover it.
00:28:17.178 [2024-04-25 03:28:51.498996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.178 [2024-04-25 03:28:51.499266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.178 [2024-04-25 03:28:51.499291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.178 qpair failed and we were unable to recover it.
00:28:17.178 [2024-04-25 03:28:51.499492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.178 [2024-04-25 03:28:51.499740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.178 [2024-04-25 03:28:51.499769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.178 qpair failed and we were unable to recover it.
00:28:17.178 [2024-04-25 03:28:51.499978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.178 [2024-04-25 03:28:51.500204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.178 [2024-04-25 03:28:51.500231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.178 qpair failed and we were unable to recover it.
00:28:17.178 [2024-04-25 03:28:51.500451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.178 [2024-04-25 03:28:51.500641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.178 [2024-04-25 03:28:51.500670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.178 qpair failed and we were unable to recover it.
00:28:17.178 [2024-04-25 03:28:51.500891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.178 [2024-04-25 03:28:51.501097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.178 [2024-04-25 03:28:51.501122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.178 qpair failed and we were unable to recover it.
00:28:17.178 [2024-04-25 03:28:51.501293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.178 [2024-04-25 03:28:51.501525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.178 [2024-04-25 03:28:51.501554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.178 qpair failed and we were unable to recover it.
00:28:17.178 [2024-04-25 03:28:51.501781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.178 [2024-04-25 03:28:51.501977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.178 [2024-04-25 03:28:51.502002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.178 qpair failed and we were unable to recover it.
00:28:17.178 [2024-04-25 03:28:51.502229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.178 [2024-04-25 03:28:51.502416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.178 [2024-04-25 03:28:51.502444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.178 qpair failed and we were unable to recover it.
00:28:17.178 [2024-04-25 03:28:51.502633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.178 [2024-04-25 03:28:51.502850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.178 [2024-04-25 03:28:51.502882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.178 qpair failed and we were unable to recover it.
00:28:17.178 [2024-04-25 03:28:51.503122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.178 [2024-04-25 03:28:51.503364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.178 [2024-04-25 03:28:51.503392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.178 qpair failed and we were unable to recover it.
00:28:17.178 [2024-04-25 03:28:51.503613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.178 [2024-04-25 03:28:51.503838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.178 [2024-04-25 03:28:51.503863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.178 qpair failed and we were unable to recover it. 00:28:17.178 [2024-04-25 03:28:51.504061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.178 [2024-04-25 03:28:51.504257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.178 [2024-04-25 03:28:51.504281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.178 qpair failed and we were unable to recover it. 00:28:17.178 [2024-04-25 03:28:51.504481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.178 [2024-04-25 03:28:51.504704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.178 [2024-04-25 03:28:51.504732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.178 qpair failed and we were unable to recover it. 00:28:17.178 [2024-04-25 03:28:51.504911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.178 [2024-04-25 03:28:51.505233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.178 [2024-04-25 03:28:51.505259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.178 qpair failed and we were unable to recover it. 
00:28:17.178 [2024-04-25 03:28:51.505484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.178 [2024-04-25 03:28:51.505678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.178 [2024-04-25 03:28:51.505720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.178 qpair failed and we were unable to recover it. 00:28:17.178 [2024-04-25 03:28:51.505941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.178 [2024-04-25 03:28:51.506130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.178 [2024-04-25 03:28:51.506154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.178 qpair failed and we were unable to recover it. 00:28:17.178 [2024-04-25 03:28:51.506354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.178 [2024-04-25 03:28:51.506567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.178 [2024-04-25 03:28:51.506591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.178 qpair failed and we were unable to recover it. 00:28:17.178 [2024-04-25 03:28:51.506792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.178 [2024-04-25 03:28:51.507006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.178 [2024-04-25 03:28:51.507034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.178 qpair failed and we were unable to recover it. 
00:28:17.178 [2024-04-25 03:28:51.507249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.178 [2024-04-25 03:28:51.507423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.178 [2024-04-25 03:28:51.507451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.178 qpair failed and we were unable to recover it. 00:28:17.178 [2024-04-25 03:28:51.507645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.178 [2024-04-25 03:28:51.507820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.178 [2024-04-25 03:28:51.507847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.178 qpair failed and we were unable to recover it. 00:28:17.178 [2024-04-25 03:28:51.508026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.178 [2024-04-25 03:28:51.508244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.178 [2024-04-25 03:28:51.508272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.178 qpair failed and we were unable to recover it. 00:28:17.178 [2024-04-25 03:28:51.508485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.179 [2024-04-25 03:28:51.508702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.179 [2024-04-25 03:28:51.508731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.179 qpair failed and we were unable to recover it. 
00:28:17.179 [2024-04-25 03:28:51.508955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.179 [2024-04-25 03:28:51.509136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.179 [2024-04-25 03:28:51.509163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.179 qpair failed and we were unable to recover it. 00:28:17.179 [2024-04-25 03:28:51.509352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.179 [2024-04-25 03:28:51.509549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.179 [2024-04-25 03:28:51.509573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.179 qpair failed and we were unable to recover it. 00:28:17.179 [2024-04-25 03:28:51.509773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.179 [2024-04-25 03:28:51.509968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.179 [2024-04-25 03:28:51.509993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.179 qpair failed and we were unable to recover it. 00:28:17.179 [2024-04-25 03:28:51.510241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.179 [2024-04-25 03:28:51.510466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.179 [2024-04-25 03:28:51.510494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.179 qpair failed and we were unable to recover it. 
00:28:17.179 [2024-04-25 03:28:51.510684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.179 [2024-04-25 03:28:51.510873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.179 [2024-04-25 03:28:51.510898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.179 qpair failed and we were unable to recover it. 00:28:17.179 [2024-04-25 03:28:51.511118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.179 [2024-04-25 03:28:51.511287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.179 [2024-04-25 03:28:51.511312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.179 qpair failed and we were unable to recover it. 00:28:17.179 [2024-04-25 03:28:51.511524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.179 [2024-04-25 03:28:51.511771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.179 [2024-04-25 03:28:51.511803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.179 qpair failed and we were unable to recover it. 00:28:17.179 [2024-04-25 03:28:51.511995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.179 [2024-04-25 03:28:51.512206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.179 [2024-04-25 03:28:51.512233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.179 qpair failed and we were unable to recover it. 
00:28:17.179 [2024-04-25 03:28:51.512478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.179 [2024-04-25 03:28:51.512733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.179 [2024-04-25 03:28:51.512761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.179 qpair failed and we were unable to recover it. 00:28:17.179 [2024-04-25 03:28:51.512943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.179 [2024-04-25 03:28:51.513125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.179 [2024-04-25 03:28:51.513154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.179 qpair failed and we were unable to recover it. 00:28:17.179 [2024-04-25 03:28:51.513402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.179 [2024-04-25 03:28:51.513614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.179 [2024-04-25 03:28:51.513650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.179 qpair failed and we were unable to recover it. 00:28:17.179 [2024-04-25 03:28:51.513844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.179 [2024-04-25 03:28:51.514029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.179 [2024-04-25 03:28:51.514058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.179 qpair failed and we were unable to recover it. 
00:28:17.179 [2024-04-25 03:28:51.514271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.179 [2024-04-25 03:28:51.514514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.179 [2024-04-25 03:28:51.514556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.179 qpair failed and we were unable to recover it. 00:28:17.179 [2024-04-25 03:28:51.514788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.179 [2024-04-25 03:28:51.514977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.179 [2024-04-25 03:28:51.515002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.179 qpair failed and we were unable to recover it. 00:28:17.179 [2024-04-25 03:28:51.515259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.179 [2024-04-25 03:28:51.515480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.179 [2024-04-25 03:28:51.515506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.179 qpair failed and we were unable to recover it. 00:28:17.179 [2024-04-25 03:28:51.515703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.179 [2024-04-25 03:28:51.515903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.179 [2024-04-25 03:28:51.515929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.179 qpair failed and we were unable to recover it. 
00:28:17.179 [2024-04-25 03:28:51.516127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.179 [2024-04-25 03:28:51.516370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.179 [2024-04-25 03:28:51.516402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.179 qpair failed and we were unable to recover it. 00:28:17.179 [2024-04-25 03:28:51.516619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.179 [2024-04-25 03:28:51.516844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.179 [2024-04-25 03:28:51.516868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.179 qpair failed and we were unable to recover it. 00:28:17.179 [2024-04-25 03:28:51.517095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.179 [2024-04-25 03:28:51.517315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.179 [2024-04-25 03:28:51.517340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.179 qpair failed and we were unable to recover it. 00:28:17.179 [2024-04-25 03:28:51.517558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.179 [2024-04-25 03:28:51.517771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.179 [2024-04-25 03:28:51.517799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.179 qpair failed and we were unable to recover it. 
00:28:17.179 [2024-04-25 03:28:51.518022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.179 [2024-04-25 03:28:51.518221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.179 [2024-04-25 03:28:51.518247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.179 qpair failed and we were unable to recover it. 00:28:17.179 [2024-04-25 03:28:51.518457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.179 [2024-04-25 03:28:51.518701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.179 [2024-04-25 03:28:51.518729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.179 qpair failed and we were unable to recover it. 00:28:17.179 [2024-04-25 03:28:51.518957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.179 [2024-04-25 03:28:51.519138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.179 [2024-04-25 03:28:51.519168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.179 qpair failed and we were unable to recover it. 00:28:17.179 [2024-04-25 03:28:51.519411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.179 [2024-04-25 03:28:51.519603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.179 [2024-04-25 03:28:51.519638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.179 qpair failed and we were unable to recover it. 
00:28:17.179 [2024-04-25 03:28:51.519870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.179 [2024-04-25 03:28:51.520089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.179 [2024-04-25 03:28:51.520118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.179 qpair failed and we were unable to recover it. 00:28:17.179 [2024-04-25 03:28:51.520427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.179 [2024-04-25 03:28:51.520668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.179 [2024-04-25 03:28:51.520696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.179 qpair failed and we were unable to recover it. 00:28:17.179 [2024-04-25 03:28:51.520916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.179 [2024-04-25 03:28:51.521121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.179 [2024-04-25 03:28:51.521145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.179 qpair failed and we were unable to recover it. 00:28:17.179 [2024-04-25 03:28:51.521327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.179 [2024-04-25 03:28:51.521494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.179 [2024-04-25 03:28:51.521519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.180 qpair failed and we were unable to recover it. 
00:28:17.180 [2024-04-25 03:28:51.521744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.180 [2024-04-25 03:28:51.521965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.180 [2024-04-25 03:28:51.521993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.180 qpair failed and we were unable to recover it. 00:28:17.180 [2024-04-25 03:28:51.522199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.180 [2024-04-25 03:28:51.522408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.180 [2024-04-25 03:28:51.522433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.180 qpair failed and we were unable to recover it. 00:28:17.180 [2024-04-25 03:28:51.522655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.180 [2024-04-25 03:28:51.522842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.180 [2024-04-25 03:28:51.522870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.180 qpair failed and we were unable to recover it. 00:28:17.180 [2024-04-25 03:28:51.523109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.180 [2024-04-25 03:28:51.523348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.180 [2024-04-25 03:28:51.523376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.180 qpair failed and we were unable to recover it. 
00:28:17.180 [2024-04-25 03:28:51.523589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.180 [2024-04-25 03:28:51.523764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.180 [2024-04-25 03:28:51.523789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.180 qpair failed and we were unable to recover it. 00:28:17.180 [2024-04-25 03:28:51.523955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.180 [2024-04-25 03:28:51.524149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.180 [2024-04-25 03:28:51.524174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.180 qpair failed and we were unable to recover it. 00:28:17.180 [2024-04-25 03:28:51.524392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.180 [2024-04-25 03:28:51.524655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.180 [2024-04-25 03:28:51.524683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.180 qpair failed and we were unable to recover it. 00:28:17.180 [2024-04-25 03:28:51.524926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.180 [2024-04-25 03:28:51.525141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.180 [2024-04-25 03:28:51.525170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.180 qpair failed and we were unable to recover it. 
00:28:17.180 [2024-04-25 03:28:51.525408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.180 [2024-04-25 03:28:51.525609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.180 [2024-04-25 03:28:51.525638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.180 qpair failed and we were unable to recover it. 00:28:17.180 [2024-04-25 03:28:51.525846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.180 [2024-04-25 03:28:51.526024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.180 [2024-04-25 03:28:51.526050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.180 qpair failed and we were unable to recover it. 00:28:17.180 [2024-04-25 03:28:51.526297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.180 [2024-04-25 03:28:51.526496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.180 [2024-04-25 03:28:51.526538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.180 qpair failed and we were unable to recover it. 00:28:17.180 [2024-04-25 03:28:51.526736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.180 [2024-04-25 03:28:51.526974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.180 [2024-04-25 03:28:51.526999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.180 qpair failed and we were unable to recover it. 
00:28:17.180 [2024-04-25 03:28:51.527205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.180 [2024-04-25 03:28:51.527446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.180 [2024-04-25 03:28:51.527488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.180 qpair failed and we were unable to recover it. 00:28:17.180 [2024-04-25 03:28:51.527691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.180 [2024-04-25 03:28:51.527882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.180 [2024-04-25 03:28:51.527915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.180 qpair failed and we were unable to recover it. 00:28:17.180 [2024-04-25 03:28:51.528103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.180 [2024-04-25 03:28:51.528356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.180 [2024-04-25 03:28:51.528384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.180 qpair failed and we were unable to recover it. 00:28:17.180 [2024-04-25 03:28:51.528598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.180 [2024-04-25 03:28:51.528857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.180 [2024-04-25 03:28:51.528885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.180 qpair failed and we were unable to recover it. 
00:28:17.180 [2024-04-25 03:28:51.529091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.180 [2024-04-25 03:28:51.529336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.180 [2024-04-25 03:28:51.529364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.180 qpair failed and we were unable to recover it. 00:28:17.180 [2024-04-25 03:28:51.529602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.180 [2024-04-25 03:28:51.529860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.180 [2024-04-25 03:28:51.529888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.180 qpair failed and we were unable to recover it. 00:28:17.180 [2024-04-25 03:28:51.530083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.180 [2024-04-25 03:28:51.530391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.180 [2024-04-25 03:28:51.530419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.180 qpair failed and we were unable to recover it. 00:28:17.180 [2024-04-25 03:28:51.530643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.180 [2024-04-25 03:28:51.530852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.180 [2024-04-25 03:28:51.530877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.180 qpair failed and we were unable to recover it. 
00:28:17.180 [2024-04-25 03:28:51.531078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.180 [2024-04-25 03:28:51.531252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.180 [2024-04-25 03:28:51.531277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.180 qpair failed and we were unable to recover it. 00:28:17.180 [2024-04-25 03:28:51.531498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.180 [2024-04-25 03:28:51.531672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.180 [2024-04-25 03:28:51.531698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.180 qpair failed and we were unable to recover it. 00:28:17.180 [2024-04-25 03:28:51.531948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.180 [2024-04-25 03:28:51.532172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.180 [2024-04-25 03:28:51.532198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.180 qpair failed and we were unable to recover it. 00:28:17.180 [2024-04-25 03:28:51.532396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.180 [2024-04-25 03:28:51.532571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.180 [2024-04-25 03:28:51.532598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.180 qpair failed and we were unable to recover it. 
00:28:17.180 [2024-04-25 03:28:51.532799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.180 [2024-04-25 03:28:51.532998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.180 [2024-04-25 03:28:51.533024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.180 qpair failed and we were unable to recover it.
00:28:17.180 [... identical cycle — connect() failed, errno = 111 (ECONNREFUSED); sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats continuously from 03:28:51.533270 through 03:28:51.573170 ...]
00:28:17.184 [2024-04-25 03:28:51.573353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.184 [2024-04-25 03:28:51.573566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.184 [2024-04-25 03:28:51.573592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.184 qpair failed and we were unable to recover it. 00:28:17.184 [2024-04-25 03:28:51.573843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.184 [2024-04-25 03:28:51.574060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.184 [2024-04-25 03:28:51.574087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.184 qpair failed and we were unable to recover it. 00:28:17.184 [2024-04-25 03:28:51.574343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.184 [2024-04-25 03:28:51.574533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.184 [2024-04-25 03:28:51.574561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.184 qpair failed and we were unable to recover it. 00:28:17.184 [2024-04-25 03:28:51.574773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.184 [2024-04-25 03:28:51.574970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.184 [2024-04-25 03:28:51.574994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.184 qpair failed and we were unable to recover it. 
00:28:17.184 [2024-04-25 03:28:51.575194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.184 [2024-04-25 03:28:51.575398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.184 [2024-04-25 03:28:51.575423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.184 qpair failed and we were unable to recover it. 00:28:17.184 [2024-04-25 03:28:51.575645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.184 [2024-04-25 03:28:51.575887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.184 [2024-04-25 03:28:51.575917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.184 qpair failed and we were unable to recover it. 00:28:17.184 [2024-04-25 03:28:51.576135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.184 [2024-04-25 03:28:51.576323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.184 [2024-04-25 03:28:51.576351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.184 qpair failed and we were unable to recover it. 00:28:17.184 [2024-04-25 03:28:51.576604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.184 [2024-04-25 03:28:51.576795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.184 [2024-04-25 03:28:51.576843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.184 qpair failed and we were unable to recover it. 
00:28:17.184 [2024-04-25 03:28:51.577063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.184 [2024-04-25 03:28:51.577269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.184 [2024-04-25 03:28:51.577294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.184 qpair failed and we were unable to recover it. 00:28:17.184 [2024-04-25 03:28:51.577468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.184 [2024-04-25 03:28:51.577656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.184 [2024-04-25 03:28:51.577681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.184 qpair failed and we were unable to recover it. 00:28:17.184 [2024-04-25 03:28:51.577938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.184 [2024-04-25 03:28:51.578180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.184 [2024-04-25 03:28:51.578222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.184 qpair failed and we were unable to recover it. 00:28:17.184 [2024-04-25 03:28:51.578408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.184 [2024-04-25 03:28:51.578598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.184 [2024-04-25 03:28:51.578624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.184 qpair failed and we were unable to recover it. 
00:28:17.184 [2024-04-25 03:28:51.578812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.184 [2024-04-25 03:28:51.579030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.184 [2024-04-25 03:28:51.579058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.184 qpair failed and we were unable to recover it. 00:28:17.184 [2024-04-25 03:28:51.579277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.184 [2024-04-25 03:28:51.579491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.184 [2024-04-25 03:28:51.579518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.184 qpair failed and we were unable to recover it. 00:28:17.184 [2024-04-25 03:28:51.579767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.184 [2024-04-25 03:28:51.580013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.184 [2024-04-25 03:28:51.580041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.184 qpair failed and we were unable to recover it. 00:28:17.184 [2024-04-25 03:28:51.580295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.184 [2024-04-25 03:28:51.580517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.184 [2024-04-25 03:28:51.580541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.184 qpair failed and we were unable to recover it. 
00:28:17.184 [2024-04-25 03:28:51.580776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.184 [2024-04-25 03:28:51.581010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.184 [2024-04-25 03:28:51.581039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.184 qpair failed and we were unable to recover it. 00:28:17.184 [2024-04-25 03:28:51.581262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.184 [2024-04-25 03:28:51.581490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.184 [2024-04-25 03:28:51.581518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.184 qpair failed and we were unable to recover it. 00:28:17.184 [2024-04-25 03:28:51.581731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.184 [2024-04-25 03:28:51.581960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.184 [2024-04-25 03:28:51.581984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.184 qpair failed and we were unable to recover it. 00:28:17.184 [2024-04-25 03:28:51.582177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.184 [2024-04-25 03:28:51.582358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.184 [2024-04-25 03:28:51.582385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.184 qpair failed and we were unable to recover it. 
00:28:17.184 [2024-04-25 03:28:51.582591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.184 [2024-04-25 03:28:51.582864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.184 [2024-04-25 03:28:51.582892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.184 qpair failed and we were unable to recover it. 00:28:17.184 [2024-04-25 03:28:51.583115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.184 [2024-04-25 03:28:51.583538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.184 [2024-04-25 03:28:51.583589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.184 qpair failed and we were unable to recover it. 00:28:17.184 [2024-04-25 03:28:51.583811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.184 [2024-04-25 03:28:51.584038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.184 [2024-04-25 03:28:51.584062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.184 qpair failed and we were unable to recover it. 00:28:17.184 [2024-04-25 03:28:51.584260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.184 [2024-04-25 03:28:51.584457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.184 [2024-04-25 03:28:51.584484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.184 qpair failed and we were unable to recover it. 
00:28:17.184 [2024-04-25 03:28:51.584659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.184 [2024-04-25 03:28:51.584855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.184 [2024-04-25 03:28:51.584882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.184 qpair failed and we were unable to recover it. 00:28:17.184 [2024-04-25 03:28:51.585067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.184 [2024-04-25 03:28:51.585307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.184 [2024-04-25 03:28:51.585334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.184 qpair failed and we were unable to recover it. 00:28:17.184 [2024-04-25 03:28:51.585587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.184 [2024-04-25 03:28:51.585797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.184 [2024-04-25 03:28:51.585823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.184 qpair failed and we were unable to recover it. 00:28:17.184 [2024-04-25 03:28:51.585995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.184 [2024-04-25 03:28:51.586204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.184 [2024-04-25 03:28:51.586249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.184 qpair failed and we were unable to recover it. 
00:28:17.185 [2024-04-25 03:28:51.586475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.185 [2024-04-25 03:28:51.586727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.185 [2024-04-25 03:28:51.586756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.185 qpair failed and we were unable to recover it. 00:28:17.185 [2024-04-25 03:28:51.586940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.185 [2024-04-25 03:28:51.587187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.185 [2024-04-25 03:28:51.587225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.185 qpair failed and we were unable to recover it. 00:28:17.185 [2024-04-25 03:28:51.587452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.185 [2024-04-25 03:28:51.587653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.185 [2024-04-25 03:28:51.587678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.185 qpair failed and we were unable to recover it. 00:28:17.185 [2024-04-25 03:28:51.587849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.185 [2024-04-25 03:28:51.588072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.185 [2024-04-25 03:28:51.588096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.185 qpair failed and we were unable to recover it. 
00:28:17.185 [2024-04-25 03:28:51.588356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.185 [2024-04-25 03:28:51.588607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.185 [2024-04-25 03:28:51.588640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.185 qpair failed and we were unable to recover it. 00:28:17.185 [2024-04-25 03:28:51.588860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.185 [2024-04-25 03:28:51.589071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.185 [2024-04-25 03:28:51.589100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.185 qpair failed and we were unable to recover it. 00:28:17.185 [2024-04-25 03:28:51.589500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.185 [2024-04-25 03:28:51.589790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.185 [2024-04-25 03:28:51.589816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.185 qpair failed and we were unable to recover it. 00:28:17.185 [2024-04-25 03:28:51.589980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.185 [2024-04-25 03:28:51.590259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.185 [2024-04-25 03:28:51.590313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.185 qpair failed and we were unable to recover it. 
00:28:17.185 [2024-04-25 03:28:51.590541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.185 [2024-04-25 03:28:51.590763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.185 [2024-04-25 03:28:51.590792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.185 qpair failed and we were unable to recover it. 00:28:17.185 [2024-04-25 03:28:51.591010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.185 [2024-04-25 03:28:51.591225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.185 [2024-04-25 03:28:51.591253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.185 qpair failed and we were unable to recover it. 00:28:17.185 [2024-04-25 03:28:51.591670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.185 [2024-04-25 03:28:51.591884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.185 [2024-04-25 03:28:51.591917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.185 qpair failed and we were unable to recover it. 00:28:17.185 [2024-04-25 03:28:51.592166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.185 [2024-04-25 03:28:51.592388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.185 [2024-04-25 03:28:51.592412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.185 qpair failed and we were unable to recover it. 
00:28:17.185 [2024-04-25 03:28:51.592665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.185 [2024-04-25 03:28:51.592866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.185 [2024-04-25 03:28:51.592892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.185 qpair failed and we were unable to recover it. 00:28:17.185 [2024-04-25 03:28:51.593114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.185 [2024-04-25 03:28:51.593308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.185 [2024-04-25 03:28:51.593334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.185 qpair failed and we were unable to recover it. 00:28:17.185 [2024-04-25 03:28:51.593508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.185 [2024-04-25 03:28:51.593715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.185 [2024-04-25 03:28:51.593756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.185 qpair failed and we were unable to recover it. 00:28:17.185 [2024-04-25 03:28:51.593972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.185 [2024-04-25 03:28:51.594203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.185 [2024-04-25 03:28:51.594228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.185 qpair failed and we were unable to recover it. 
00:28:17.185 [2024-04-25 03:28:51.594402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.185 [2024-04-25 03:28:51.594596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.185 [2024-04-25 03:28:51.594620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.185 qpair failed and we were unable to recover it. 00:28:17.185 [2024-04-25 03:28:51.594857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.185 [2024-04-25 03:28:51.595049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.185 [2024-04-25 03:28:51.595076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.185 qpair failed and we were unable to recover it. 00:28:17.185 [2024-04-25 03:28:51.595297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.185 [2024-04-25 03:28:51.595523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.185 [2024-04-25 03:28:51.595547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.185 qpair failed and we were unable to recover it. 00:28:17.185 [2024-04-25 03:28:51.595755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.185 [2024-04-25 03:28:51.595969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.185 [2024-04-25 03:28:51.595997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.185 qpair failed and we were unable to recover it. 
00:28:17.185 [2024-04-25 03:28:51.596249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.185 [2024-04-25 03:28:51.596470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.185 [2024-04-25 03:28:51.596499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.185 qpair failed and we were unable to recover it. 00:28:17.185 [2024-04-25 03:28:51.596720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.185 [2024-04-25 03:28:51.596920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.185 [2024-04-25 03:28:51.596945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.185 qpair failed and we were unable to recover it. 00:28:17.185 [2024-04-25 03:28:51.597178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.185 [2024-04-25 03:28:51.597417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.185 [2024-04-25 03:28:51.597445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.185 qpair failed and we were unable to recover it. 00:28:17.185 [2024-04-25 03:28:51.597667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.185 [2024-04-25 03:28:51.597916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.185 [2024-04-25 03:28:51.597940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.185 qpair failed and we were unable to recover it. 
00:28:17.185 [2024-04-25 03:28:51.598143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.185 [2024-04-25 03:28:51.598396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.185 [2024-04-25 03:28:51.598424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.185 qpair failed and we were unable to recover it. 00:28:17.185 [2024-04-25 03:28:51.598642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.185 [2024-04-25 03:28:51.598863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.185 [2024-04-25 03:28:51.598890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.185 qpair failed and we were unable to recover it. 00:28:17.185 [2024-04-25 03:28:51.599141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.185 [2024-04-25 03:28:51.599372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.185 [2024-04-25 03:28:51.599397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.185 qpair failed and we were unable to recover it. 00:28:17.185 [2024-04-25 03:28:51.599607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.185 [2024-04-25 03:28:51.599801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.185 [2024-04-25 03:28:51.599827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.185 qpair failed and we were unable to recover it. 
00:28:17.185 [2024-04-25 03:28:51.600047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.185 [2024-04-25 03:28:51.600270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.186 [2024-04-25 03:28:51.600297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.186 qpair failed and we were unable to recover it.
[log condensed: the same sequence — posix_sock_create connect() failures with errno = 111, followed by nvme_tcp_qpair_connect_sock reporting a sock connection error on tqpair=0x7f2a80000b90 (addr=10.0.0.2, port=4420) and "qpair failed and we were unable to recover it." — repeats continuously from 03:28:51.600047 through 03:28:51.640736]
00:28:17.189 [2024-04-25 03:28:51.640956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.189 [2024-04-25 03:28:51.641173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.189 [2024-04-25 03:28:51.641197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.189 qpair failed and we were unable to recover it. 00:28:17.189 [2024-04-25 03:28:51.641573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.189 [2024-04-25 03:28:51.641846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.189 [2024-04-25 03:28:51.641873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.189 qpair failed and we were unable to recover it. 00:28:17.189 [2024-04-25 03:28:51.642070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.189 [2024-04-25 03:28:51.642313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.189 [2024-04-25 03:28:51.642338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.189 qpair failed and we were unable to recover it. 00:28:17.189 [2024-04-25 03:28:51.642523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.189 [2024-04-25 03:28:51.642750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.189 [2024-04-25 03:28:51.642778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.189 qpair failed and we were unable to recover it. 
00:28:17.189 [2024-04-25 03:28:51.642982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.189 [2024-04-25 03:28:51.643187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.189 [2024-04-25 03:28:51.643212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.189 qpair failed and we were unable to recover it. 00:28:17.189 [2024-04-25 03:28:51.643410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.189 [2024-04-25 03:28:51.643607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.189 [2024-04-25 03:28:51.643640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.189 qpair failed and we were unable to recover it. 00:28:17.189 [2024-04-25 03:28:51.643811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.189 [2024-04-25 03:28:51.644055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.189 [2024-04-25 03:28:51.644082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.189 qpair failed and we were unable to recover it. 00:28:17.189 [2024-04-25 03:28:51.644296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.189 [2024-04-25 03:28:51.644511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.189 [2024-04-25 03:28:51.644538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.189 qpair failed and we were unable to recover it. 
00:28:17.189 [2024-04-25 03:28:51.644760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.189 [2024-04-25 03:28:51.644935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.189 [2024-04-25 03:28:51.644960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.189 qpair failed and we were unable to recover it. 00:28:17.189 [2024-04-25 03:28:51.645176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.189 [2024-04-25 03:28:51.645405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.189 [2024-04-25 03:28:51.645432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.189 qpair failed and we were unable to recover it. 00:28:17.189 [2024-04-25 03:28:51.645653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.189 [2024-04-25 03:28:51.645872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.189 [2024-04-25 03:28:51.645897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.189 qpair failed and we were unable to recover it. 00:28:17.189 [2024-04-25 03:28:51.646094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.189 [2024-04-25 03:28:51.646324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.189 [2024-04-25 03:28:51.646351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.189 qpair failed and we were unable to recover it. 
00:28:17.189 [2024-04-25 03:28:51.646610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.189 [2024-04-25 03:28:51.646813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.189 [2024-04-25 03:28:51.646841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.189 qpair failed and we were unable to recover it. 00:28:17.189 [2024-04-25 03:28:51.647081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.189 [2024-04-25 03:28:51.647302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.189 [2024-04-25 03:28:51.647329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.189 qpair failed and we were unable to recover it. 00:28:17.189 [2024-04-25 03:28:51.647571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.189 [2024-04-25 03:28:51.647762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.189 [2024-04-25 03:28:51.647796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.189 qpair failed and we were unable to recover it. 00:28:17.189 [2024-04-25 03:28:51.648044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.189 [2024-04-25 03:28:51.648222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.189 [2024-04-25 03:28:51.648249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.189 qpair failed and we were unable to recover it. 
00:28:17.189 [2024-04-25 03:28:51.648466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.189 [2024-04-25 03:28:51.648718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.189 [2024-04-25 03:28:51.648747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.189 qpair failed and we were unable to recover it. 00:28:17.189 [2024-04-25 03:28:51.648971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.189 [2024-04-25 03:28:51.649149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.189 [2024-04-25 03:28:51.649174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.189 qpair failed and we were unable to recover it. 00:28:17.189 [2024-04-25 03:28:51.649382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.189 [2024-04-25 03:28:51.649601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.189 [2024-04-25 03:28:51.649634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.189 qpair failed and we were unable to recover it. 00:28:17.189 [2024-04-25 03:28:51.649874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.189 [2024-04-25 03:28:51.650078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.189 [2024-04-25 03:28:51.650103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.189 qpair failed and we were unable to recover it. 
00:28:17.189 [2024-04-25 03:28:51.650295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.189 [2024-04-25 03:28:51.650489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.189 [2024-04-25 03:28:51.650513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.189 qpair failed and we were unable to recover it. 00:28:17.189 [2024-04-25 03:28:51.650689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.189 [2024-04-25 03:28:51.650859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.189 [2024-04-25 03:28:51.650884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.189 qpair failed and we were unable to recover it. 00:28:17.189 [2024-04-25 03:28:51.651194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.189 [2024-04-25 03:28:51.651401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.189 [2024-04-25 03:28:51.651426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.189 qpair failed and we were unable to recover it. 00:28:17.189 [2024-04-25 03:28:51.651590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.189 [2024-04-25 03:28:51.651769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.189 [2024-04-25 03:28:51.651795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.189 qpair failed and we were unable to recover it. 
00:28:17.190 [2024-04-25 03:28:51.651992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.190 [2024-04-25 03:28:51.652208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.190 [2024-04-25 03:28:51.652240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.190 qpair failed and we were unable to recover it. 00:28:17.190 [2024-04-25 03:28:51.652461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.190 [2024-04-25 03:28:51.652679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.190 [2024-04-25 03:28:51.652707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.190 qpair failed and we were unable to recover it. 00:28:17.190 [2024-04-25 03:28:51.652931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.190 [2024-04-25 03:28:51.653138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.190 [2024-04-25 03:28:51.653166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.190 qpair failed and we were unable to recover it. 00:28:17.190 [2024-04-25 03:28:51.653413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.190 [2024-04-25 03:28:51.653618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.190 [2024-04-25 03:28:51.653649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.190 qpair failed and we were unable to recover it. 
00:28:17.190 [2024-04-25 03:28:51.653883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.190 [2024-04-25 03:28:51.654076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.190 [2024-04-25 03:28:51.654101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.190 qpair failed and we were unable to recover it. 00:28:17.190 [2024-04-25 03:28:51.654325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.190 [2024-04-25 03:28:51.654537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.190 [2024-04-25 03:28:51.654565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.190 qpair failed and we were unable to recover it. 00:28:17.190 [2024-04-25 03:28:51.654793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.190 [2024-04-25 03:28:51.655031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.190 [2024-04-25 03:28:51.655059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.190 qpair failed and we were unable to recover it. 00:28:17.190 [2024-04-25 03:28:51.655312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.190 [2024-04-25 03:28:51.655509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.190 [2024-04-25 03:28:51.655534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.190 qpair failed and we were unable to recover it. 
00:28:17.463 [2024-04-25 03:28:51.655699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.463 [2024-04-25 03:28:51.655896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.463 [2024-04-25 03:28:51.655938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.463 qpair failed and we were unable to recover it. 00:28:17.463 [2024-04-25 03:28:51.656293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.463 [2024-04-25 03:28:51.656526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.463 [2024-04-25 03:28:51.656554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.463 qpair failed and we were unable to recover it. 00:28:17.463 [2024-04-25 03:28:51.656813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.463 [2024-04-25 03:28:51.657071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.463 [2024-04-25 03:28:51.657104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.463 qpair failed and we were unable to recover it. 00:28:17.463 [2024-04-25 03:28:51.657327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.463 [2024-04-25 03:28:51.657523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.463 [2024-04-25 03:28:51.657547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.463 qpair failed and we were unable to recover it. 
00:28:17.463 [2024-04-25 03:28:51.657769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.463 [2024-04-25 03:28:51.658020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.463 [2024-04-25 03:28:51.658049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.463 qpair failed and we were unable to recover it. 00:28:17.463 [2024-04-25 03:28:51.658259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.463 [2024-04-25 03:28:51.658479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.463 [2024-04-25 03:28:51.658523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.463 qpair failed and we were unable to recover it. 00:28:17.463 [2024-04-25 03:28:51.658695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.463 [2024-04-25 03:28:51.658896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.463 [2024-04-25 03:28:51.658926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.463 qpair failed and we were unable to recover it. 00:28:17.463 [2024-04-25 03:28:51.659127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.463 [2024-04-25 03:28:51.659359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.463 [2024-04-25 03:28:51.659386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.463 qpair failed and we were unable to recover it. 
00:28:17.463 [2024-04-25 03:28:51.659585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.463 [2024-04-25 03:28:51.659809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.463 [2024-04-25 03:28:51.659839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.463 qpair failed and we were unable to recover it. 00:28:17.463 [2024-04-25 03:28:51.660039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.463 [2024-04-25 03:28:51.660247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.464 [2024-04-25 03:28:51.660273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.464 qpair failed and we were unable to recover it. 00:28:17.464 [2024-04-25 03:28:51.660504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.464 [2024-04-25 03:28:51.660728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.464 [2024-04-25 03:28:51.660757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.464 qpair failed and we were unable to recover it. 00:28:17.464 [2024-04-25 03:28:51.661003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.464 [2024-04-25 03:28:51.661185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.464 [2024-04-25 03:28:51.661212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.464 qpair failed and we were unable to recover it. 
00:28:17.464 [2024-04-25 03:28:51.661395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.464 [2024-04-25 03:28:51.661569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.464 [2024-04-25 03:28:51.661599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.464 qpair failed and we were unable to recover it. 00:28:17.464 [2024-04-25 03:28:51.661795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.464 [2024-04-25 03:28:51.662026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.464 [2024-04-25 03:28:51.662056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.464 qpair failed and we were unable to recover it. 00:28:17.464 [2024-04-25 03:28:51.662278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.464 [2024-04-25 03:28:51.662496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.464 [2024-04-25 03:28:51.662524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.464 qpair failed and we were unable to recover it. 00:28:17.464 [2024-04-25 03:28:51.662741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.464 [2024-04-25 03:28:51.662923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.464 [2024-04-25 03:28:51.662951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.464 qpair failed and we were unable to recover it. 
00:28:17.464 [2024-04-25 03:28:51.663165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.464 [2024-04-25 03:28:51.663387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.464 [2024-04-25 03:28:51.663411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.464 qpair failed and we were unable to recover it. 00:28:17.464 [2024-04-25 03:28:51.663650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.464 [2024-04-25 03:28:51.663849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.464 [2024-04-25 03:28:51.663890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.464 qpair failed and we were unable to recover it. 00:28:17.464 [2024-04-25 03:28:51.664120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.464 [2024-04-25 03:28:51.664332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.464 [2024-04-25 03:28:51.664359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.464 qpair failed and we were unable to recover it. 00:28:17.464 [2024-04-25 03:28:51.664574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.464 [2024-04-25 03:28:51.664797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.464 [2024-04-25 03:28:51.664826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.464 qpair failed and we were unable to recover it. 
00:28:17.464 [2024-04-25 03:28:51.665049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.464 [2024-04-25 03:28:51.665241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.464 [2024-04-25 03:28:51.665266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.464 qpair failed and we were unable to recover it. 00:28:17.464 [2024-04-25 03:28:51.665550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.464 [2024-04-25 03:28:51.665797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.464 [2024-04-25 03:28:51.665826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.464 qpair failed and we were unable to recover it. 00:28:17.464 [2024-04-25 03:28:51.666075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.464 [2024-04-25 03:28:51.666425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.464 [2024-04-25 03:28:51.666478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.464 qpair failed and we were unable to recover it. 00:28:17.464 [2024-04-25 03:28:51.666684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.464 [2024-04-25 03:28:51.666886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.464 [2024-04-25 03:28:51.666916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.464 qpair failed and we were unable to recover it. 
00:28:17.464 [2024-04-25 03:28:51.667143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.464 [2024-04-25 03:28:51.667353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.464 [2024-04-25 03:28:51.667380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.464 qpair failed and we were unable to recover it. 00:28:17.464 [2024-04-25 03:28:51.667596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.464 [2024-04-25 03:28:51.667791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.464 [2024-04-25 03:28:51.667819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.464 qpair failed and we were unable to recover it. 00:28:17.464 [2024-04-25 03:28:51.668071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.464 [2024-04-25 03:28:51.668278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.464 [2024-04-25 03:28:51.668306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.464 qpair failed and we were unable to recover it. 00:28:17.464 [2024-04-25 03:28:51.668548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.464 [2024-04-25 03:28:51.668792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.464 [2024-04-25 03:28:51.668820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.464 qpair failed and we were unable to recover it. 
00:28:17.464 [2024-04-25 03:28:51.669061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.464 [2024-04-25 03:28:51.669284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.464 [2024-04-25 03:28:51.669311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.464 qpair failed and we were unable to recover it. 00:28:17.464 [2024-04-25 03:28:51.669520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.464 [2024-04-25 03:28:51.669761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.464 [2024-04-25 03:28:51.669790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.464 qpair failed and we were unable to recover it. 00:28:17.464 [2024-04-25 03:28:51.669978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.464 [2024-04-25 03:28:51.670178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.464 [2024-04-25 03:28:51.670203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.464 qpair failed and we were unable to recover it. 00:28:17.464 [2024-04-25 03:28:51.670372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.464 [2024-04-25 03:28:51.670587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.464 [2024-04-25 03:28:51.670614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.464 qpair failed and we were unable to recover it. 
00:28:17.464 [2024-04-25 03:28:51.670868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.464 [2024-04-25 03:28:51.671116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.464 [2024-04-25 03:28:51.671143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.464 qpair failed and we were unable to recover it. 00:28:17.464 [2024-04-25 03:28:51.671366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.464 [2024-04-25 03:28:51.671611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.464 [2024-04-25 03:28:51.671644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.464 qpair failed and we were unable to recover it. 00:28:17.464 [2024-04-25 03:28:51.671900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.464 [2024-04-25 03:28:51.672090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.464 [2024-04-25 03:28:51.672118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.464 qpair failed and we were unable to recover it. 00:28:17.464 [2024-04-25 03:28:51.672332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.464 [2024-04-25 03:28:51.672508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.464 [2024-04-25 03:28:51.672535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.464 qpair failed and we were unable to recover it. 
00:28:17.464 [2024-04-25 03:28:51.672731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.464 [2024-04-25 03:28:51.672922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.464 [2024-04-25 03:28:51.672957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.464 qpair failed and we were unable to recover it. 00:28:17.464 [2024-04-25 03:28:51.673155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.464 [2024-04-25 03:28:51.673316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.464 [2024-04-25 03:28:51.673341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.464 qpair failed and we were unable to recover it. 00:28:17.465 [2024-04-25 03:28:51.673537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.465 [2024-04-25 03:28:51.673771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.465 [2024-04-25 03:28:51.673796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.465 qpair failed and we were unable to recover it. 00:28:17.465 [2024-04-25 03:28:51.674000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.465 [2024-04-25 03:28:51.674213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.465 [2024-04-25 03:28:51.674243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.465 qpair failed and we were unable to recover it. 
00:28:17.465 [2024-04-25 03:28:51.674462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.465 [2024-04-25 03:28:51.674684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.465 [2024-04-25 03:28:51.674712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.465 qpair failed and we were unable to recover it. 00:28:17.465 [2024-04-25 03:28:51.674955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.465 [2024-04-25 03:28:51.675174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.465 [2024-04-25 03:28:51.675198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.465 qpair failed and we were unable to recover it. 00:28:17.465 [2024-04-25 03:28:51.675372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.465 [2024-04-25 03:28:51.675586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.465 [2024-04-25 03:28:51.675616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.465 qpair failed and we were unable to recover it. 00:28:17.465 [2024-04-25 03:28:51.675890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.465 [2024-04-25 03:28:51.676213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.465 [2024-04-25 03:28:51.676263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.465 qpair failed and we were unable to recover it. 
00:28:17.465 [2024-04-25 03:28:51.676507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.465 [2024-04-25 03:28:51.676710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.465 [2024-04-25 03:28:51.676736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.465 qpair failed and we were unable to recover it. 00:28:17.465 [2024-04-25 03:28:51.676960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.465 [2024-04-25 03:28:51.677174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.465 [2024-04-25 03:28:51.677203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.465 qpair failed and we were unable to recover it. 00:28:17.465 [2024-04-25 03:28:51.677419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.465 [2024-04-25 03:28:51.677673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.465 [2024-04-25 03:28:51.677701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.465 qpair failed and we were unable to recover it. 00:28:17.465 [2024-04-25 03:28:51.677938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.465 [2024-04-25 03:28:51.678162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.465 [2024-04-25 03:28:51.678187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.465 qpair failed and we were unable to recover it. 
00:28:17.465 [2024-04-25 03:28:51.678379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.465 [2024-04-25 03:28:51.678596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.465 [2024-04-25 03:28:51.678621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.465 qpair failed and we were unable to recover it. 00:28:17.465 [2024-04-25 03:28:51.678868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.465 [2024-04-25 03:28:51.679284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.465 [2024-04-25 03:28:51.679341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.465 qpair failed and we were unable to recover it. 00:28:17.465 [2024-04-25 03:28:51.679528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.465 [2024-04-25 03:28:51.679783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.465 [2024-04-25 03:28:51.679809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.465 qpair failed and we were unable to recover it. 00:28:17.465 [2024-04-25 03:28:51.680030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.465 [2024-04-25 03:28:51.680220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.465 [2024-04-25 03:28:51.680244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.465 qpair failed and we were unable to recover it. 
00:28:17.465 [2024-04-25 03:28:51.680418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.465 [2024-04-25 03:28:51.680674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.465 [2024-04-25 03:28:51.680702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.465 qpair failed and we were unable to recover it. 00:28:17.465 [2024-04-25 03:28:51.680956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.465 [2024-04-25 03:28:51.681171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.465 [2024-04-25 03:28:51.681198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.465 qpair failed and we were unable to recover it. 00:28:17.465 [2024-04-25 03:28:51.681401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.465 [2024-04-25 03:28:51.681618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.465 [2024-04-25 03:28:51.681654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.465 qpair failed and we were unable to recover it. 00:28:17.465 [2024-04-25 03:28:51.681902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.465 [2024-04-25 03:28:51.682080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.465 [2024-04-25 03:28:51.682107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.465 qpair failed and we were unable to recover it. 
00:28:17.465 [2024-04-25 03:28:51.682322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.465 [2024-04-25 03:28:51.682510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.465 [2024-04-25 03:28:51.682535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.465 qpair failed and we were unable to recover it. 00:28:17.465 [2024-04-25 03:28:51.682842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.465 [2024-04-25 03:28:51.683064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.465 [2024-04-25 03:28:51.683088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.465 qpair failed and we were unable to recover it. 00:28:17.465 [2024-04-25 03:28:51.683307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.465 [2024-04-25 03:28:51.683493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.465 [2024-04-25 03:28:51.683519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.465 qpair failed and we were unable to recover it. 00:28:17.465 [2024-04-25 03:28:51.683778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.465 [2024-04-25 03:28:51.683955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.465 [2024-04-25 03:28:51.683979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.465 qpair failed and we were unable to recover it. 
00:28:17.465 [2024-04-25 03:28:51.684157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.465 [2024-04-25 03:28:51.684355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.465 [2024-04-25 03:28:51.684380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.465 qpair failed and we were unable to recover it. 00:28:17.465 [2024-04-25 03:28:51.684577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.465 [2024-04-25 03:28:51.684775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.465 [2024-04-25 03:28:51.684800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.465 qpair failed and we were unable to recover it. 00:28:17.465 [2024-04-25 03:28:51.684994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.465 [2024-04-25 03:28:51.685187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.465 [2024-04-25 03:28:51.685213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.465 qpair failed and we were unable to recover it. 00:28:17.465 [2024-04-25 03:28:51.685445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.465 [2024-04-25 03:28:51.685646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.465 [2024-04-25 03:28:51.685679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.465 qpair failed and we were unable to recover it. 
00:28:17.465 [2024-04-25 03:28:51.685875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.465 [2024-04-25 03:28:51.686075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.465 [2024-04-25 03:28:51.686101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.465 qpair failed and we were unable to recover it. 00:28:17.465 [2024-04-25 03:28:51.686300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.465 [2024-04-25 03:28:51.686499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.465 [2024-04-25 03:28:51.686523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.465 qpair failed and we were unable to recover it. 00:28:17.465 [2024-04-25 03:28:51.686722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.465 [2024-04-25 03:28:51.686931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.465 [2024-04-25 03:28:51.686960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.466 qpair failed and we were unable to recover it. 00:28:17.466 [2024-04-25 03:28:51.687186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.466 [2024-04-25 03:28:51.687367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.466 [2024-04-25 03:28:51.687396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.466 qpair failed and we were unable to recover it. 
00:28:17.466 [2024-04-25 03:28:51.687620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.466 [2024-04-25 03:28:51.687799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.466 [2024-04-25 03:28:51.687824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.466 qpair failed and we were unable to recover it. 00:28:17.466 [2024-04-25 03:28:51.688019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.466 [2024-04-25 03:28:51.688256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.466 [2024-04-25 03:28:51.688280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.466 qpair failed and we were unable to recover it. 00:28:17.466 [2024-04-25 03:28:51.688524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.466 [2024-04-25 03:28:51.688708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.466 [2024-04-25 03:28:51.688736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.466 qpair failed and we were unable to recover it. 00:28:17.466 [2024-04-25 03:28:51.688979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.466 [2024-04-25 03:28:51.689190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.466 [2024-04-25 03:28:51.689218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.466 qpair failed and we were unable to recover it. 
00:28:17.466 [2024-04-25 03:28:51.689424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.466 [2024-04-25 03:28:51.689647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.466 [2024-04-25 03:28:51.689677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.466 qpair failed and we were unable to recover it. 00:28:17.466 [2024-04-25 03:28:51.689899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.466 [2024-04-25 03:28:51.690120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.466 [2024-04-25 03:28:51.690147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.466 qpair failed and we were unable to recover it. 00:28:17.466 [2024-04-25 03:28:51.690364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.466 [2024-04-25 03:28:51.690581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.466 [2024-04-25 03:28:51.690608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.466 qpair failed and we were unable to recover it. 00:28:17.466 [2024-04-25 03:28:51.690848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.466 [2024-04-25 03:28:51.691028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.466 [2024-04-25 03:28:51.691056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.466 qpair failed and we were unable to recover it. 
00:28:17.466 [2024-04-25 03:28:51.691289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.466 [2024-04-25 03:28:51.691485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.466 [2024-04-25 03:28:51.691510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.466 qpair failed and we were unable to recover it. 00:28:17.466 [2024-04-25 03:28:51.691689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.466 [2024-04-25 03:28:51.691888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.466 [2024-04-25 03:28:51.691913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.466 qpair failed and we were unable to recover it. 00:28:17.466 [2024-04-25 03:28:51.692122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.466 [2024-04-25 03:28:51.692361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.466 [2024-04-25 03:28:51.692388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.466 qpair failed and we were unable to recover it. 00:28:17.466 [2024-04-25 03:28:51.692639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.466 [2024-04-25 03:28:51.692883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.466 [2024-04-25 03:28:51.692910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.466 qpair failed and we were unable to recover it. 
00:28:17.466 [2024-04-25 03:28:51.693131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.466 [2024-04-25 03:28:51.693334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.466 [2024-04-25 03:28:51.693359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.466 qpair failed and we were unable to recover it. 00:28:17.466 [2024-04-25 03:28:51.693564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.466 [2024-04-25 03:28:51.693760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.466 [2024-04-25 03:28:51.693803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.466 qpair failed and we were unable to recover it. 00:28:17.466 [2024-04-25 03:28:51.694051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.466 [2024-04-25 03:28:51.694218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.466 [2024-04-25 03:28:51.694243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.466 qpair failed and we were unable to recover it. 00:28:17.466 [2024-04-25 03:28:51.694497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.466 [2024-04-25 03:28:51.694745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.466 [2024-04-25 03:28:51.694773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.466 qpair failed and we were unable to recover it. 
00:28:17.466 [2024-04-25 03:28:51.694990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.466 [2024-04-25 03:28:51.695212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.466 [2024-04-25 03:28:51.695237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.466 qpair failed and we were unable to recover it. 00:28:17.466 [2024-04-25 03:28:51.695458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.466 [2024-04-25 03:28:51.695680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.466 [2024-04-25 03:28:51.695708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.466 qpair failed and we were unable to recover it. 00:28:17.466 [2024-04-25 03:28:51.695923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.466 [2024-04-25 03:28:51.696307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.466 [2024-04-25 03:28:51.696361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.466 qpair failed and we were unable to recover it. 00:28:17.466 [2024-04-25 03:28:51.696589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.466 [2024-04-25 03:28:51.696840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.466 [2024-04-25 03:28:51.696868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.466 qpair failed and we were unable to recover it. 
00:28:17.466 [2024-04-25 03:28:51.697093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.466 [2024-04-25 03:28:51.697335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.466 [2024-04-25 03:28:51.697363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.466 qpair failed and we were unable to recover it. 00:28:17.466 [2024-04-25 03:28:51.697583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.466 [2024-04-25 03:28:51.697803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.466 [2024-04-25 03:28:51.697831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.466 qpair failed and we were unable to recover it. 00:28:17.466 [2024-04-25 03:28:51.698077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.466 [2024-04-25 03:28:51.698259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.466 [2024-04-25 03:28:51.698285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.466 qpair failed and we were unable to recover it. 00:28:17.466 [2024-04-25 03:28:51.698454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.466 [2024-04-25 03:28:51.698651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.466 [2024-04-25 03:28:51.698676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.466 qpair failed and we were unable to recover it. 
00:28:17.466 [2024-04-25 03:28:51.698876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.466 [2024-04-25 03:28:51.699102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.466 [2024-04-25 03:28:51.699129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.466 qpair failed and we were unable to recover it. 00:28:17.466 [2024-04-25 03:28:51.699357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.466 [2024-04-25 03:28:51.699565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.466 [2024-04-25 03:28:51.699592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.466 qpair failed and we were unable to recover it. 00:28:17.466 [2024-04-25 03:28:51.699840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.466 [2024-04-25 03:28:51.700062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.466 [2024-04-25 03:28:51.700102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.466 qpair failed and we were unable to recover it. 00:28:17.466 [2024-04-25 03:28:51.700294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.467 [2024-04-25 03:28:51.700467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.467 [2024-04-25 03:28:51.700491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.467 qpair failed and we were unable to recover it. 
00:28:17.467 [2024-04-25 03:28:51.700685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.467 [2024-04-25 03:28:51.700902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.467 [2024-04-25 03:28:51.700930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.467 qpair failed and we were unable to recover it. 00:28:17.467 [2024-04-25 03:28:51.701144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.467 [2024-04-25 03:28:51.701362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.467 [2024-04-25 03:28:51.701387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.467 qpair failed and we were unable to recover it. 00:28:17.467 [2024-04-25 03:28:51.701585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.467 [2024-04-25 03:28:51.701811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.467 [2024-04-25 03:28:51.701842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.467 qpair failed and we were unable to recover it. 00:28:17.467 [2024-04-25 03:28:51.702057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.467 [2024-04-25 03:28:51.702227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.467 [2024-04-25 03:28:51.702253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.467 qpair failed and we were unable to recover it. 
00:28:17.467 [2024-04-25 03:28:51.702442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.467 [2024-04-25 03:28:51.702694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.467 [2024-04-25 03:28:51.702719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.467 qpair failed and we were unable to recover it. 00:28:17.467 [2024-04-25 03:28:51.702893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.467 [2024-04-25 03:28:51.703092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.467 [2024-04-25 03:28:51.703117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.467 qpair failed and we were unable to recover it. 00:28:17.467 [2024-04-25 03:28:51.703313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.467 [2024-04-25 03:28:51.703514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.467 [2024-04-25 03:28:51.703538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.467 qpair failed and we were unable to recover it. 00:28:17.467 [2024-04-25 03:28:51.703711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.467 [2024-04-25 03:28:51.703917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.467 [2024-04-25 03:28:51.703943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.467 qpair failed and we were unable to recover it. 
00:28:17.467 [2024-04-25 03:28:51.704142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.467 [2024-04-25 03:28:51.704343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.467 [2024-04-25 03:28:51.704368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.467 qpair failed and we were unable to recover it.
00:28:17.467 [2024-04-25 03:28:51.704594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.467 [2024-04-25 03:28:51.704785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.467 [2024-04-25 03:28:51.704811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.467 qpair failed and we were unable to recover it.
00:28:17.467 [2024-04-25 03:28:51.705003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.467 [2024-04-25 03:28:51.705201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.467 [2024-04-25 03:28:51.705226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.467 qpair failed and we were unable to recover it.
00:28:17.467 [2024-04-25 03:28:51.705393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.467 [2024-04-25 03:28:51.705642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.467 [2024-04-25 03:28:51.705685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.467 qpair failed and we were unable to recover it.
00:28:17.467 [2024-04-25 03:28:51.705889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.467 [2024-04-25 03:28:51.706105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.467 [2024-04-25 03:28:51.706130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.467 qpair failed and we were unable to recover it.
00:28:17.467 [2024-04-25 03:28:51.706325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.467 [2024-04-25 03:28:51.706526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.467 [2024-04-25 03:28:51.706550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.467 qpair failed and we were unable to recover it.
00:28:17.467 [2024-04-25 03:28:51.706724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.467 [2024-04-25 03:28:51.706926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.467 [2024-04-25 03:28:51.706951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.467 qpair failed and we were unable to recover it.
00:28:17.467 [2024-04-25 03:28:51.707122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.467 [2024-04-25 03:28:51.707323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.467 [2024-04-25 03:28:51.707348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.467 qpair failed and we were unable to recover it.
00:28:17.467 [2024-04-25 03:28:51.707544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.467 [2024-04-25 03:28:51.707743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.467 [2024-04-25 03:28:51.707768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.467 qpair failed and we were unable to recover it.
00:28:17.467 [2024-04-25 03:28:51.707935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.467 [2024-04-25 03:28:51.708141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.467 [2024-04-25 03:28:51.708166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.467 qpair failed and we were unable to recover it.
00:28:17.467 [2024-04-25 03:28:51.708371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.467 [2024-04-25 03:28:51.708530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.467 [2024-04-25 03:28:51.708556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.467 qpair failed and we were unable to recover it.
00:28:17.467 [2024-04-25 03:28:51.708738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.467 [2024-04-25 03:28:51.708959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.467 [2024-04-25 03:28:51.708983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.467 qpair failed and we were unable to recover it.
00:28:17.467 [2024-04-25 03:28:51.709183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.467 [2024-04-25 03:28:51.709379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.467 [2024-04-25 03:28:51.709404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.467 qpair failed and we were unable to recover it.
00:28:17.467 [2024-04-25 03:28:51.709597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.467 [2024-04-25 03:28:51.709839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.467 [2024-04-25 03:28:51.709864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.467 qpair failed and we were unable to recover it.
00:28:17.467 [2024-04-25 03:28:51.710066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.467 [2024-04-25 03:28:51.710262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.467 [2024-04-25 03:28:51.710288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.467 qpair failed and we were unable to recover it.
00:28:17.467 [2024-04-25 03:28:51.710506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.467 [2024-04-25 03:28:51.710719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.467 [2024-04-25 03:28:51.710745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.467 qpair failed and we were unable to recover it.
00:28:17.467 [2024-04-25 03:28:51.710920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.467 [2024-04-25 03:28:51.711114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.467 [2024-04-25 03:28:51.711139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.467 qpair failed and we were unable to recover it.
00:28:17.467 [2024-04-25 03:28:51.711338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.467 [2024-04-25 03:28:51.711495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.467 [2024-04-25 03:28:51.711519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.467 qpair failed and we were unable to recover it.
00:28:17.467 [2024-04-25 03:28:51.711713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.467 [2024-04-25 03:28:51.711910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.467 [2024-04-25 03:28:51.711934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.467 qpair failed and we were unable to recover it.
00:28:17.467 [2024-04-25 03:28:51.712132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.467 [2024-04-25 03:28:51.712298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.467 [2024-04-25 03:28:51.712323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.467 qpair failed and we were unable to recover it.
00:28:17.468 [2024-04-25 03:28:51.712529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.468 [2024-04-25 03:28:51.712730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.468 [2024-04-25 03:28:51.712757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.468 qpair failed and we were unable to recover it.
00:28:17.468 [2024-04-25 03:28:51.712952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.468 [2024-04-25 03:28:51.713154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.468 [2024-04-25 03:28:51.713179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.468 qpair failed and we were unable to recover it.
00:28:17.468 [2024-04-25 03:28:51.713375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.468 [2024-04-25 03:28:51.713543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.468 [2024-04-25 03:28:51.713567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.468 qpair failed and we were unable to recover it.
00:28:17.468 [2024-04-25 03:28:51.713732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.468 [2024-04-25 03:28:51.713928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.468 [2024-04-25 03:28:51.713953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.468 qpair failed and we were unable to recover it.
00:28:17.468 [2024-04-25 03:28:51.714164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.468 [2024-04-25 03:28:51.714357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.468 [2024-04-25 03:28:51.714381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.468 qpair failed and we were unable to recover it.
00:28:17.468 [2024-04-25 03:28:51.714614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.468 [2024-04-25 03:28:51.714797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.468 [2024-04-25 03:28:51.714822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.468 qpair failed and we were unable to recover it.
00:28:17.468 [2024-04-25 03:28:51.714993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.468 [2024-04-25 03:28:51.715215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.468 [2024-04-25 03:28:51.715240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.468 qpair failed and we were unable to recover it.
00:28:17.468 [2024-04-25 03:28:51.715443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.468 [2024-04-25 03:28:51.715644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.468 [2024-04-25 03:28:51.715670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.468 qpair failed and we were unable to recover it.
00:28:17.468 [2024-04-25 03:28:51.715861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.468 [2024-04-25 03:28:51.716081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.468 [2024-04-25 03:28:51.716107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.468 qpair failed and we were unable to recover it.
00:28:17.468 [2024-04-25 03:28:51.716267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.468 [2024-04-25 03:28:51.716466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.468 [2024-04-25 03:28:51.716493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.468 qpair failed and we were unable to recover it.
00:28:17.468 [2024-04-25 03:28:51.716709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.468 [2024-04-25 03:28:51.716902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.468 [2024-04-25 03:28:51.716927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.468 qpair failed and we were unable to recover it.
00:28:17.468 [2024-04-25 03:28:51.717122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.468 [2024-04-25 03:28:51.717343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.468 [2024-04-25 03:28:51.717368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.468 qpair failed and we were unable to recover it.
00:28:17.468 [2024-04-25 03:28:51.717560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.468 [2024-04-25 03:28:51.717759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.468 [2024-04-25 03:28:51.717785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.468 qpair failed and we were unable to recover it.
00:28:17.468 [2024-04-25 03:28:51.717986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.468 [2024-04-25 03:28:51.718206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.468 [2024-04-25 03:28:51.718231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.468 qpair failed and we were unable to recover it.
00:28:17.468 [2024-04-25 03:28:51.718399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.468 [2024-04-25 03:28:51.718590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.468 [2024-04-25 03:28:51.718614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.468 qpair failed and we were unable to recover it.
00:28:17.468 [2024-04-25 03:28:51.718816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.468 [2024-04-25 03:28:51.719034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.468 [2024-04-25 03:28:51.719058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.468 qpair failed and we were unable to recover it.
00:28:17.468 [2024-04-25 03:28:51.719265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.468 [2024-04-25 03:28:51.719452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.468 [2024-04-25 03:28:51.719477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.468 qpair failed and we were unable to recover it.
00:28:17.468 [2024-04-25 03:28:51.719666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.468 [2024-04-25 03:28:51.719880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.468 [2024-04-25 03:28:51.719904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.468 qpair failed and we were unable to recover it.
00:28:17.468 [2024-04-25 03:28:51.720107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.468 [2024-04-25 03:28:51.720292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.468 [2024-04-25 03:28:51.720316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.468 qpair failed and we were unable to recover it.
00:28:17.468 [2024-04-25 03:28:51.720537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.468 [2024-04-25 03:28:51.720770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.468 [2024-04-25 03:28:51.720800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.468 qpair failed and we were unable to recover it.
00:28:17.468 [2024-04-25 03:28:51.720991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.468 [2024-04-25 03:28:51.721181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.468 [2024-04-25 03:28:51.721205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.468 qpair failed and we were unable to recover it.
00:28:17.468 [2024-04-25 03:28:51.721411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.468 [2024-04-25 03:28:51.721626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.468 [2024-04-25 03:28:51.721685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.468 qpair failed and we were unable to recover it.
00:28:17.468 [2024-04-25 03:28:51.721889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.468 [2024-04-25 03:28:51.722096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.468 [2024-04-25 03:28:51.722121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.468 qpair failed and we were unable to recover it.
00:28:17.468 [2024-04-25 03:28:51.722320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.468 [2024-04-25 03:28:51.722522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.469 [2024-04-25 03:28:51.722547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.469 qpair failed and we were unable to recover it.
00:28:17.469 [2024-04-25 03:28:51.722774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.469 [2024-04-25 03:28:51.722942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.469 [2024-04-25 03:28:51.722967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.469 qpair failed and we were unable to recover it.
00:28:17.469 [2024-04-25 03:28:51.723168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.469 [2024-04-25 03:28:51.723369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.469 [2024-04-25 03:28:51.723394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.469 qpair failed and we were unable to recover it.
00:28:17.469 [2024-04-25 03:28:51.723589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.469 [2024-04-25 03:28:51.723826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.469 [2024-04-25 03:28:51.723853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.469 qpair failed and we were unable to recover it.
00:28:17.469 [2024-04-25 03:28:51.724061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.469 [2024-04-25 03:28:51.724254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.469 [2024-04-25 03:28:51.724278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.469 qpair failed and we were unable to recover it.
00:28:17.469 [2024-04-25 03:28:51.724538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.469 [2024-04-25 03:28:51.724737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.469 [2024-04-25 03:28:51.724762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.469 qpair failed and we were unable to recover it.
00:28:17.469 [2024-04-25 03:28:51.724935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.469 [2024-04-25 03:28:51.725131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.469 [2024-04-25 03:28:51.725162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.469 qpair failed and we were unable to recover it.
00:28:17.469 [2024-04-25 03:28:51.725357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.469 [2024-04-25 03:28:51.725578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.469 [2024-04-25 03:28:51.725606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.469 qpair failed and we were unable to recover it.
00:28:17.469 [2024-04-25 03:28:51.725856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.469 [2024-04-25 03:28:51.726094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.469 [2024-04-25 03:28:51.726121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.469 qpair failed and we were unable to recover it.
00:28:17.469 [2024-04-25 03:28:51.726319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.469 [2024-04-25 03:28:51.726549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.469 [2024-04-25 03:28:51.726574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.469 qpair failed and we were unable to recover it.
00:28:17.469 [2024-04-25 03:28:51.726764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.469 [2024-04-25 03:28:51.726962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.469 [2024-04-25 03:28:51.726986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.469 qpair failed and we were unable to recover it.
00:28:17.469 [2024-04-25 03:28:51.727187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.469 [2024-04-25 03:28:51.727378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.469 [2024-04-25 03:28:51.727402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.469 qpair failed and we were unable to recover it.
00:28:17.469 [2024-04-25 03:28:51.727651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.469 [2024-04-25 03:28:51.727833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.469 [2024-04-25 03:28:51.727857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.469 qpair failed and we were unable to recover it.
00:28:17.469 [2024-04-25 03:28:51.728055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.469 [2024-04-25 03:28:51.728217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.469 [2024-04-25 03:28:51.728243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.469 qpair failed and we were unable to recover it.
00:28:17.469 [2024-04-25 03:28:51.728444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.469 [2024-04-25 03:28:51.728601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.469 [2024-04-25 03:28:51.728625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.469 qpair failed and we were unable to recover it.
00:28:17.469 [2024-04-25 03:28:51.728834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.469 [2024-04-25 03:28:51.729056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.469 [2024-04-25 03:28:51.729081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.469 qpair failed and we were unable to recover it.
00:28:17.469 [2024-04-25 03:28:51.729305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.469 [2024-04-25 03:28:51.729524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.469 [2024-04-25 03:28:51.729554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.469 qpair failed and we were unable to recover it.
00:28:17.469 [2024-04-25 03:28:51.729757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.469 [2024-04-25 03:28:51.729959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.469 [2024-04-25 03:28:51.729984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.469 qpair failed and we were unable to recover it.
00:28:17.469 [2024-04-25 03:28:51.730184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.469 [2024-04-25 03:28:51.730413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.469 [2024-04-25 03:28:51.730437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.469 qpair failed and we were unable to recover it.
00:28:17.469 [2024-04-25 03:28:51.730665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.469 [2024-04-25 03:28:51.730904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.469 [2024-04-25 03:28:51.730929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.469 qpair failed and we were unable to recover it.
00:28:17.469 [2024-04-25 03:28:51.731134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.469 [2024-04-25 03:28:51.731303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.469 [2024-04-25 03:28:51.731329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.469 qpair failed and we were unable to recover it.
00:28:17.469 [2024-04-25 03:28:51.731552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.469 [2024-04-25 03:28:51.731747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.469 [2024-04-25 03:28:51.731772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.469 qpair failed and we were unable to recover it.
00:28:17.469 [2024-04-25 03:28:51.731995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.469 [2024-04-25 03:28:51.732163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.469 [2024-04-25 03:28:51.732187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.469 qpair failed and we were unable to recover it.
00:28:17.469 [2024-04-25 03:28:51.732384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.469 [2024-04-25 03:28:51.732576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.469 [2024-04-25 03:28:51.732600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.469 qpair failed and we were unable to recover it.
00:28:17.469 [2024-04-25 03:28:51.732828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.469 [2024-04-25 03:28:51.733023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.469 [2024-04-25 03:28:51.733049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.469 qpair failed and we were unable to recover it.
00:28:17.469 [2024-04-25 03:28:51.733265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.469 [2024-04-25 03:28:51.733462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.469 [2024-04-25 03:28:51.733488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.469 qpair failed and we were unable to recover it.
00:28:17.469 [2024-04-25 03:28:51.733683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.469 [2024-04-25 03:28:51.733862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.469 [2024-04-25 03:28:51.733891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.469 qpair failed and we were unable to recover it.
00:28:17.469 [2024-04-25 03:28:51.734088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.469 [2024-04-25 03:28:51.734283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.469 [2024-04-25 03:28:51.734309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.469 qpair failed and we were unable to recover it.
00:28:17.469 [2024-04-25 03:28:51.734507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.469 [2024-04-25 03:28:51.734733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:17.469 [2024-04-25 03:28:51.734758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:17.469 qpair failed and we were unable to recover it.
00:28:17.469 [2024-04-25 03:28:51.734929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.470 [2024-04-25 03:28:51.735097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.470 [2024-04-25 03:28:51.735123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.470 qpair failed and we were unable to recover it. 00:28:17.470 [2024-04-25 03:28:51.735340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.470 [2024-04-25 03:28:51.735533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.470 [2024-04-25 03:28:51.735560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.470 qpair failed and we were unable to recover it. 00:28:17.470 [2024-04-25 03:28:51.735789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.470 [2024-04-25 03:28:51.735962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.470 [2024-04-25 03:28:51.735987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.470 qpair failed and we were unable to recover it. 00:28:17.470 [2024-04-25 03:28:51.736158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.470 [2024-04-25 03:28:51.736345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.470 [2024-04-25 03:28:51.736370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.470 qpair failed and we were unable to recover it. 
00:28:17.470 [2024-04-25 03:28:51.736558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.470 [2024-04-25 03:28:51.736778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.470 [2024-04-25 03:28:51.736804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.470 qpair failed and we were unable to recover it. 00:28:17.470 [2024-04-25 03:28:51.737000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.470 [2024-04-25 03:28:51.737195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.470 [2024-04-25 03:28:51.737219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.470 qpair failed and we were unable to recover it. 00:28:17.470 [2024-04-25 03:28:51.737450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.470 [2024-04-25 03:28:51.737639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.470 [2024-04-25 03:28:51.737681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.470 qpair failed and we were unable to recover it. 00:28:17.470 [2024-04-25 03:28:51.737878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.470 [2024-04-25 03:28:51.738038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.470 [2024-04-25 03:28:51.738062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.470 qpair failed and we were unable to recover it. 
00:28:17.470 [2024-04-25 03:28:51.738293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.470 [2024-04-25 03:28:51.738493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.470 [2024-04-25 03:28:51.738519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.470 qpair failed and we were unable to recover it. 00:28:17.470 [2024-04-25 03:28:51.738746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.470 [2024-04-25 03:28:51.738912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.470 [2024-04-25 03:28:51.738937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.470 qpair failed and we were unable to recover it. 00:28:17.470 [2024-04-25 03:28:51.739125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.470 [2024-04-25 03:28:51.739319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.470 [2024-04-25 03:28:51.739343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.470 qpair failed and we were unable to recover it. 00:28:17.470 [2024-04-25 03:28:51.739513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.470 [2024-04-25 03:28:51.739734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.470 [2024-04-25 03:28:51.739759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.470 qpair failed and we were unable to recover it. 
00:28:17.470 [2024-04-25 03:28:51.739955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.470 [2024-04-25 03:28:51.740134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.470 [2024-04-25 03:28:51.740159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.470 qpair failed and we were unable to recover it. 00:28:17.470 [2024-04-25 03:28:51.740356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.470 [2024-04-25 03:28:51.740580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.470 [2024-04-25 03:28:51.740607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.470 qpair failed and we were unable to recover it. 00:28:17.470 [2024-04-25 03:28:51.740827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.470 [2024-04-25 03:28:51.741003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.470 [2024-04-25 03:28:51.741028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.470 qpair failed and we were unable to recover it. 00:28:17.470 [2024-04-25 03:28:51.741252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.470 [2024-04-25 03:28:51.741442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.470 [2024-04-25 03:28:51.741466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.470 qpair failed and we were unable to recover it. 
00:28:17.470 [2024-04-25 03:28:51.741674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.470 [2024-04-25 03:28:51.741862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.470 [2024-04-25 03:28:51.741887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.470 qpair failed and we were unable to recover it. 00:28:17.470 [2024-04-25 03:28:51.742054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.470 [2024-04-25 03:28:51.742241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.470 [2024-04-25 03:28:51.742265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.470 qpair failed and we were unable to recover it. 00:28:17.470 [2024-04-25 03:28:51.742444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.470 [2024-04-25 03:28:51.742624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.470 [2024-04-25 03:28:51.742657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.470 qpair failed and we were unable to recover it. 00:28:17.470 [2024-04-25 03:28:51.742848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.470 [2024-04-25 03:28:51.743056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.470 [2024-04-25 03:28:51.743081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.470 qpair failed and we were unable to recover it. 
00:28:17.470 [2024-04-25 03:28:51.743321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.470 [2024-04-25 03:28:51.743565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.470 [2024-04-25 03:28:51.743589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.470 qpair failed and we were unable to recover it. 00:28:17.470 [2024-04-25 03:28:51.743776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.470 [2024-04-25 03:28:51.743970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.470 [2024-04-25 03:28:51.743994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.470 qpair failed and we were unable to recover it. 00:28:17.470 [2024-04-25 03:28:51.744209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.470 [2024-04-25 03:28:51.744389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.470 [2024-04-25 03:28:51.744415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.470 qpair failed and we were unable to recover it. 00:28:17.470 [2024-04-25 03:28:51.744620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.470 [2024-04-25 03:28:51.744900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.470 [2024-04-25 03:28:51.744925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.470 qpair failed and we were unable to recover it. 
00:28:17.470 [2024-04-25 03:28:51.745219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.470 [2024-04-25 03:28:51.745453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.470 [2024-04-25 03:28:51.745477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.470 qpair failed and we were unable to recover it. 00:28:17.470 [2024-04-25 03:28:51.745709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.470 [2024-04-25 03:28:51.745909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.470 [2024-04-25 03:28:51.745934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.470 qpair failed and we were unable to recover it. 00:28:17.470 [2024-04-25 03:28:51.746170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.470 [2024-04-25 03:28:51.746349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.470 [2024-04-25 03:28:51.746373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.470 qpair failed and we were unable to recover it. 00:28:17.470 [2024-04-25 03:28:51.746566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.470 [2024-04-25 03:28:51.746842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.470 [2024-04-25 03:28:51.746868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.470 qpair failed and we were unable to recover it. 
00:28:17.470 [2024-04-25 03:28:51.747087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.470 [2024-04-25 03:28:51.747300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.470 [2024-04-25 03:28:51.747326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.470 qpair failed and we were unable to recover it. 00:28:17.471 [2024-04-25 03:28:51.747561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.471 [2024-04-25 03:28:51.747742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.471 [2024-04-25 03:28:51.747768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.471 qpair failed and we were unable to recover it. 00:28:17.471 [2024-04-25 03:28:51.747966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.471 [2024-04-25 03:28:51.748161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.471 [2024-04-25 03:28:51.748186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.471 qpair failed and we were unable to recover it. 00:28:17.471 [2024-04-25 03:28:51.748386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.471 [2024-04-25 03:28:51.748639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.471 [2024-04-25 03:28:51.748691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.471 qpair failed and we were unable to recover it. 
00:28:17.471 [2024-04-25 03:28:51.748930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.471 [2024-04-25 03:28:51.749161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.471 [2024-04-25 03:28:51.749186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.471 qpair failed and we were unable to recover it. 00:28:17.471 [2024-04-25 03:28:51.749375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.471 [2024-04-25 03:28:51.749542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.471 [2024-04-25 03:28:51.749583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.471 qpair failed and we were unable to recover it. 00:28:17.471 [2024-04-25 03:28:51.749798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.471 [2024-04-25 03:28:51.750001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.471 [2024-04-25 03:28:51.750026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.471 qpair failed and we were unable to recover it. 00:28:17.471 [2024-04-25 03:28:51.750219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.471 [2024-04-25 03:28:51.750442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.471 [2024-04-25 03:28:51.750467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.471 qpair failed and we were unable to recover it. 
00:28:17.471 [2024-04-25 03:28:51.750723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.471 [2024-04-25 03:28:51.750924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.471 [2024-04-25 03:28:51.750948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.471 qpair failed and we were unable to recover it. 00:28:17.471 [2024-04-25 03:28:51.751127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.471 [2024-04-25 03:28:51.751310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.471 [2024-04-25 03:28:51.751337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.471 qpair failed and we were unable to recover it. 00:28:17.471 [2024-04-25 03:28:51.751568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.471 [2024-04-25 03:28:51.751771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.471 [2024-04-25 03:28:51.751796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.471 qpair failed and we were unable to recover it. 00:28:17.471 [2024-04-25 03:28:51.752018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.471 [2024-04-25 03:28:51.752218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.471 [2024-04-25 03:28:51.752243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.471 qpair failed and we were unable to recover it. 
00:28:17.471 [2024-04-25 03:28:51.752405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.471 [2024-04-25 03:28:51.752653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.471 [2024-04-25 03:28:51.752679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.471 qpair failed and we were unable to recover it. 00:28:17.471 [2024-04-25 03:28:51.752877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.471 [2024-04-25 03:28:51.753071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.471 [2024-04-25 03:28:51.753095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.471 qpair failed and we were unable to recover it. 00:28:17.471 [2024-04-25 03:28:51.753313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.471 [2024-04-25 03:28:51.753491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.471 [2024-04-25 03:28:51.753515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.471 qpair failed and we were unable to recover it. 00:28:17.471 [2024-04-25 03:28:51.753749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.471 [2024-04-25 03:28:51.753926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.471 [2024-04-25 03:28:51.753950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.471 qpair failed and we were unable to recover it. 
00:28:17.471 [2024-04-25 03:28:51.754274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.471 [2024-04-25 03:28:51.754473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.471 [2024-04-25 03:28:51.754497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.471 qpair failed and we were unable to recover it. 00:28:17.471 [2024-04-25 03:28:51.754693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.471 [2024-04-25 03:28:51.755079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.471 [2024-04-25 03:28:51.755140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.471 qpair failed and we were unable to recover it. 00:28:17.471 [2024-04-25 03:28:51.755345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.471 [2024-04-25 03:28:51.755535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.471 [2024-04-25 03:28:51.755558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.471 qpair failed and we were unable to recover it. 00:28:17.471 [2024-04-25 03:28:51.755790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.471 [2024-04-25 03:28:51.756000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.471 [2024-04-25 03:28:51.756025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.471 qpair failed and we were unable to recover it. 
00:28:17.471 [2024-04-25 03:28:51.756233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.471 [2024-04-25 03:28:51.756473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.471 [2024-04-25 03:28:51.756512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.471 qpair failed and we were unable to recover it. 00:28:17.471 [2024-04-25 03:28:51.756726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.471 [2024-04-25 03:28:51.756922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.471 [2024-04-25 03:28:51.756947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.471 qpair failed and we were unable to recover it. 00:28:17.471 [2024-04-25 03:28:51.757158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.471 [2024-04-25 03:28:51.757370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.471 [2024-04-25 03:28:51.757393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.471 qpair failed and we were unable to recover it. 00:28:17.471 [2024-04-25 03:28:51.757625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.471 [2024-04-25 03:28:51.757869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.471 [2024-04-25 03:28:51.757894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.471 qpair failed and we were unable to recover it. 
00:28:17.471 [2024-04-25 03:28:51.758161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.471 [2024-04-25 03:28:51.758387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.471 [2024-04-25 03:28:51.758415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.471 qpair failed and we were unable to recover it. 00:28:17.471 [2024-04-25 03:28:51.758729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.471 [2024-04-25 03:28:51.758924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.471 [2024-04-25 03:28:51.758949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.471 qpair failed and we were unable to recover it. 00:28:17.471 [2024-04-25 03:28:51.759155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.471 [2024-04-25 03:28:51.759317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.471 [2024-04-25 03:28:51.759341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.471 qpair failed and we were unable to recover it. 00:28:17.471 [2024-04-25 03:28:51.759561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.471 [2024-04-25 03:28:51.759755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.471 [2024-04-25 03:28:51.759780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.471 qpair failed and we were unable to recover it. 
00:28:17.471 [2024-04-25 03:28:51.759992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.471 [2024-04-25 03:28:51.760226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.471 [2024-04-25 03:28:51.760250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.471 qpair failed and we were unable to recover it. 00:28:17.471 [2024-04-25 03:28:51.760414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.471 [2024-04-25 03:28:51.760593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.472 [2024-04-25 03:28:51.760617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.472 qpair failed and we were unable to recover it. 00:28:17.472 [2024-04-25 03:28:51.760834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.472 [2024-04-25 03:28:51.761040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.472 [2024-04-25 03:28:51.761064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.472 qpair failed and we were unable to recover it. 00:28:17.472 [2024-04-25 03:28:51.761284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.472 [2024-04-25 03:28:51.761511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.472 [2024-04-25 03:28:51.761538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.472 qpair failed and we were unable to recover it. 
00:28:17.472 [2024-04-25 03:28:51.761762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.472 [2024-04-25 03:28:51.761947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.472 [2024-04-25 03:28:51.761971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.472 qpair failed and we were unable to recover it. 00:28:17.472 [2024-04-25 03:28:51.762175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.472 [2024-04-25 03:28:51.762415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.472 [2024-04-25 03:28:51.762440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.472 qpair failed and we were unable to recover it. 00:28:17.472 [2024-04-25 03:28:51.762663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.472 [2024-04-25 03:28:51.762858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.472 [2024-04-25 03:28:51.762883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.472 qpair failed and we were unable to recover it. 00:28:17.472 [2024-04-25 03:28:51.763108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.472 [2024-04-25 03:28:51.763331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.472 [2024-04-25 03:28:51.763355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.472 qpair failed and we were unable to recover it. 
00:28:17.472 [2024-04-25 03:28:51.763582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.472 [2024-04-25 03:28:51.763790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.472 [2024-04-25 03:28:51.763816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.472 qpair failed and we were unable to recover it. 00:28:17.472 [2024-04-25 03:28:51.764015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.472 [2024-04-25 03:28:51.764210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.472 [2024-04-25 03:28:51.764235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.472 qpair failed and we were unable to recover it. 00:28:17.472 [2024-04-25 03:28:51.764525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.472 [2024-04-25 03:28:51.764729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.472 [2024-04-25 03:28:51.764755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.472 qpair failed and we were unable to recover it. 00:28:17.472 [2024-04-25 03:28:51.765013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.472 [2024-04-25 03:28:51.765241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.472 [2024-04-25 03:28:51.765265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.472 qpair failed and we were unable to recover it. 
00:28:17.472–00:28:17.475 [2024-04-25 03:28:51.765671 – 03:28:51.803047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. [the same retry sequence repeats ~80 further times with identical errno, address, and tqpair]
00:28:17.475 [2024-04-25 03:28:51.803249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.475 [2024-04-25 03:28:51.803471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.475 [2024-04-25 03:28:51.803496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.475 qpair failed and we were unable to recover it. 00:28:17.475 [2024-04-25 03:28:51.803696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.475 [2024-04-25 03:28:51.803864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.475 [2024-04-25 03:28:51.803889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.475 qpair failed and we were unable to recover it. 00:28:17.475 [2024-04-25 03:28:51.804093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.475 [2024-04-25 03:28:51.804314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.475 [2024-04-25 03:28:51.804339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.475 qpair failed and we were unable to recover it. 00:28:17.475 [2024-04-25 03:28:51.804511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.475 [2024-04-25 03:28:51.804734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.475 [2024-04-25 03:28:51.804760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.475 qpair failed and we were unable to recover it. 
00:28:17.475 [2024-04-25 03:28:51.804933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.475 [2024-04-25 03:28:51.805119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.475 [2024-04-25 03:28:51.805144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.475 qpair failed and we were unable to recover it. 00:28:17.475 [2024-04-25 03:28:51.805368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.475 [2024-04-25 03:28:51.805557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.475 [2024-04-25 03:28:51.805591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.475 qpair failed and we were unable to recover it. 00:28:17.475 [2024-04-25 03:28:51.805800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.475 [2024-04-25 03:28:51.805992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.475 [2024-04-25 03:28:51.806018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.475 qpair failed and we were unable to recover it. 00:28:17.475 [2024-04-25 03:28:51.806215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.475 [2024-04-25 03:28:51.806386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.475 [2024-04-25 03:28:51.806411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.475 qpair failed and we were unable to recover it. 
00:28:17.475 [2024-04-25 03:28:51.806598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.475 [2024-04-25 03:28:51.806801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.475 [2024-04-25 03:28:51.806826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.475 qpair failed and we were unable to recover it. 00:28:17.475 [2024-04-25 03:28:51.807033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.475 [2024-04-25 03:28:51.807230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.475 [2024-04-25 03:28:51.807256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.475 qpair failed and we were unable to recover it. 00:28:17.475 [2024-04-25 03:28:51.807489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.475 [2024-04-25 03:28:51.807729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.475 [2024-04-25 03:28:51.807754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.475 qpair failed and we were unable to recover it. 00:28:17.475 [2024-04-25 03:28:51.807953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.475 [2024-04-25 03:28:51.808169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.475 [2024-04-25 03:28:51.808194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.475 qpair failed and we were unable to recover it. 
00:28:17.475 [2024-04-25 03:28:51.808423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.475 [2024-04-25 03:28:51.808622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.475 [2024-04-25 03:28:51.808669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.475 qpair failed and we were unable to recover it. 00:28:17.475 [2024-04-25 03:28:51.808874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.475 [2024-04-25 03:28:51.809080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.475 [2024-04-25 03:28:51.809105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.475 qpair failed and we were unable to recover it. 00:28:17.475 [2024-04-25 03:28:51.809300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.475 [2024-04-25 03:28:51.809495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.475 [2024-04-25 03:28:51.809519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.475 qpair failed and we were unable to recover it. 00:28:17.475 [2024-04-25 03:28:51.809709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.475 [2024-04-25 03:28:51.809929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.475 [2024-04-25 03:28:51.809954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.475 qpair failed and we were unable to recover it. 
00:28:17.475 [2024-04-25 03:28:51.810148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.475 [2024-04-25 03:28:51.810347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.475 [2024-04-25 03:28:51.810372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.475 qpair failed and we were unable to recover it. 00:28:17.475 [2024-04-25 03:28:51.810570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.476 [2024-04-25 03:28:51.810778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.476 [2024-04-25 03:28:51.810803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.476 qpair failed and we were unable to recover it. 00:28:17.476 [2024-04-25 03:28:51.811011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.476 [2024-04-25 03:28:51.811199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.476 [2024-04-25 03:28:51.811224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.476 qpair failed and we were unable to recover it. 00:28:17.476 [2024-04-25 03:28:51.811390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.476 [2024-04-25 03:28:51.811639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.476 [2024-04-25 03:28:51.811682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.476 qpair failed and we were unable to recover it. 
00:28:17.476 [2024-04-25 03:28:51.811889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.476 [2024-04-25 03:28:51.812097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.476 [2024-04-25 03:28:51.812121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.476 qpair failed and we were unable to recover it. 00:28:17.476 [2024-04-25 03:28:51.812294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.476 [2024-04-25 03:28:51.812491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.476 [2024-04-25 03:28:51.812516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.476 qpair failed and we were unable to recover it. 00:28:17.476 [2024-04-25 03:28:51.812713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.476 [2024-04-25 03:28:51.812910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.476 [2024-04-25 03:28:51.812936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.476 qpair failed and we were unable to recover it. 00:28:17.476 [2024-04-25 03:28:51.813161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.476 [2024-04-25 03:28:51.813328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.476 [2024-04-25 03:28:51.813352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.476 qpair failed and we were unable to recover it. 
00:28:17.476 [2024-04-25 03:28:51.813554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.476 [2024-04-25 03:28:51.813752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.476 [2024-04-25 03:28:51.813777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.476 qpair failed and we were unable to recover it. 00:28:17.476 [2024-04-25 03:28:51.813978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.476 [2024-04-25 03:28:51.814187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.476 [2024-04-25 03:28:51.814211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.476 qpair failed and we were unable to recover it. 00:28:17.476 [2024-04-25 03:28:51.814436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.476 [2024-04-25 03:28:51.814617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.476 [2024-04-25 03:28:51.814647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.476 qpair failed and we were unable to recover it. 00:28:17.476 [2024-04-25 03:28:51.814871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.476 [2024-04-25 03:28:51.815095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.476 [2024-04-25 03:28:51.815120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.476 qpair failed and we were unable to recover it. 
00:28:17.476 [2024-04-25 03:28:51.815354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.476 [2024-04-25 03:28:51.815602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.476 [2024-04-25 03:28:51.815626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.476 qpair failed and we were unable to recover it. 00:28:17.476 [2024-04-25 03:28:51.815904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.476 [2024-04-25 03:28:51.816185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.476 [2024-04-25 03:28:51.816210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.476 qpair failed and we were unable to recover it. 00:28:17.476 [2024-04-25 03:28:51.816473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.476 [2024-04-25 03:28:51.816653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.476 [2024-04-25 03:28:51.816699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.476 qpair failed and we were unable to recover it. 00:28:17.476 [2024-04-25 03:28:51.816974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.476 [2024-04-25 03:28:51.817169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.476 [2024-04-25 03:28:51.817193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.476 qpair failed and we were unable to recover it. 
00:28:17.476 [2024-04-25 03:28:51.817405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.476 [2024-04-25 03:28:51.817619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.476 [2024-04-25 03:28:51.817662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.476 qpair failed and we were unable to recover it. 00:28:17.476 [2024-04-25 03:28:51.817852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.476 [2024-04-25 03:28:51.818028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.476 [2024-04-25 03:28:51.818053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.476 qpair failed and we were unable to recover it. 00:28:17.476 [2024-04-25 03:28:51.818231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.476 [2024-04-25 03:28:51.818451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.476 [2024-04-25 03:28:51.818477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.476 qpair failed and we were unable to recover it. 00:28:17.476 [2024-04-25 03:28:51.818644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.476 [2024-04-25 03:28:51.818878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.476 [2024-04-25 03:28:51.818902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.476 qpair failed and we were unable to recover it. 
00:28:17.476 [2024-04-25 03:28:51.819110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.476 [2024-04-25 03:28:51.819345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.476 [2024-04-25 03:28:51.819370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.476 qpair failed and we were unable to recover it. 00:28:17.476 [2024-04-25 03:28:51.819575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.476 [2024-04-25 03:28:51.819779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.476 [2024-04-25 03:28:51.819804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.476 qpair failed and we were unable to recover it. 00:28:17.476 [2024-04-25 03:28:51.819977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.476 [2024-04-25 03:28:51.820201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.476 [2024-04-25 03:28:51.820225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.476 qpair failed and we were unable to recover it. 00:28:17.476 [2024-04-25 03:28:51.820418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.476 [2024-04-25 03:28:51.820710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.476 [2024-04-25 03:28:51.820735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.476 qpair failed and we were unable to recover it. 
00:28:17.476 [2024-04-25 03:28:51.820931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.476 [2024-04-25 03:28:51.821095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.476 [2024-04-25 03:28:51.821121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.476 qpair failed and we were unable to recover it. 00:28:17.476 [2024-04-25 03:28:51.821309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.476 [2024-04-25 03:28:51.821512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.476 [2024-04-25 03:28:51.821541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.476 qpair failed and we were unable to recover it. 00:28:17.476 [2024-04-25 03:28:51.821765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.476 [2024-04-25 03:28:51.821962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.476 [2024-04-25 03:28:51.821988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.476 qpair failed and we were unable to recover it. 00:28:17.476 [2024-04-25 03:28:51.822213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.476 [2024-04-25 03:28:51.822377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.476 [2024-04-25 03:28:51.822403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.476 qpair failed and we were unable to recover it. 
00:28:17.476 [2024-04-25 03:28:51.822602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.476 [2024-04-25 03:28:51.822824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.476 [2024-04-25 03:28:51.822849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.476 qpair failed and we were unable to recover it. 00:28:17.476 [2024-04-25 03:28:51.823058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.476 [2024-04-25 03:28:51.823423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.476 [2024-04-25 03:28:51.823473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.477 qpair failed and we were unable to recover it. 00:28:17.477 [2024-04-25 03:28:51.823715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.477 [2024-04-25 03:28:51.823941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.477 [2024-04-25 03:28:51.823966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.477 qpair failed and we were unable to recover it. 00:28:17.477 [2024-04-25 03:28:51.824198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.477 [2024-04-25 03:28:51.824373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.477 [2024-04-25 03:28:51.824397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.477 qpair failed and we were unable to recover it. 
00:28:17.477 [2024-04-25 03:28:51.824568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.477 [2024-04-25 03:28:51.824762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.477 [2024-04-25 03:28:51.824787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.477 qpair failed and we were unable to recover it. 00:28:17.477 [2024-04-25 03:28:51.825019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.477 [2024-04-25 03:28:51.825217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.477 [2024-04-25 03:28:51.825242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.477 qpair failed and we were unable to recover it. 00:28:17.477 [2024-04-25 03:28:51.825463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.477 [2024-04-25 03:28:51.825647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.477 [2024-04-25 03:28:51.825682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.477 qpair failed and we were unable to recover it. 00:28:17.477 [2024-04-25 03:28:51.825879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.477 [2024-04-25 03:28:51.826105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.477 [2024-04-25 03:28:51.826129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.477 qpair failed and we were unable to recover it. 
00:28:17.477 [2024-04-25 03:28:51.826356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.477 [2024-04-25 03:28:51.826612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.477 [2024-04-25 03:28:51.826643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.477 qpair failed and we were unable to recover it. 00:28:17.477 [2024-04-25 03:28:51.826820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.477 [2024-04-25 03:28:51.827018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.477 [2024-04-25 03:28:51.827043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.477 qpair failed and we were unable to recover it. 00:28:17.477 [2024-04-25 03:28:51.827266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.477 [2024-04-25 03:28:51.827490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.477 [2024-04-25 03:28:51.827514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.477 qpair failed and we were unable to recover it. 00:28:17.477 [2024-04-25 03:28:51.827735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.477 [2024-04-25 03:28:51.827912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.477 [2024-04-25 03:28:51.827937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.477 qpair failed and we were unable to recover it. 
00:28:17.477 [2024-04-25 03:28:51.828166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.477 [2024-04-25 03:28:51.828331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.477 [2024-04-25 03:28:51.828355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.477 qpair failed and we were unable to recover it. 00:28:17.477 [2024-04-25 03:28:51.828585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.477 [2024-04-25 03:28:51.828769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.477 [2024-04-25 03:28:51.828794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.477 qpair failed and we were unable to recover it. 00:28:17.477 [2024-04-25 03:28:51.829027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.477 [2024-04-25 03:28:51.829216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.477 [2024-04-25 03:28:51.829240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.477 qpair failed and we were unable to recover it. 00:28:17.477 [2024-04-25 03:28:51.829460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.477 [2024-04-25 03:28:51.829679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.477 [2024-04-25 03:28:51.829705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.477 qpair failed and we were unable to recover it. 
00:28:17.477 [2024-04-25 03:28:51.829901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.477 [2024-04-25 03:28:51.830123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.477 [2024-04-25 03:28:51.830148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.477 qpair failed and we were unable to recover it. 00:28:17.477 [2024-04-25 03:28:51.830345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.477 [2024-04-25 03:28:51.830548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.477 [2024-04-25 03:28:51.830572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.477 qpair failed and we were unable to recover it. 00:28:17.477 [2024-04-25 03:28:51.830757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.477 [2024-04-25 03:28:51.830952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.477 [2024-04-25 03:28:51.830976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.477 qpair failed and we were unable to recover it. 00:28:17.477 [2024-04-25 03:28:51.831176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.477 [2024-04-25 03:28:51.831340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.477 [2024-04-25 03:28:51.831365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.477 qpair failed and we were unable to recover it. 
00:28:17.477 [2024-04-25 03:28:51.831565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.477 [2024-04-25 03:28:51.831762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.477 [2024-04-25 03:28:51.831788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.477 qpair failed and we were unable to recover it. 00:28:17.477 [2024-04-25 03:28:51.831957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.477 [2024-04-25 03:28:51.832124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.477 [2024-04-25 03:28:51.832149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.477 qpair failed and we were unable to recover it. 00:28:17.477 [2024-04-25 03:28:51.832344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.477 [2024-04-25 03:28:51.832588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.477 [2024-04-25 03:28:51.832615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.477 qpair failed and we were unable to recover it. 00:28:17.477 [2024-04-25 03:28:51.832866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.477 [2024-04-25 03:28:51.833088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.477 [2024-04-25 03:28:51.833112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.477 qpair failed and we were unable to recover it. 
00:28:17.477 [2024-04-25 03:28:51.833312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.477 [2024-04-25 03:28:51.833509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.477 [2024-04-25 03:28:51.833536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.477 qpair failed and we were unable to recover it. 00:28:17.477 [2024-04-25 03:28:51.833752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.477 [2024-04-25 03:28:51.833923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.477 [2024-04-25 03:28:51.833948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.477 qpair failed and we were unable to recover it. 00:28:17.477 [2024-04-25 03:28:51.834128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.477 [2024-04-25 03:28:51.834299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.478 [2024-04-25 03:28:51.834324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.478 qpair failed and we were unable to recover it. 00:28:17.478 [2024-04-25 03:28:51.834484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.478 [2024-04-25 03:28:51.834659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.478 [2024-04-25 03:28:51.834696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.478 qpair failed and we were unable to recover it. 
00:28:17.478 [2024-04-25 03:28:51.834947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.478 [2024-04-25 03:28:51.835161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.478 [2024-04-25 03:28:51.835190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.478 qpair failed and we were unable to recover it. 00:28:17.478 [2024-04-25 03:28:51.835398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.478 [2024-04-25 03:28:51.835593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.478 [2024-04-25 03:28:51.835617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.478 qpair failed and we were unable to recover it. 00:28:17.478 [2024-04-25 03:28:51.835806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.478 [2024-04-25 03:28:51.836000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.478 [2024-04-25 03:28:51.836025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.478 qpair failed and we were unable to recover it. 00:28:17.478 [2024-04-25 03:28:51.836193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.478 [2024-04-25 03:28:51.836404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.478 [2024-04-25 03:28:51.836431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.478 qpair failed and we were unable to recover it. 
00:28:17.478 [2024-04-25 03:28:51.836654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.478 [2024-04-25 03:28:51.836865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.478 [2024-04-25 03:28:51.836890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.478 qpair failed and we were unable to recover it. 00:28:17.478 [2024-04-25 03:28:51.837138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.478 [2024-04-25 03:28:51.837379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.478 [2024-04-25 03:28:51.837407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.478 qpair failed and we were unable to recover it. 00:28:17.478 [2024-04-25 03:28:51.837648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.478 [2024-04-25 03:28:51.837864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.478 [2024-04-25 03:28:51.837889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.478 qpair failed and we were unable to recover it. 00:28:17.478 [2024-04-25 03:28:51.838115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.478 [2024-04-25 03:28:51.838358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.478 [2024-04-25 03:28:51.838386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.478 qpair failed and we were unable to recover it. 
00:28:17.478 [2024-04-25 03:28:51.838618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.478 [2024-04-25 03:28:51.838845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.478 [2024-04-25 03:28:51.838887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.478 qpair failed and we were unable to recover it. 00:28:17.478 [2024-04-25 03:28:51.839170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.478 [2024-04-25 03:28:51.839616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.478 [2024-04-25 03:28:51.839693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.478 qpair failed and we were unable to recover it. 00:28:17.478 [2024-04-25 03:28:51.839955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.478 [2024-04-25 03:28:51.840143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.478 [2024-04-25 03:28:51.840170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.478 qpair failed and we were unable to recover it. 00:28:17.478 [2024-04-25 03:28:51.840349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.478 [2024-04-25 03:28:51.840596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.478 [2024-04-25 03:28:51.840624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.478 qpair failed and we were unable to recover it. 
00:28:17.478 [2024-04-25 03:28:51.840852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.478 [2024-04-25 03:28:51.841077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.478 [2024-04-25 03:28:51.841104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.478 qpair failed and we were unable to recover it. 00:28:17.478 [2024-04-25 03:28:51.841316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.478 [2024-04-25 03:28:51.841531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.478 [2024-04-25 03:28:51.841558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.478 qpair failed and we were unable to recover it. 00:28:17.478 [2024-04-25 03:28:51.841777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.478 [2024-04-25 03:28:51.841996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.478 [2024-04-25 03:28:51.842023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.478 qpair failed and we were unable to recover it. 00:28:17.478 [2024-04-25 03:28:51.842240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.478 [2024-04-25 03:28:51.842440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.478 [2024-04-25 03:28:51.842481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.478 qpair failed and we were unable to recover it. 
00:28:17.478 [2024-04-25 03:28:51.842672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.478 [2024-04-25 03:28:51.842867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.478 [2024-04-25 03:28:51.842893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.478 qpair failed and we were unable to recover it. 00:28:17.478 [2024-04-25 03:28:51.843128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.478 [2024-04-25 03:28:51.843354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.478 [2024-04-25 03:28:51.843381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.478 qpair failed and we were unable to recover it. 00:28:17.478 [2024-04-25 03:28:51.843578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.478 [2024-04-25 03:28:51.843778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.478 [2024-04-25 03:28:51.843804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.478 qpair failed and we were unable to recover it. 00:28:17.478 [2024-04-25 03:28:51.843997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.478 [2024-04-25 03:28:51.844243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.478 [2024-04-25 03:28:51.844271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.478 qpair failed and we were unable to recover it. 
00:28:17.478 [2024-04-25 03:28:51.844538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.478 [2024-04-25 03:28:51.844787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.478 [2024-04-25 03:28:51.844813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.478 qpair failed and we were unable to recover it. 00:28:17.478 [2024-04-25 03:28:51.845007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.478 [2024-04-25 03:28:51.845255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.478 [2024-04-25 03:28:51.845282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.478 qpair failed and we were unable to recover it. 00:28:17.478 [2024-04-25 03:28:51.845514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.478 [2024-04-25 03:28:51.845706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.478 [2024-04-25 03:28:51.845733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.478 qpair failed and we were unable to recover it. 00:28:17.478 [2024-04-25 03:28:51.845930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.478 [2024-04-25 03:28:51.846107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.478 [2024-04-25 03:28:51.846131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.478 qpair failed and we were unable to recover it. 
00:28:17.478 [2024-04-25 03:28:51.846480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.478 [2024-04-25 03:28:51.846732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.478 [2024-04-25 03:28:51.846758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.478 qpair failed and we were unable to recover it. 00:28:17.478 [2024-04-25 03:28:51.846951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.478 [2024-04-25 03:28:51.847142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.478 [2024-04-25 03:28:51.847169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.478 qpair failed and we were unable to recover it. 00:28:17.478 [2024-04-25 03:28:51.847369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.478 [2024-04-25 03:28:51.847612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.478 [2024-04-25 03:28:51.847648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.478 qpair failed and we were unable to recover it. 00:28:17.478 [2024-04-25 03:28:51.847871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.479 [2024-04-25 03:28:51.848124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.479 [2024-04-25 03:28:51.848152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.479 qpair failed and we were unable to recover it. 
00:28:17.479 [2024-04-25 03:28:51.848351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.479 [2024-04-25 03:28:51.848541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.479 [2024-04-25 03:28:51.848568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.479 qpair failed and we were unable to recover it. 00:28:17.479 [2024-04-25 03:28:51.848792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.479 [2024-04-25 03:28:51.848979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.479 [2024-04-25 03:28:51.849009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.479 qpair failed and we were unable to recover it. 00:28:17.479 [2024-04-25 03:28:51.849203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.479 [2024-04-25 03:28:51.849477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.479 [2024-04-25 03:28:51.849504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.479 qpair failed and we were unable to recover it. 00:28:17.479 [2024-04-25 03:28:51.849721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.479 [2024-04-25 03:28:51.849933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.479 [2024-04-25 03:28:51.849960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.479 qpair failed and we were unable to recover it. 
00:28:17.479 [2024-04-25 03:28:51.850203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.479 [2024-04-25 03:28:51.850414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.479 [2024-04-25 03:28:51.850441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.479 qpair failed and we were unable to recover it. 00:28:17.479 [2024-04-25 03:28:51.850693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.479 [2024-04-25 03:28:51.850885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.479 [2024-04-25 03:28:51.850911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.479 qpair failed and we were unable to recover it. 00:28:17.479 [2024-04-25 03:28:51.851175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.479 [2024-04-25 03:28:51.851394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.479 [2024-04-25 03:28:51.851421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.479 qpair failed and we were unable to recover it. 00:28:17.479 [2024-04-25 03:28:51.851615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.479 [2024-04-25 03:28:51.851827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.479 [2024-04-25 03:28:51.851852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.479 qpair failed and we were unable to recover it. 
00:28:17.479 [2024-04-25 03:28:51.852053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.479 [2024-04-25 03:28:51.852292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.479 [2024-04-25 03:28:51.852320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.479 qpair failed and we were unable to recover it. 00:28:17.479 [2024-04-25 03:28:51.852575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.479 [2024-04-25 03:28:51.852784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.479 [2024-04-25 03:28:51.852826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.479 qpair failed and we were unable to recover it. 00:28:17.479 [2024-04-25 03:28:51.853031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.479 [2024-04-25 03:28:51.853245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.479 [2024-04-25 03:28:51.853273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.479 qpair failed and we were unable to recover it. 00:28:17.479 [2024-04-25 03:28:51.853508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.479 [2024-04-25 03:28:51.853734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.479 [2024-04-25 03:28:51.853760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.479 qpair failed and we were unable to recover it. 
00:28:17.479 [2024-04-25 03:28:51.853934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.479 [2024-04-25 03:28:51.854187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.479 [2024-04-25 03:28:51.854214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.479 qpair failed and we were unable to recover it. 00:28:17.479 [2024-04-25 03:28:51.854429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.479 [2024-04-25 03:28:51.854647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.479 [2024-04-25 03:28:51.854693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.479 qpair failed and we were unable to recover it. 00:28:17.479 [2024-04-25 03:28:51.854903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.479 [2024-04-25 03:28:51.855132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.479 [2024-04-25 03:28:51.855161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.479 qpair failed and we were unable to recover it. 00:28:17.479 [2024-04-25 03:28:51.855383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.479 [2024-04-25 03:28:51.855580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.479 [2024-04-25 03:28:51.855606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.479 qpair failed and we were unable to recover it. 
00:28:17.479 [2024-04-25 03:28:51.855833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.479 [2024-04-25 03:28:51.856005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.479 [2024-04-25 03:28:51.856030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.479 qpair failed and we were unable to recover it. 00:28:17.479 [2024-04-25 03:28:51.856225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.479 [2024-04-25 03:28:51.856481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.479 [2024-04-25 03:28:51.856509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.479 qpair failed and we were unable to recover it. 00:28:17.479 [2024-04-25 03:28:51.856758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.479 [2024-04-25 03:28:51.857010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.479 [2024-04-25 03:28:51.857035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.479 qpair failed and we were unable to recover it. 00:28:17.479 [2024-04-25 03:28:51.857263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.479 [2024-04-25 03:28:51.857505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.479 [2024-04-25 03:28:51.857532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.479 qpair failed and we were unable to recover it. 
00:28:17.479 [2024-04-25 03:28:51.857763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.479 [2024-04-25 03:28:51.858012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.479 [2024-04-25 03:28:51.858040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.479 qpair failed and we were unable to recover it. 00:28:17.479 [2024-04-25 03:28:51.858251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.479 [2024-04-25 03:28:51.858548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.479 [2024-04-25 03:28:51.858575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.479 qpair failed and we were unable to recover it. 00:28:17.479 [2024-04-25 03:28:51.858808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.479 [2024-04-25 03:28:51.859067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.479 [2024-04-25 03:28:51.859092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.479 qpair failed and we were unable to recover it. 00:28:17.479 [2024-04-25 03:28:51.859263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.479 [2024-04-25 03:28:51.859457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.479 [2024-04-25 03:28:51.859482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.479 qpair failed and we were unable to recover it. 
00:28:17.479 [2024-04-25 03:28:51.859681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.479 [2024-04-25 03:28:51.859902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.479 [2024-04-25 03:28:51.859931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.479 qpair failed and we were unable to recover it. 00:28:17.479 [2024-04-25 03:28:51.860176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.479 [2024-04-25 03:28:51.860367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.479 [2024-04-25 03:28:51.860395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.479 qpair failed and we were unable to recover it. 00:28:17.479 [2024-04-25 03:28:51.860610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.479 [2024-04-25 03:28:51.860851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.479 [2024-04-25 03:28:51.860877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.479 qpair failed and we were unable to recover it. 00:28:17.479 [2024-04-25 03:28:51.861101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.479 [2024-04-25 03:28:51.861315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.479 [2024-04-25 03:28:51.861349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.479 qpair failed and we were unable to recover it. 
00:28:17.479 [2024-04-25 03:28:51.861596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.480 [2024-04-25 03:28:51.861785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.480 [2024-04-25 03:28:51.861810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.480 qpair failed and we were unable to recover it. 00:28:17.480 [2024-04-25 03:28:51.862002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.480 [2024-04-25 03:28:51.862211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.480 [2024-04-25 03:28:51.862238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.480 qpair failed and we were unable to recover it. 00:28:17.480 [2024-04-25 03:28:51.862462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.480 [2024-04-25 03:28:51.862637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.480 [2024-04-25 03:28:51.862664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.480 qpair failed and we were unable to recover it. 00:28:17.480 [2024-04-25 03:28:51.862838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.480 [2024-04-25 03:28:51.863052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.480 [2024-04-25 03:28:51.863080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.480 qpair failed and we were unable to recover it. 
00:28:17.483 [... identical posix_sock_create connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock errors for tqpair=0x7f2a80000b90 (addr=10.0.0.2, port=4420) repeated through 2024-04-25 03:28:51.902381 ...]
00:28:17.483 [2024-04-25 03:28:51.902593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.483 [2024-04-25 03:28:51.902782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.483 [2024-04-25 03:28:51.902810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.483 qpair failed and we were unable to recover it. 00:28:17.483 [2024-04-25 03:28:51.903029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.483 [2024-04-25 03:28:51.903191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.483 [2024-04-25 03:28:51.903216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.483 qpair failed and we were unable to recover it. 00:28:17.483 [2024-04-25 03:28:51.903423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.483 [2024-04-25 03:28:51.903681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.483 [2024-04-25 03:28:51.903710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.483 qpair failed and we were unable to recover it. 00:28:17.483 [2024-04-25 03:28:51.903924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.483 [2024-04-25 03:28:51.904174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.483 [2024-04-25 03:28:51.904199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.483 qpair failed and we were unable to recover it. 
00:28:17.483 [2024-04-25 03:28:51.904407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.483 [2024-04-25 03:28:51.904625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.483 [2024-04-25 03:28:51.904658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.483 qpair failed and we were unable to recover it. 00:28:17.483 [2024-04-25 03:28:51.904873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.483 [2024-04-25 03:28:51.905088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.483 [2024-04-25 03:28:51.905117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.483 qpair failed and we were unable to recover it. 00:28:17.483 [2024-04-25 03:28:51.905363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.483 [2024-04-25 03:28:51.905585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.483 [2024-04-25 03:28:51.905612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.483 qpair failed and we were unable to recover it. 00:28:17.483 [2024-04-25 03:28:51.905852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.483 [2024-04-25 03:28:51.906043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.483 [2024-04-25 03:28:51.906070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.483 qpair failed and we were unable to recover it. 
00:28:17.483 [2024-04-25 03:28:51.906290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.483 [2024-04-25 03:28:51.906454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.483 [2024-04-25 03:28:51.906490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.483 qpair failed and we were unable to recover it. 00:28:17.483 [2024-04-25 03:28:51.906712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.483 [2024-04-25 03:28:51.906904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.483 [2024-04-25 03:28:51.906936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.483 qpair failed and we were unable to recover it. 00:28:17.483 [2024-04-25 03:28:51.907139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.483 [2024-04-25 03:28:51.907396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.483 [2024-04-25 03:28:51.907423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.483 qpair failed and we were unable to recover it. 00:28:17.483 [2024-04-25 03:28:51.907640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.483 [2024-04-25 03:28:51.907861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.483 [2024-04-25 03:28:51.907889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.483 qpair failed and we were unable to recover it. 
00:28:17.483 [2024-04-25 03:28:51.908115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.483 [2024-04-25 03:28:51.908309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.483 [2024-04-25 03:28:51.908334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.483 qpair failed and we were unable to recover it. 00:28:17.483 [2024-04-25 03:28:51.908559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.483 [2024-04-25 03:28:51.908790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.483 [2024-04-25 03:28:51.908816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.483 qpair failed and we were unable to recover it. 00:28:17.483 [2024-04-25 03:28:51.909016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.483 [2024-04-25 03:28:51.909385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.483 [2024-04-25 03:28:51.909437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.483 qpair failed and we were unable to recover it. 00:28:17.483 [2024-04-25 03:28:51.909655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.483 [2024-04-25 03:28:51.909889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.483 [2024-04-25 03:28:51.909913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.483 qpair failed and we were unable to recover it. 
00:28:17.483 [2024-04-25 03:28:51.910106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.483 [2024-04-25 03:28:51.910303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.483 [2024-04-25 03:28:51.910329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.483 qpair failed and we were unable to recover it. 00:28:17.483 [2024-04-25 03:28:51.910556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.483 [2024-04-25 03:28:51.910808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.483 [2024-04-25 03:28:51.910836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.483 qpair failed and we were unable to recover it. 00:28:17.483 [2024-04-25 03:28:51.911058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.483 [2024-04-25 03:28:51.911272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.483 [2024-04-25 03:28:51.911301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.483 qpair failed and we were unable to recover it. 00:28:17.483 [2024-04-25 03:28:51.911522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.483 [2024-04-25 03:28:51.911716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.483 [2024-04-25 03:28:51.911740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.483 qpair failed and we were unable to recover it. 
00:28:17.483 [2024-04-25 03:28:51.911939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.483 [2024-04-25 03:28:51.912210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.483 [2024-04-25 03:28:51.912261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.483 qpair failed and we were unable to recover it. 00:28:17.483 [2024-04-25 03:28:51.912652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.483 [2024-04-25 03:28:51.912893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.483 [2024-04-25 03:28:51.912920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.483 qpair failed and we were unable to recover it. 00:28:17.483 [2024-04-25 03:28:51.913163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.483 [2024-04-25 03:28:51.913356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.483 [2024-04-25 03:28:51.913380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.484 qpair failed and we were unable to recover it. 00:28:17.484 [2024-04-25 03:28:51.913578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.484 [2024-04-25 03:28:51.913740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.484 [2024-04-25 03:28:51.913765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.484 qpair failed and we were unable to recover it. 
00:28:17.484 [2024-04-25 03:28:51.913990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.484 [2024-04-25 03:28:51.914188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.484 [2024-04-25 03:28:51.914215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.484 qpair failed and we were unable to recover it. 00:28:17.484 [2024-04-25 03:28:51.914462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.484 [2024-04-25 03:28:51.914676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.484 [2024-04-25 03:28:51.914704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.484 qpair failed and we were unable to recover it. 00:28:17.484 [2024-04-25 03:28:51.914952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.484 [2024-04-25 03:28:51.915140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.484 [2024-04-25 03:28:51.915167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.484 qpair failed and we were unable to recover it. 00:28:17.484 [2024-04-25 03:28:51.915391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.484 [2024-04-25 03:28:51.915640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.484 [2024-04-25 03:28:51.915668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.484 qpair failed and we were unable to recover it. 
00:28:17.484 [2024-04-25 03:28:51.915884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.484 [2024-04-25 03:28:51.916083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.484 [2024-04-25 03:28:51.916107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.484 qpair failed and we were unable to recover it. 00:28:17.484 [2024-04-25 03:28:51.916273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.484 [2024-04-25 03:28:51.916436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.484 [2024-04-25 03:28:51.916461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.484 qpair failed and we were unable to recover it. 00:28:17.484 [2024-04-25 03:28:51.916640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.484 [2024-04-25 03:28:51.916860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.484 [2024-04-25 03:28:51.916895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.484 qpair failed and we were unable to recover it. 00:28:17.484 [2024-04-25 03:28:51.917139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.484 [2024-04-25 03:28:51.917363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.484 [2024-04-25 03:28:51.917391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.484 qpair failed and we were unable to recover it. 
00:28:17.484 [2024-04-25 03:28:51.917608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.484 [2024-04-25 03:28:51.917871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.484 [2024-04-25 03:28:51.917897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.484 qpair failed and we were unable to recover it. 00:28:17.484 [2024-04-25 03:28:51.918118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.484 [2024-04-25 03:28:51.918332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.484 [2024-04-25 03:28:51.918360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.484 qpair failed and we were unable to recover it. 00:28:17.484 [2024-04-25 03:28:51.918555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.484 [2024-04-25 03:28:51.918764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.484 [2024-04-25 03:28:51.918793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.484 qpair failed and we were unable to recover it. 00:28:17.484 [2024-04-25 03:28:51.919043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.484 [2024-04-25 03:28:51.919259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.484 [2024-04-25 03:28:51.919288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.484 qpair failed and we were unable to recover it. 
00:28:17.484 [2024-04-25 03:28:51.919539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.484 [2024-04-25 03:28:51.919759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.484 [2024-04-25 03:28:51.919787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.484 qpair failed and we were unable to recover it. 00:28:17.484 [2024-04-25 03:28:51.920051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.484 [2024-04-25 03:28:51.920276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.484 [2024-04-25 03:28:51.920300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.484 qpair failed and we were unable to recover it. 00:28:17.484 [2024-04-25 03:28:51.920506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.484 [2024-04-25 03:28:51.920677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.484 [2024-04-25 03:28:51.920703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.484 qpair failed and we were unable to recover it. 00:28:17.484 [2024-04-25 03:28:51.920925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.484 [2024-04-25 03:28:51.921164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.484 [2024-04-25 03:28:51.921191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.484 qpair failed and we were unable to recover it. 
00:28:17.484 [2024-04-25 03:28:51.921442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.484 [2024-04-25 03:28:51.921614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.484 [2024-04-25 03:28:51.921644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.484 qpair failed and we were unable to recover it. 00:28:17.484 [2024-04-25 03:28:51.921881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.484 [2024-04-25 03:28:51.922055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.484 [2024-04-25 03:28:51.922081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.484 qpair failed and we were unable to recover it. 00:28:17.484 [2024-04-25 03:28:51.922340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.484 [2024-04-25 03:28:51.922532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.484 [2024-04-25 03:28:51.922572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.484 qpair failed and we were unable to recover it. 00:28:17.484 [2024-04-25 03:28:51.922793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.484 [2024-04-25 03:28:51.923178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.484 [2024-04-25 03:28:51.923229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.484 qpair failed and we were unable to recover it. 
00:28:17.484 [2024-04-25 03:28:51.923473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.484 [2024-04-25 03:28:51.923670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.484 [2024-04-25 03:28:51.923695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.484 qpair failed and we were unable to recover it. 00:28:17.484 [2024-04-25 03:28:51.923868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.484 [2024-04-25 03:28:51.924037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.484 [2024-04-25 03:28:51.924062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.484 qpair failed and we were unable to recover it. 00:28:17.484 [2024-04-25 03:28:51.924256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.484 [2024-04-25 03:28:51.924501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.484 [2024-04-25 03:28:51.924529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.484 qpair failed and we were unable to recover it. 00:28:17.484 [2024-04-25 03:28:51.924747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.484 [2024-04-25 03:28:51.924962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.484 [2024-04-25 03:28:51.924989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.484 qpair failed and we were unable to recover it. 
00:28:17.484 [2024-04-25 03:28:51.925228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.484 [2024-04-25 03:28:51.925456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.484 [2024-04-25 03:28:51.925483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.484 qpair failed and we were unable to recover it. 00:28:17.484 [2024-04-25 03:28:51.925695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.484 [2024-04-25 03:28:51.925866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.484 [2024-04-25 03:28:51.925890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.484 qpair failed and we were unable to recover it. 00:28:17.484 [2024-04-25 03:28:51.926123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.484 [2024-04-25 03:28:51.926333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.484 [2024-04-25 03:28:51.926361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.484 qpair failed and we were unable to recover it. 00:28:17.484 [2024-04-25 03:28:51.926588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.484 [2024-04-25 03:28:51.926768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.485 [2024-04-25 03:28:51.926793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.485 qpair failed and we were unable to recover it. 
00:28:17.485 [2024-04-25 03:28:51.926979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.485 [2024-04-25 03:28:51.927244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.485 [2024-04-25 03:28:51.927268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.485 qpair failed and we were unable to recover it. 00:28:17.485 [2024-04-25 03:28:51.927434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.485 [2024-04-25 03:28:51.927655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.485 [2024-04-25 03:28:51.927694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.485 qpair failed and we were unable to recover it. 00:28:17.485 [2024-04-25 03:28:51.927959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.485 [2024-04-25 03:28:51.928152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.485 [2024-04-25 03:28:51.928181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.485 qpair failed and we were unable to recover it. 00:28:17.485 [2024-04-25 03:28:51.928409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.485 [2024-04-25 03:28:51.928640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.485 [2024-04-25 03:28:51.928666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.485 qpair failed and we were unable to recover it. 
00:28:17.485 [2024-04-25 03:28:51.928860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.485 [2024-04-25 03:28:51.929110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.485 [2024-04-25 03:28:51.929138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.485 qpair failed and we were unable to recover it. 00:28:17.485 [2024-04-25 03:28:51.929353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.485 [2024-04-25 03:28:51.929567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.485 [2024-04-25 03:28:51.929594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.485 qpair failed and we were unable to recover it. 00:28:17.485 [2024-04-25 03:28:51.929846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.485 [2024-04-25 03:28:51.930073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.485 [2024-04-25 03:28:51.930098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.485 qpair failed and we were unable to recover it. 00:28:17.485 [2024-04-25 03:28:51.930321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.485 [2024-04-25 03:28:51.930533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.485 [2024-04-25 03:28:51.930557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.485 qpair failed and we were unable to recover it. 
00:28:17.761 [2024-04-25 03:28:51.969363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.761 [2024-04-25 03:28:51.969557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.761 [2024-04-25 03:28:51.969582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.761 qpair failed and we were unable to recover it. 00:28:17.761 [2024-04-25 03:28:51.969862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.761 [2024-04-25 03:28:51.970050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.761 [2024-04-25 03:28:51.970075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.761 qpair failed and we were unable to recover it. 00:28:17.761 [2024-04-25 03:28:51.970319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.761 [2024-04-25 03:28:51.970537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.761 [2024-04-25 03:28:51.970564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.761 qpair failed and we were unable to recover it. 00:28:17.761 [2024-04-25 03:28:51.970826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.761 [2024-04-25 03:28:51.971030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.761 [2024-04-25 03:28:51.971055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.761 qpair failed and we were unable to recover it. 
00:28:17.761 [2024-04-25 03:28:51.971219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.761 [2024-04-25 03:28:51.971439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.761 [2024-04-25 03:28:51.971463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.761 qpair failed and we were unable to recover it. 00:28:17.761 [2024-04-25 03:28:51.971715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.761 [2024-04-25 03:28:51.971957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.761 [2024-04-25 03:28:51.971982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.761 qpair failed and we were unable to recover it. 00:28:17.761 [2024-04-25 03:28:51.972180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.761 [2024-04-25 03:28:51.972408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.761 [2024-04-25 03:28:51.972437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.761 qpair failed and we were unable to recover it. 00:28:17.761 [2024-04-25 03:28:51.972683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.761 [2024-04-25 03:28:51.972904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.761 [2024-04-25 03:28:51.972932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.761 qpair failed and we were unable to recover it. 
00:28:17.761 [2024-04-25 03:28:51.973157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.761 [2024-04-25 03:28:51.973378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.761 [2024-04-25 03:28:51.973406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.761 qpair failed and we were unable to recover it. 00:28:17.761 [2024-04-25 03:28:51.973654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.761 [2024-04-25 03:28:51.973876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.761 [2024-04-25 03:28:51.973904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.761 qpair failed and we were unable to recover it. 00:28:17.761 [2024-04-25 03:28:51.974119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.761 [2024-04-25 03:28:51.974332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.761 [2024-04-25 03:28:51.974356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.761 qpair failed and we were unable to recover it. 00:28:17.761 [2024-04-25 03:28:51.974574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.761 [2024-04-25 03:28:51.974773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.761 [2024-04-25 03:28:51.974798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.761 qpair failed and we were unable to recover it. 
00:28:17.761 [2024-04-25 03:28:51.974996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.761 [2024-04-25 03:28:51.975180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.761 [2024-04-25 03:28:51.975207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.761 qpair failed and we were unable to recover it. 00:28:17.761 [2024-04-25 03:28:51.975393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.761 [2024-04-25 03:28:51.975603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.762 [2024-04-25 03:28:51.975636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.762 qpair failed and we were unable to recover it. 00:28:17.762 [2024-04-25 03:28:51.975886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.762 [2024-04-25 03:28:51.976129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.762 [2024-04-25 03:28:51.976154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.762 qpair failed and we were unable to recover it. 00:28:17.762 [2024-04-25 03:28:51.976351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.762 [2024-04-25 03:28:51.976520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.762 [2024-04-25 03:28:51.976544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.762 qpair failed and we were unable to recover it. 
00:28:17.762 [2024-04-25 03:28:51.976719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.762 [2024-04-25 03:28:51.976926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.762 [2024-04-25 03:28:51.976952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.762 qpair failed and we were unable to recover it. 00:28:17.762 [2024-04-25 03:28:51.977193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.762 [2024-04-25 03:28:51.977438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.762 [2024-04-25 03:28:51.977466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.762 qpair failed and we were unable to recover it. 00:28:17.762 [2024-04-25 03:28:51.977714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.762 [2024-04-25 03:28:51.977902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.762 [2024-04-25 03:28:51.977929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.762 qpair failed and we were unable to recover it. 00:28:17.762 [2024-04-25 03:28:51.978153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.762 [2024-04-25 03:28:51.978400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.762 [2024-04-25 03:28:51.978427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.762 qpair failed and we were unable to recover it. 
00:28:17.762 [2024-04-25 03:28:51.978617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.762 [2024-04-25 03:28:51.978808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.762 [2024-04-25 03:28:51.978832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.762 qpair failed and we were unable to recover it. 00:28:17.762 [2024-04-25 03:28:51.979080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.762 [2024-04-25 03:28:51.979264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.762 [2024-04-25 03:28:51.979290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.762 qpair failed and we were unable to recover it. 00:28:17.762 [2024-04-25 03:28:51.979508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.762 [2024-04-25 03:28:51.979702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.762 [2024-04-25 03:28:51.979736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.762 qpair failed and we were unable to recover it. 00:28:17.762 [2024-04-25 03:28:51.979902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.762 [2024-04-25 03:28:51.980098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.762 [2024-04-25 03:28:51.980125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.762 qpair failed and we were unable to recover it. 
00:28:17.762 [2024-04-25 03:28:51.980325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.762 [2024-04-25 03:28:51.980547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.762 [2024-04-25 03:28:51.980575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.762 qpair failed and we were unable to recover it. 00:28:17.762 [2024-04-25 03:28:51.980830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.762 [2024-04-25 03:28:51.981018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.762 [2024-04-25 03:28:51.981045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.762 qpair failed and we were unable to recover it. 00:28:17.762 [2024-04-25 03:28:51.981298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.762 [2024-04-25 03:28:51.981493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.762 [2024-04-25 03:28:51.981518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.762 qpair failed and we were unable to recover it. 00:28:17.762 [2024-04-25 03:28:51.981711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.762 [2024-04-25 03:28:51.981924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.762 [2024-04-25 03:28:51.981952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.762 qpair failed and we were unable to recover it. 
00:28:17.762 [2024-04-25 03:28:51.982146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.762 [2024-04-25 03:28:51.982362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.762 [2024-04-25 03:28:51.982391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.762 qpair failed and we were unable to recover it. 00:28:17.762 [2024-04-25 03:28:51.982615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.762 [2024-04-25 03:28:51.982840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.762 [2024-04-25 03:28:51.982869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.762 qpair failed and we were unable to recover it. 00:28:17.762 [2024-04-25 03:28:51.983091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.762 [2024-04-25 03:28:51.983343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.762 [2024-04-25 03:28:51.983370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.762 qpair failed and we were unable to recover it. 00:28:17.762 [2024-04-25 03:28:51.983622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.762 [2024-04-25 03:28:51.983864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.762 [2024-04-25 03:28:51.983891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.762 qpair failed and we were unable to recover it. 
00:28:17.762 [2024-04-25 03:28:51.984109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.762 [2024-04-25 03:28:51.984431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.762 [2024-04-25 03:28:51.984488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.762 qpair failed and we were unable to recover it. 00:28:17.762 [2024-04-25 03:28:51.984714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.762 [2024-04-25 03:28:51.984916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.762 [2024-04-25 03:28:51.984957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.762 qpair failed and we were unable to recover it. 00:28:17.762 [2024-04-25 03:28:51.985326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.762 [2024-04-25 03:28:51.985617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.762 [2024-04-25 03:28:51.985652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.762 qpair failed and we were unable to recover it. 00:28:17.762 [2024-04-25 03:28:51.985871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.762 [2024-04-25 03:28:51.986065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.762 [2024-04-25 03:28:51.986090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.762 qpair failed and we were unable to recover it. 
00:28:17.762 [2024-04-25 03:28:51.986281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.762 [2024-04-25 03:28:51.986454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.762 [2024-04-25 03:28:51.986479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.762 qpair failed and we were unable to recover it. 00:28:17.762 [2024-04-25 03:28:51.986674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.762 [2024-04-25 03:28:51.986853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.762 [2024-04-25 03:28:51.986880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.762 qpair failed and we were unable to recover it. 00:28:17.762 [2024-04-25 03:28:51.987108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.762 [2024-04-25 03:28:51.987313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.762 [2024-04-25 03:28:51.987341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.762 qpair failed and we were unable to recover it. 00:28:17.762 [2024-04-25 03:28:51.987559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.762 [2024-04-25 03:28:51.987779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.762 [2024-04-25 03:28:51.987807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.762 qpair failed and we were unable to recover it. 
00:28:17.762 [2024-04-25 03:28:51.988051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.762 [2024-04-25 03:28:51.988222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.762 [2024-04-25 03:28:51.988247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.762 qpair failed and we were unable to recover it. 00:28:17.762 [2024-04-25 03:28:51.988517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.762 [2024-04-25 03:28:51.988701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.762 [2024-04-25 03:28:51.988728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.762 qpair failed and we were unable to recover it. 00:28:17.763 [2024-04-25 03:28:51.988942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.763 [2024-04-25 03:28:51.989188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.763 [2024-04-25 03:28:51.989215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.763 qpair failed and we were unable to recover it. 00:28:17.763 [2024-04-25 03:28:51.989429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.763 [2024-04-25 03:28:51.989613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.763 [2024-04-25 03:28:51.989648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.763 qpair failed and we were unable to recover it. 
00:28:17.763 [2024-04-25 03:28:51.989850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.763 [2024-04-25 03:28:51.990033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.763 [2024-04-25 03:28:51.990074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.763 qpair failed and we were unable to recover it. 00:28:17.763 [2024-04-25 03:28:51.990271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.763 [2024-04-25 03:28:51.990528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.763 [2024-04-25 03:28:51.990552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.763 qpair failed and we were unable to recover it. 00:28:17.763 [2024-04-25 03:28:51.990766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.763 [2024-04-25 03:28:51.990982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.763 [2024-04-25 03:28:51.991009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.763 qpair failed and we were unable to recover it. 00:28:17.763 [2024-04-25 03:28:51.991224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.763 [2024-04-25 03:28:51.991435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.763 [2024-04-25 03:28:51.991462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.763 qpair failed and we were unable to recover it. 
00:28:17.763 [2024-04-25 03:28:51.991735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.763 [2024-04-25 03:28:51.991937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.763 [2024-04-25 03:28:51.991961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.763 qpair failed and we were unable to recover it. 00:28:17.763 [2024-04-25 03:28:51.992185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.763 [2024-04-25 03:28:51.992388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.763 [2024-04-25 03:28:51.992414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.763 qpair failed and we were unable to recover it. 00:28:17.763 [2024-04-25 03:28:51.992657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.763 [2024-04-25 03:28:51.992904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.763 [2024-04-25 03:28:51.992929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.763 qpair failed and we were unable to recover it. 00:28:17.763 [2024-04-25 03:28:51.993152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.763 [2024-04-25 03:28:51.993372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.763 [2024-04-25 03:28:51.993400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.763 qpair failed and we were unable to recover it. 
00:28:17.763 [2024-04-25 03:28:51.993659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.763 [2024-04-25 03:28:51.993878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.763 [2024-04-25 03:28:51.993915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.763 qpair failed and we were unable to recover it. 00:28:17.763 [2024-04-25 03:28:51.994134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.763 [2024-04-25 03:28:51.994384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.763 [2024-04-25 03:28:51.994411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.763 qpair failed and we were unable to recover it. 00:28:17.763 [2024-04-25 03:28:51.994636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.763 [2024-04-25 03:28:51.994868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.763 [2024-04-25 03:28:51.994896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.763 qpair failed and we were unable to recover it. 00:28:17.763 [2024-04-25 03:28:51.995085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.763 [2024-04-25 03:28:51.995295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.763 [2024-04-25 03:28:51.995321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.763 qpair failed and we were unable to recover it. 
00:28:17.763 [2024-04-25 03:28:51.995573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.763 [2024-04-25 03:28:51.995789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.763 [2024-04-25 03:28:51.995831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.763 qpair failed and we were unable to recover it. 00:28:17.763 [2024-04-25 03:28:51.996053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.763 [2024-04-25 03:28:51.996221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.763 [2024-04-25 03:28:51.996245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.763 qpair failed and we were unable to recover it. 00:28:17.763 [2024-04-25 03:28:51.996442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.763 [2024-04-25 03:28:51.996647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.763 [2024-04-25 03:28:51.996675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.763 qpair failed and we were unable to recover it. 00:28:17.763 [2024-04-25 03:28:51.996890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.763 [2024-04-25 03:28:51.997107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.763 [2024-04-25 03:28:51.997135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.763 qpair failed and we were unable to recover it. 
00:28:17.766 [2024-04-25 03:28:52.036526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.766 [2024-04-25 03:28:52.036746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.766 [2024-04-25 03:28:52.036775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.766 qpair failed and we were unable to recover it. 00:28:17.766 [2024-04-25 03:28:52.036998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.766 [2024-04-25 03:28:52.037213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.766 [2024-04-25 03:28:52.037240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.766 qpair failed and we were unable to recover it. 00:28:17.766 [2024-04-25 03:28:52.037462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.766 [2024-04-25 03:28:52.037682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.766 [2024-04-25 03:28:52.037712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.766 qpair failed and we were unable to recover it. 00:28:17.766 [2024-04-25 03:28:52.037937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.766 [2024-04-25 03:28:52.038149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.766 [2024-04-25 03:28:52.038177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.766 qpair failed and we were unable to recover it. 
00:28:17.766 [2024-04-25 03:28:52.038431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.766 [2024-04-25 03:28:52.038663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.766 [2024-04-25 03:28:52.038688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.766 qpair failed and we were unable to recover it. 00:28:17.766 [2024-04-25 03:28:52.038863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.766 [2024-04-25 03:28:52.039087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.766 [2024-04-25 03:28:52.039112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.766 qpair failed and we were unable to recover it. 00:28:17.766 [2024-04-25 03:28:52.039325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.766 [2024-04-25 03:28:52.039496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.766 [2024-04-25 03:28:52.039521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.766 qpair failed and we were unable to recover it. 00:28:17.766 [2024-04-25 03:28:52.039771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.766 [2024-04-25 03:28:52.039982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.766 [2024-04-25 03:28:52.040011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.767 qpair failed and we were unable to recover it. 
00:28:17.767 [2024-04-25 03:28:52.040196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.767 [2024-04-25 03:28:52.040444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.767 [2024-04-25 03:28:52.040471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.767 qpair failed and we were unable to recover it. 00:28:17.767 [2024-04-25 03:28:52.040696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.767 [2024-04-25 03:28:52.040875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.767 [2024-04-25 03:28:52.040902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.767 qpair failed and we were unable to recover it. 00:28:17.767 [2024-04-25 03:28:52.041143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.767 [2024-04-25 03:28:52.041357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.767 [2024-04-25 03:28:52.041384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.767 qpair failed and we were unable to recover it. 00:28:17.767 [2024-04-25 03:28:52.041600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.767 [2024-04-25 03:28:52.041857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.767 [2024-04-25 03:28:52.041885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.767 qpair failed and we were unable to recover it. 
00:28:17.767 [2024-04-25 03:28:52.042121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.767 [2024-04-25 03:28:52.042374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.767 [2024-04-25 03:28:52.042398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.767 qpair failed and we were unable to recover it. 00:28:17.767 [2024-04-25 03:28:52.042596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.767 [2024-04-25 03:28:52.042799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.767 [2024-04-25 03:28:52.042824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.767 qpair failed and we were unable to recover it. 00:28:17.767 [2024-04-25 03:28:52.043048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.767 [2024-04-25 03:28:52.043260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.767 [2024-04-25 03:28:52.043287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.767 qpair failed and we were unable to recover it. 00:28:17.767 [2024-04-25 03:28:52.043503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.767 [2024-04-25 03:28:52.043697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.767 [2024-04-25 03:28:52.043726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.767 qpair failed and we were unable to recover it. 
00:28:17.767 [2024-04-25 03:28:52.043984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.767 [2024-04-25 03:28:52.044208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.767 [2024-04-25 03:28:52.044236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.767 qpair failed and we were unable to recover it. 00:28:17.767 [2024-04-25 03:28:52.044450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.767 [2024-04-25 03:28:52.044641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.767 [2024-04-25 03:28:52.044668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.767 qpair failed and we were unable to recover it. 00:28:17.767 [2024-04-25 03:28:52.044894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.767 [2024-04-25 03:28:52.045088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.767 [2024-04-25 03:28:52.045113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.767 qpair failed and we were unable to recover it. 00:28:17.767 [2024-04-25 03:28:52.045272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.767 [2024-04-25 03:28:52.045464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.767 [2024-04-25 03:28:52.045490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.767 qpair failed and we were unable to recover it. 
00:28:17.767 [2024-04-25 03:28:52.045713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.767 [2024-04-25 03:28:52.045910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.767 [2024-04-25 03:28:52.045936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.767 qpair failed and we were unable to recover it. 00:28:17.767 [2024-04-25 03:28:52.046108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.767 [2024-04-25 03:28:52.046279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.767 [2024-04-25 03:28:52.046304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.767 qpair failed and we were unable to recover it. 00:28:17.767 [2024-04-25 03:28:52.046494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.767 [2024-04-25 03:28:52.046714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.767 [2024-04-25 03:28:52.046742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.767 qpair failed and we were unable to recover it. 00:28:17.767 [2024-04-25 03:28:52.046922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.767 [2024-04-25 03:28:52.047113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.767 [2024-04-25 03:28:52.047141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.767 qpair failed and we were unable to recover it. 
00:28:17.767 [2024-04-25 03:28:52.047344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.767 [2024-04-25 03:28:52.047569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.767 [2024-04-25 03:28:52.047594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.767 qpair failed and we were unable to recover it. 00:28:17.767 [2024-04-25 03:28:52.047838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.767 [2024-04-25 03:28:52.048041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.767 [2024-04-25 03:28:52.048068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.767 qpair failed and we were unable to recover it. 00:28:17.767 [2024-04-25 03:28:52.048266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.767 [2024-04-25 03:28:52.048459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.767 [2024-04-25 03:28:52.048486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.767 qpair failed and we were unable to recover it. 00:28:17.767 [2024-04-25 03:28:52.048664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.767 [2024-04-25 03:28:52.048906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.767 [2024-04-25 03:28:52.048934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.767 qpair failed and we were unable to recover it. 
00:28:17.767 [2024-04-25 03:28:52.049134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.767 [2024-04-25 03:28:52.049325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.767 [2024-04-25 03:28:52.049349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.767 qpair failed and we were unable to recover it. 00:28:17.767 [2024-04-25 03:28:52.049543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.767 [2024-04-25 03:28:52.049778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.767 [2024-04-25 03:28:52.049803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.767 qpair failed and we were unable to recover it. 00:28:17.767 [2024-04-25 03:28:52.050003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.767 [2024-04-25 03:28:52.050169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.767 [2024-04-25 03:28:52.050194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.767 qpair failed and we were unable to recover it. 00:28:17.767 [2024-04-25 03:28:52.050386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.767 [2024-04-25 03:28:52.050585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.767 [2024-04-25 03:28:52.050613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.767 qpair failed and we were unable to recover it. 
00:28:17.767 [2024-04-25 03:28:52.050955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.767 [2024-04-25 03:28:52.051379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.767 [2024-04-25 03:28:52.051426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.767 qpair failed and we were unable to recover it. 00:28:17.767 [2024-04-25 03:28:52.051622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.767 [2024-04-25 03:28:52.051880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.767 [2024-04-25 03:28:52.051921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.767 qpair failed and we were unable to recover it. 00:28:17.767 [2024-04-25 03:28:52.052228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.767 [2024-04-25 03:28:52.052663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.767 [2024-04-25 03:28:52.052717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.767 qpair failed and we were unable to recover it. 00:28:17.767 [2024-04-25 03:28:52.052949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.767 [2024-04-25 03:28:52.053165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.767 [2024-04-25 03:28:52.053190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.767 qpair failed and we were unable to recover it. 
00:28:17.767 [2024-04-25 03:28:52.053367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.767 [2024-04-25 03:28:52.053633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.767 [2024-04-25 03:28:52.053658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.768 qpair failed and we were unable to recover it. 00:28:17.768 [2024-04-25 03:28:52.053887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.768 [2024-04-25 03:28:52.054099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.768 [2024-04-25 03:28:52.054126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.768 qpair failed and we were unable to recover it. 00:28:17.768 [2024-04-25 03:28:52.054343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.768 [2024-04-25 03:28:52.054558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.768 [2024-04-25 03:28:52.054586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.768 qpair failed and we were unable to recover it. 00:28:17.768 [2024-04-25 03:28:52.054811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.768 [2024-04-25 03:28:52.055032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.768 [2024-04-25 03:28:52.055060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.768 qpair failed and we were unable to recover it. 
00:28:17.768 [2024-04-25 03:28:52.055315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.768 [2024-04-25 03:28:52.055657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.768 [2024-04-25 03:28:52.055709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.768 qpair failed and we were unable to recover it. 00:28:17.768 [2024-04-25 03:28:52.055952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.768 [2024-04-25 03:28:52.056169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.768 [2024-04-25 03:28:52.056196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.768 qpair failed and we were unable to recover it. 00:28:17.768 [2024-04-25 03:28:52.056426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.768 [2024-04-25 03:28:52.056689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.768 [2024-04-25 03:28:52.056714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.768 qpair failed and we were unable to recover it. 00:28:17.768 [2024-04-25 03:28:52.056879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.768 [2024-04-25 03:28:52.057110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.768 [2024-04-25 03:28:52.057137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.768 qpair failed and we were unable to recover it. 
00:28:17.768 [2024-04-25 03:28:52.057322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.768 [2024-04-25 03:28:52.057656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.768 [2024-04-25 03:28:52.057684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.768 qpair failed and we were unable to recover it. 00:28:17.768 [2024-04-25 03:28:52.057895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.768 [2024-04-25 03:28:52.058102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.768 [2024-04-25 03:28:52.058130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.768 qpair failed and we were unable to recover it. 00:28:17.768 [2024-04-25 03:28:52.058383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.768 [2024-04-25 03:28:52.058637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.768 [2024-04-25 03:28:52.058666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.768 qpair failed and we were unable to recover it. 00:28:17.768 [2024-04-25 03:28:52.058853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.768 [2024-04-25 03:28:52.059070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.768 [2024-04-25 03:28:52.059100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.768 qpair failed and we were unable to recover it. 
00:28:17.768 [2024-04-25 03:28:52.059317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.768 [2024-04-25 03:28:52.059533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.768 [2024-04-25 03:28:52.059558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.768 qpair failed and we were unable to recover it. 00:28:17.768 [2024-04-25 03:28:52.059738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.768 [2024-04-25 03:28:52.059960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.768 [2024-04-25 03:28:52.060003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.768 qpair failed and we were unable to recover it. 00:28:17.768 [2024-04-25 03:28:52.060232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.768 [2024-04-25 03:28:52.060463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.768 [2024-04-25 03:28:52.060490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.768 qpair failed and we were unable to recover it. 00:28:17.768 [2024-04-25 03:28:52.060730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.768 [2024-04-25 03:28:52.060943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.768 [2024-04-25 03:28:52.060973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.768 qpair failed and we were unable to recover it. 
00:28:17.768 [2024-04-25 03:28:52.061200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.768 [2024-04-25 03:28:52.061396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.768 [2024-04-25 03:28:52.061425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.768 qpair failed and we were unable to recover it. 00:28:17.768 [2024-04-25 03:28:52.061641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.768 [2024-04-25 03:28:52.061860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.768 [2024-04-25 03:28:52.061885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.768 qpair failed and we were unable to recover it. 00:28:17.768 [2024-04-25 03:28:52.062142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.768 [2024-04-25 03:28:52.062350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.768 [2024-04-25 03:28:52.062378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.768 qpair failed and we were unable to recover it. 00:28:17.768 [2024-04-25 03:28:52.062572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.768 [2024-04-25 03:28:52.062787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.768 [2024-04-25 03:28:52.062815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.768 qpair failed and we were unable to recover it. 
00:28:17.768 [2024-04-25 03:28:52.063011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.768 [2024-04-25 03:28:52.063248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.768 [2024-04-25 03:28:52.063275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.768 qpair failed and we were unable to recover it. 00:28:17.768 [2024-04-25 03:28:52.063496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.768 [2024-04-25 03:28:52.063666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.768 [2024-04-25 03:28:52.063692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.768 qpair failed and we were unable to recover it. 00:28:17.768 [2024-04-25 03:28:52.063911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.768 [2024-04-25 03:28:52.064161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.768 [2024-04-25 03:28:52.064188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.768 qpair failed and we were unable to recover it. 00:28:17.768 [2024-04-25 03:28:52.064437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.768 [2024-04-25 03:28:52.064622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.768 [2024-04-25 03:28:52.064658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.768 qpair failed and we were unable to recover it. 
00:28:17.768 [2024-04-25 03:28:52.064861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.768 [2024-04-25 03:28:52.065050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.768 [2024-04-25 03:28:52.065079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.768 qpair failed and we were unable to recover it. 00:28:17.768 [2024-04-25 03:28:52.065301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.768 [2024-04-25 03:28:52.065688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.768 [2024-04-25 03:28:52.065717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.768 qpair failed and we were unable to recover it. 00:28:17.768 [2024-04-25 03:28:52.065934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.768 [2024-04-25 03:28:52.066376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.768 [2024-04-25 03:28:52.066436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.768 qpair failed and we were unable to recover it. 00:28:17.768 [2024-04-25 03:28:52.066682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.768 [2024-04-25 03:28:52.066876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.768 [2024-04-25 03:28:52.066902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.768 qpair failed and we were unable to recover it. 
00:28:17.768 [2024-04-25 03:28:52.067097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.768 [2024-04-25 03:28:52.067344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.768 [2024-04-25 03:28:52.067372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.768 qpair failed and we were unable to recover it. 00:28:17.768 [2024-04-25 03:28:52.067584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.768 [2024-04-25 03:28:52.067804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.769 [2024-04-25 03:28:52.067832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.769 qpair failed and we were unable to recover it. 00:28:17.769 [2024-04-25 03:28:52.068067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.769 [2024-04-25 03:28:52.068260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.769 [2024-04-25 03:28:52.068289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.769 qpair failed and we were unable to recover it. 00:28:17.769 [2024-04-25 03:28:52.068480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.769 [2024-04-25 03:28:52.068695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.769 [2024-04-25 03:28:52.068723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.769 qpair failed and we were unable to recover it. 
00:28:17.769 [2024-04-25 03:28:52.068903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.769 [2024-04-25 03:28:52.069143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.769 [2024-04-25 03:28:52.069170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.769 qpair failed and we were unable to recover it. 00:28:17.769 [2024-04-25 03:28:52.069387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.769 [2024-04-25 03:28:52.069624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.769 [2024-04-25 03:28:52.069677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.769 qpair failed and we were unable to recover it. 00:28:17.769 [2024-04-25 03:28:52.069918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.769 [2024-04-25 03:28:52.070177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.769 [2024-04-25 03:28:52.070201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.769 qpair failed and we were unable to recover it. 00:28:17.769 [2024-04-25 03:28:52.070393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.769 [2024-04-25 03:28:52.070636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.769 [2024-04-25 03:28:52.070662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.769 qpair failed and we were unable to recover it. 
00:28:17.769 [2024-04-25 03:28:52.070863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.769 [2024-04-25 03:28:52.071230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.769 [2024-04-25 03:28:52.071280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.769 qpair failed and we were unable to recover it. 00:28:17.769 [2024-04-25 03:28:52.071659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.769 [2024-04-25 03:28:52.071897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.769 [2024-04-25 03:28:52.071925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.769 qpair failed and we were unable to recover it. 00:28:17.769 [2024-04-25 03:28:52.072139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.769 [2024-04-25 03:28:52.072362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.769 [2024-04-25 03:28:52.072388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.769 qpair failed and we were unable to recover it. 00:28:17.769 [2024-04-25 03:28:52.072586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.769 [2024-04-25 03:28:52.072818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.769 [2024-04-25 03:28:52.072846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.769 qpair failed and we were unable to recover it. 
00:28:17.769 [2024-04-25 03:28:52.073069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.769 [2024-04-25 03:28:52.073239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.769 [2024-04-25 03:28:52.073264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.769 qpair failed and we were unable to recover it. 00:28:17.769 [2024-04-25 03:28:52.073478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.769 [2024-04-25 03:28:52.073687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.769 [2024-04-25 03:28:52.073728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.769 qpair failed and we were unable to recover it. 00:28:17.769 [2024-04-25 03:28:52.073960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.769 [2024-04-25 03:28:52.074138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.769 [2024-04-25 03:28:52.074166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.769 qpair failed and we were unable to recover it. 00:28:17.769 [2024-04-25 03:28:52.074405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.769 [2024-04-25 03:28:52.074591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.769 [2024-04-25 03:28:52.074618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.769 qpair failed and we were unable to recover it. 
00:28:17.769 [2024-04-25 03:28:52.074850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.769 [2024-04-25 03:28:52.075060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.769 [2024-04-25 03:28:52.075084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.769 qpair failed and we were unable to recover it. 00:28:17.769 [2024-04-25 03:28:52.075310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.769 [2024-04-25 03:28:52.075536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.769 [2024-04-25 03:28:52.075564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.769 qpair failed and we were unable to recover it. 00:28:17.769 [2024-04-25 03:28:52.075818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.769 [2024-04-25 03:28:52.076046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.769 [2024-04-25 03:28:52.076074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.769 qpair failed and we were unable to recover it. 00:28:17.769 [2024-04-25 03:28:52.076269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.769 [2024-04-25 03:28:52.076707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.769 [2024-04-25 03:28:52.076736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.769 qpair failed and we were unable to recover it. 
00:28:17.769 [2024-04-25 03:28:52.076952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.769 [2024-04-25 03:28:52.077213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.769 [2024-04-25 03:28:52.077238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.769 qpair failed and we were unable to recover it. 00:28:17.769 [2024-04-25 03:28:52.077439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.769 [2024-04-25 03:28:52.077604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.769 [2024-04-25 03:28:52.077652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.769 qpair failed and we were unable to recover it. 00:28:17.769 [2024-04-25 03:28:52.077878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.769 [2024-04-25 03:28:52.078098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.769 [2024-04-25 03:28:52.078126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.769 qpair failed and we were unable to recover it. 00:28:17.769 [2024-04-25 03:28:52.078316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.769 [2024-04-25 03:28:52.078526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.769 [2024-04-25 03:28:52.078554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.769 qpair failed and we were unable to recover it. 
00:28:17.769 [2024-04-25 03:28:52.078748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.769 [2024-04-25 03:28:52.078936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.769 [2024-04-25 03:28:52.078960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.769 qpair failed and we were unable to recover it. 00:28:17.769 [2024-04-25 03:28:52.079154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.769 [2024-04-25 03:28:52.079325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.769 [2024-04-25 03:28:52.079350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.769 qpair failed and we were unable to recover it. 00:28:17.770 [2024-04-25 03:28:52.079548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.770 [2024-04-25 03:28:52.079766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.770 [2024-04-25 03:28:52.079794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.770 qpair failed and we were unable to recover it. 00:28:17.770 [2024-04-25 03:28:52.079980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.770 [2024-04-25 03:28:52.080328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.770 [2024-04-25 03:28:52.080385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.770 qpair failed and we were unable to recover it. 
00:28:17.770 [2024-04-25 03:28:52.080639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.770 [2024-04-25 03:28:52.080837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.770 [2024-04-25 03:28:52.080865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.770 qpair failed and we were unable to recover it. 00:28:17.770 [2024-04-25 03:28:52.081080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.770 [2024-04-25 03:28:52.081290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.770 [2024-04-25 03:28:52.081317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.770 qpair failed and we were unable to recover it. 00:28:17.770 [2024-04-25 03:28:52.081532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.770 [2024-04-25 03:28:52.081770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.770 [2024-04-25 03:28:52.081795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.770 qpair failed and we were unable to recover it. 00:28:17.770 [2024-04-25 03:28:52.082014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.770 [2024-04-25 03:28:52.082257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.770 [2024-04-25 03:28:52.082281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.770 qpair failed and we were unable to recover it. 
00:28:17.770 [2024-04-25 03:28:52.082497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.770 [2024-04-25 03:28:52.082758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.770 [2024-04-25 03:28:52.082789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.770 qpair failed and we were unable to recover it. 00:28:17.770 [2024-04-25 03:28:52.082957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.770 [2024-04-25 03:28:52.083151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.770 [2024-04-25 03:28:52.083176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.770 qpair failed and we were unable to recover it. 00:28:17.770 [2024-04-25 03:28:52.083366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.770 [2024-04-25 03:28:52.083611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.770 [2024-04-25 03:28:52.083656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.770 qpair failed and we were unable to recover it. 00:28:17.770 [2024-04-25 03:28:52.083902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.770 [2024-04-25 03:28:52.084110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.770 [2024-04-25 03:28:52.084138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.770 qpair failed and we were unable to recover it. 
00:28:17.770 [2024-04-25 03:28:52.084356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.770 [2024-04-25 03:28:52.084572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.770 [2024-04-25 03:28:52.084599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.770 qpair failed and we were unable to recover it. 00:28:17.770 [2024-04-25 03:28:52.084834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.770 [2024-04-25 03:28:52.085054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.770 [2024-04-25 03:28:52.085082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.770 qpair failed and we were unable to recover it. 00:28:17.770 [2024-04-25 03:28:52.085328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.770 [2024-04-25 03:28:52.085544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.770 [2024-04-25 03:28:52.085573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.770 qpair failed and we were unable to recover it. 00:28:17.770 [2024-04-25 03:28:52.085809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.770 [2024-04-25 03:28:52.085985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.770 [2024-04-25 03:28:52.086010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.770 qpair failed and we were unable to recover it. 
00:28:17.770 [2024-04-25 03:28:52.086204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.770 [2024-04-25 03:28:52.086413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.770 [2024-04-25 03:28:52.086440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.770 qpair failed and we were unable to recover it. 00:28:17.770 [2024-04-25 03:28:52.086652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.770 [2024-04-25 03:28:52.086870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.770 [2024-04-25 03:28:52.086894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.770 qpair failed and we were unable to recover it. 00:28:17.770 [2024-04-25 03:28:52.087065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.770 [2024-04-25 03:28:52.087327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.770 [2024-04-25 03:28:52.087359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.770 qpair failed and we were unable to recover it. 00:28:17.770 [2024-04-25 03:28:52.087574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.770 [2024-04-25 03:28:52.087761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.770 [2024-04-25 03:28:52.087790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.770 qpair failed and we were unable to recover it. 
00:28:17.770 [2024-04-25 03:28:52.088011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.770 [2024-04-25 03:28:52.088251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.770 [2024-04-25 03:28:52.088290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.770 qpair failed and we were unable to recover it. 00:28:17.770 [2024-04-25 03:28:52.088481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.770 [2024-04-25 03:28:52.088696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.770 [2024-04-25 03:28:52.088725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.770 qpair failed and we were unable to recover it. 00:28:17.770 [2024-04-25 03:28:52.088924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.770 [2024-04-25 03:28:52.089148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.770 [2024-04-25 03:28:52.089172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.770 qpair failed and we were unable to recover it. 00:28:17.770 [2024-04-25 03:28:52.089377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.770 [2024-04-25 03:28:52.089583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.770 [2024-04-25 03:28:52.089608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.770 qpair failed and we were unable to recover it. 
00:28:17.770 [2024-04-25 03:28:52.089800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.770 [2024-04-25 03:28:52.090042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.770 [2024-04-25 03:28:52.090070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.770 qpair failed and we were unable to recover it. 00:28:17.770 [2024-04-25 03:28:52.090288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.770 [2024-04-25 03:28:52.090651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.770 [2024-04-25 03:28:52.090711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.770 qpair failed and we were unable to recover it. 00:28:17.770 [2024-04-25 03:28:52.090961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.770 [2024-04-25 03:28:52.091179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.770 [2024-04-25 03:28:52.091204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.770 qpair failed and we were unable to recover it. 00:28:17.770 [2024-04-25 03:28:52.091426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.770 [2024-04-25 03:28:52.091624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.770 [2024-04-25 03:28:52.091671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.770 qpair failed and we were unable to recover it. 
00:28:17.770 [2024-04-25 03:28:52.091899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.770 [2024-04-25 03:28:52.092094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.770 [2024-04-25 03:28:52.092126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.770 qpair failed and we were unable to recover it. 00:28:17.770 [2024-04-25 03:28:52.092351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.770 [2024-04-25 03:28:52.092567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.770 [2024-04-25 03:28:52.092593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.770 qpair failed and we were unable to recover it. 00:28:17.770 [2024-04-25 03:28:52.092868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.770 [2024-04-25 03:28:52.093054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.770 [2024-04-25 03:28:52.093082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.770 qpair failed and we were unable to recover it. 00:28:17.771 [2024-04-25 03:28:52.093297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.771 [2024-04-25 03:28:52.093535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.771 [2024-04-25 03:28:52.093563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.771 qpair failed and we were unable to recover it. 
00:28:17.771 [2024-04-25 03:28:52.093811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.771 [2024-04-25 03:28:52.093994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.771 [2024-04-25 03:28:52.094021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.771 qpair failed and we were unable to recover it. 00:28:17.771 [2024-04-25 03:28:52.094265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.771 [2024-04-25 03:28:52.094480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.771 [2024-04-25 03:28:52.094507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.771 qpair failed and we were unable to recover it. 00:28:17.771 [2024-04-25 03:28:52.094760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.771 [2024-04-25 03:28:52.094998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.771 [2024-04-25 03:28:52.095023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.771 qpair failed and we were unable to recover it. 00:28:17.771 [2024-04-25 03:28:52.095221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.771 [2024-04-25 03:28:52.095440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.771 [2024-04-25 03:28:52.095467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.771 qpair failed and we were unable to recover it. 
00:28:17.771 [2024-04-25 03:28:52.095678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.771 [2024-04-25 03:28:52.095881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.771 [2024-04-25 03:28:52.095905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.771 qpair failed and we were unable to recover it. 00:28:17.771 [2024-04-25 03:28:52.096102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.771 [2024-04-25 03:28:52.096354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.771 [2024-04-25 03:28:52.096378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.771 qpair failed and we were unable to recover it. 00:28:17.771 [2024-04-25 03:28:52.096622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.771 [2024-04-25 03:28:52.096812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.771 [2024-04-25 03:28:52.096844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.771 qpair failed and we were unable to recover it. 00:28:17.771 [2024-04-25 03:28:52.097060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.771 [2024-04-25 03:28:52.097248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.771 [2024-04-25 03:28:52.097275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.771 qpair failed and we were unable to recover it. 
00:28:17.771 [2024-04-25 03:28:52.097489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.771 [2024-04-25 03:28:52.097751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.771 [2024-04-25 03:28:52.097779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.771 qpair failed and we were unable to recover it. 00:28:17.771 [2024-04-25 03:28:52.098004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.771 [2024-04-25 03:28:52.098175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.771 [2024-04-25 03:28:52.098200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.771 qpair failed and we were unable to recover it. 00:28:17.771 [2024-04-25 03:28:52.098409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.771 [2024-04-25 03:28:52.098605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.771 [2024-04-25 03:28:52.098637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.771 qpair failed and we were unable to recover it. 00:28:17.771 [2024-04-25 03:28:52.098861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.771 [2024-04-25 03:28:52.099081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.771 [2024-04-25 03:28:52.099108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.771 qpair failed and we were unable to recover it. 
00:28:17.774 [2024-04-25 03:28:52.136356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.774 [2024-04-25 03:28:52.136566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.774 [2024-04-25 03:28:52.136589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.774 qpair failed and we were unable to recover it. 00:28:17.774 [2024-04-25 03:28:52.136894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.774 [2024-04-25 03:28:52.137146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.774 [2024-04-25 03:28:52.137170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.774 qpair failed and we were unable to recover it. 00:28:17.774 [2024-04-25 03:28:52.137367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.774 [2024-04-25 03:28:52.137575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.774 [2024-04-25 03:28:52.137599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.774 qpair failed and we were unable to recover it. 00:28:17.774 [2024-04-25 03:28:52.137855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.774 [2024-04-25 03:28:52.138052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.774 [2024-04-25 03:28:52.138077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.774 qpair failed and we were unable to recover it. 
00:28:17.774 [2024-04-25 03:28:52.138355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.774 [2024-04-25 03:28:52.138548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.774 [2024-04-25 03:28:52.138577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.774 qpair failed and we were unable to recover it. 00:28:17.774 [2024-04-25 03:28:52.138789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.774 [2024-04-25 03:28:52.138993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.774 [2024-04-25 03:28:52.139018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.774 qpair failed and we were unable to recover it. 00:28:17.774 [2024-04-25 03:28:52.139269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.774 [2024-04-25 03:28:52.139454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.774 [2024-04-25 03:28:52.139493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.774 qpair failed and we were unable to recover it. 00:28:17.774 [2024-04-25 03:28:52.139683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.774 [2024-04-25 03:28:52.139853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.774 [2024-04-25 03:28:52.139878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.774 qpair failed and we were unable to recover it. 
00:28:17.774 [2024-04-25 03:28:52.140078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.774 [2024-04-25 03:28:52.140326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.774 [2024-04-25 03:28:52.140350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.774 qpair failed and we were unable to recover it. 00:28:17.774 [2024-04-25 03:28:52.140555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.774 [2024-04-25 03:28:52.140718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.774 [2024-04-25 03:28:52.140743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.774 qpair failed and we were unable to recover it. 00:28:17.774 [2024-04-25 03:28:52.140928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.774 [2024-04-25 03:28:52.141227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.774 [2024-04-25 03:28:52.141251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.774 qpair failed and we were unable to recover it. 00:28:17.774 [2024-04-25 03:28:52.141482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.774 [2024-04-25 03:28:52.141676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.774 [2024-04-25 03:28:52.141701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.774 qpair failed and we were unable to recover it. 
00:28:17.774 [2024-04-25 03:28:52.141925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.774 [2024-04-25 03:28:52.142116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.774 [2024-04-25 03:28:52.142141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.774 qpair failed and we were unable to recover it. 00:28:17.774 [2024-04-25 03:28:52.142332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.774 [2024-04-25 03:28:52.142532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.774 [2024-04-25 03:28:52.142557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.774 qpair failed and we were unable to recover it. 00:28:17.774 [2024-04-25 03:28:52.142767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.774 [2024-04-25 03:28:52.142941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.774 [2024-04-25 03:28:52.142965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.774 qpair failed and we were unable to recover it. 00:28:17.774 [2024-04-25 03:28:52.143141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.774 [2024-04-25 03:28:52.143355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.774 [2024-04-25 03:28:52.143381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.774 qpair failed and we were unable to recover it. 
00:28:17.774 [2024-04-25 03:28:52.143609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.774 [2024-04-25 03:28:52.143858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.774 [2024-04-25 03:28:52.143883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.774 qpair failed and we were unable to recover it. 00:28:17.774 [2024-04-25 03:28:52.144139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.774 [2024-04-25 03:28:52.144405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.774 [2024-04-25 03:28:52.144432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.774 qpair failed and we were unable to recover it. 00:28:17.775 [2024-04-25 03:28:52.144761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.775 [2024-04-25 03:28:52.144936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.775 [2024-04-25 03:28:52.144961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.775 qpair failed and we were unable to recover it. 00:28:17.775 [2024-04-25 03:28:52.145159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.775 [2024-04-25 03:28:52.145527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.775 [2024-04-25 03:28:52.145552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.775 qpair failed and we were unable to recover it. 
00:28:17.775 [2024-04-25 03:28:52.145765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.775 [2024-04-25 03:28:52.145948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.775 [2024-04-25 03:28:52.145973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.775 qpair failed and we were unable to recover it. 00:28:17.775 [2024-04-25 03:28:52.146182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.775 [2024-04-25 03:28:52.146447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.775 [2024-04-25 03:28:52.146471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.775 qpair failed and we were unable to recover it. 00:28:17.775 [2024-04-25 03:28:52.146704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.775 [2024-04-25 03:28:52.146880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.775 [2024-04-25 03:28:52.146904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.775 qpair failed and we were unable to recover it. 00:28:17.775 [2024-04-25 03:28:52.147160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.775 [2024-04-25 03:28:52.147437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.775 [2024-04-25 03:28:52.147461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.775 qpair failed and we were unable to recover it. 
00:28:17.775 [2024-04-25 03:28:52.147643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.775 [2024-04-25 03:28:52.147875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.775 [2024-04-25 03:28:52.147900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.775 qpair failed and we were unable to recover it. 00:28:17.775 [2024-04-25 03:28:52.148099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.775 [2024-04-25 03:28:52.148290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.775 [2024-04-25 03:28:52.148316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.775 qpair failed and we were unable to recover it. 00:28:17.775 [2024-04-25 03:28:52.148542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.775 [2024-04-25 03:28:52.148746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.775 [2024-04-25 03:28:52.148771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.775 qpair failed and we were unable to recover it. 00:28:17.775 [2024-04-25 03:28:52.148970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.775 [2024-04-25 03:28:52.149192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.775 [2024-04-25 03:28:52.149216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.775 qpair failed and we were unable to recover it. 
00:28:17.775 [2024-04-25 03:28:52.149408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.775 [2024-04-25 03:28:52.149577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.775 [2024-04-25 03:28:52.149601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.775 qpair failed and we were unable to recover it. 00:28:17.775 [2024-04-25 03:28:52.149817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.775 [2024-04-25 03:28:52.149986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.775 [2024-04-25 03:28:52.150025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.775 qpair failed and we were unable to recover it. 00:28:17.775 [2024-04-25 03:28:52.150254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.775 [2024-04-25 03:28:52.150528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.775 [2024-04-25 03:28:52.150553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.775 qpair failed and we were unable to recover it. 00:28:17.775 [2024-04-25 03:28:52.150781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.775 [2024-04-25 03:28:52.150977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.775 [2024-04-25 03:28:52.151003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.775 qpair failed and we were unable to recover it. 
00:28:17.775 [2024-04-25 03:28:52.151258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.775 [2024-04-25 03:28:52.151580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.775 [2024-04-25 03:28:52.151636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.775 qpair failed and we were unable to recover it. 00:28:17.775 [2024-04-25 03:28:52.151931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.775 [2024-04-25 03:28:52.152159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.775 [2024-04-25 03:28:52.152183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.775 qpair failed and we were unable to recover it. 00:28:17.775 [2024-04-25 03:28:52.152348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.775 [2024-04-25 03:28:52.152602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.775 [2024-04-25 03:28:52.152661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.775 qpair failed and we were unable to recover it. 00:28:17.775 [2024-04-25 03:28:52.152832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.775 [2024-04-25 03:28:52.153119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.775 [2024-04-25 03:28:52.153143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.775 qpair failed and we were unable to recover it. 
00:28:17.775 [2024-04-25 03:28:52.153361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.775 [2024-04-25 03:28:52.153590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.775 [2024-04-25 03:28:52.153617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.775 qpair failed and we were unable to recover it. 00:28:17.775 [2024-04-25 03:28:52.153876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.775 [2024-04-25 03:28:52.154057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.775 [2024-04-25 03:28:52.154083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.775 qpair failed and we were unable to recover it. 00:28:17.775 [2024-04-25 03:28:52.154273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.775 [2024-04-25 03:28:52.154538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.775 [2024-04-25 03:28:52.154587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.775 qpair failed and we were unable to recover it. 00:28:17.775 [2024-04-25 03:28:52.154837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.775 [2024-04-25 03:28:52.155002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.775 [2024-04-25 03:28:52.155026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.775 qpair failed and we were unable to recover it. 
00:28:17.775 [2024-04-25 03:28:52.155258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.775 [2024-04-25 03:28:52.155666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.775 [2024-04-25 03:28:52.155726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.775 qpair failed and we were unable to recover it. 00:28:17.775 [2024-04-25 03:28:52.155958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.775 [2024-04-25 03:28:52.156180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.775 [2024-04-25 03:28:52.156205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.775 qpair failed and we were unable to recover it. 00:28:17.775 [2024-04-25 03:28:52.156402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.775 [2024-04-25 03:28:52.156643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.775 [2024-04-25 03:28:52.156684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.775 qpair failed and we were unable to recover it. 00:28:17.775 [2024-04-25 03:28:52.156883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.775 [2024-04-25 03:28:52.157078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.775 [2024-04-25 03:28:52.157102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.775 qpair failed and we were unable to recover it. 
00:28:17.775 [2024-04-25 03:28:52.157298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.775 [2024-04-25 03:28:52.157481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.775 [2024-04-25 03:28:52.157505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.775 qpair failed and we were unable to recover it. 00:28:17.775 [2024-04-25 03:28:52.157739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.775 [2024-04-25 03:28:52.157971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.775 [2024-04-25 03:28:52.158000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.775 qpair failed and we were unable to recover it. 00:28:17.775 [2024-04-25 03:28:52.158170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.775 [2024-04-25 03:28:52.158367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.775 [2024-04-25 03:28:52.158392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.776 qpair failed and we were unable to recover it. 00:28:17.776 [2024-04-25 03:28:52.158593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.776 [2024-04-25 03:28:52.158824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.776 [2024-04-25 03:28:52.158849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.776 qpair failed and we were unable to recover it. 
00:28:17.776 [2024-04-25 03:28:52.159069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.776 [2024-04-25 03:28:52.159325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.776 [2024-04-25 03:28:52.159364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.776 qpair failed and we were unable to recover it. 00:28:17.776 [2024-04-25 03:28:52.159566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.776 [2024-04-25 03:28:52.159743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.776 [2024-04-25 03:28:52.159768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.776 qpair failed and we were unable to recover it. 00:28:17.776 [2024-04-25 03:28:52.159963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.776 [2024-04-25 03:28:52.160203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.776 [2024-04-25 03:28:52.160227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.776 qpair failed and we were unable to recover it. 00:28:17.776 [2024-04-25 03:28:52.160421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.776 [2024-04-25 03:28:52.160588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.776 [2024-04-25 03:28:52.160612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.776 qpair failed and we were unable to recover it. 
00:28:17.776 [2024-04-25 03:28:52.160847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.776 [2024-04-25 03:28:52.161011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.776 [2024-04-25 03:28:52.161036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.776 qpair failed and we were unable to recover it. 00:28:17.776 [2024-04-25 03:28:52.161216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.776 [2024-04-25 03:28:52.161422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.776 [2024-04-25 03:28:52.161448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.776 qpair failed and we were unable to recover it. 00:28:17.776 [2024-04-25 03:28:52.161700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.776 [2024-04-25 03:28:52.161921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.776 [2024-04-25 03:28:52.161946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.776 qpair failed and we were unable to recover it. 00:28:17.776 [2024-04-25 03:28:52.162163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.776 [2024-04-25 03:28:52.162360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.776 [2024-04-25 03:28:52.162388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.776 qpair failed and we were unable to recover it. 
00:28:17.776 [2024-04-25 03:28:52.162617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.776 [2024-04-25 03:28:52.162846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.776 [2024-04-25 03:28:52.162870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.776 qpair failed and we were unable to recover it. 00:28:17.776 [2024-04-25 03:28:52.163096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.776 [2024-04-25 03:28:52.163292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.776 [2024-04-25 03:28:52.163316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.776 qpair failed and we were unable to recover it. 00:28:17.776 [2024-04-25 03:28:52.163517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.776 [2024-04-25 03:28:52.163719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.776 [2024-04-25 03:28:52.163744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.776 qpair failed and we were unable to recover it. 00:28:17.776 [2024-04-25 03:28:52.163945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.776 [2024-04-25 03:28:52.164147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.776 [2024-04-25 03:28:52.164171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.776 qpair failed and we were unable to recover it. 
00:28:17.776 [2024-04-25 03:28:52.164364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.776 [2024-04-25 03:28:52.164574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.776 [2024-04-25 03:28:52.164599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.776 qpair failed and we were unable to recover it. 00:28:17.776 [2024-04-25 03:28:52.164772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.776 [2024-04-25 03:28:52.164944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.776 [2024-04-25 03:28:52.164970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.776 qpair failed and we were unable to recover it. 00:28:17.776 [2024-04-25 03:28:52.165171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.776 [2024-04-25 03:28:52.165366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.776 [2024-04-25 03:28:52.165390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.776 qpair failed and we were unable to recover it. 00:28:17.776 [2024-04-25 03:28:52.165606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.776 [2024-04-25 03:28:52.165828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.776 [2024-04-25 03:28:52.165853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.776 qpair failed and we were unable to recover it. 
00:28:17.779 [2024-04-25 03:28:52.204439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.779 [2024-04-25 03:28:52.204663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.779 [2024-04-25 03:28:52.204688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.779 qpair failed and we were unable to recover it. 00:28:17.779 [2024-04-25 03:28:52.204861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.779 [2024-04-25 03:28:52.205038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.779 [2024-04-25 03:28:52.205062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.779 qpair failed and we were unable to recover it. 00:28:17.779 [2024-04-25 03:28:52.205335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.779 [2024-04-25 03:28:52.205559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.779 [2024-04-25 03:28:52.205585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.779 qpair failed and we were unable to recover it. 00:28:17.779 [2024-04-25 03:28:52.205826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.779 [2024-04-25 03:28:52.206038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.779 [2024-04-25 03:28:52.206063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.779 qpair failed and we were unable to recover it. 
00:28:17.779 [2024-04-25 03:28:52.206245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.779 [2024-04-25 03:28:52.206587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.779 [2024-04-25 03:28:52.206655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.779 qpair failed and we were unable to recover it. 00:28:17.779 [2024-04-25 03:28:52.206898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.779 [2024-04-25 03:28:52.207111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.779 [2024-04-25 03:28:52.207135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.779 qpair failed and we were unable to recover it. 00:28:17.779 [2024-04-25 03:28:52.207372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.779 [2024-04-25 03:28:52.207569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.779 [2024-04-25 03:28:52.207595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.779 qpair failed and we were unable to recover it. 00:28:17.779 [2024-04-25 03:28:52.207813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.779 [2024-04-25 03:28:52.208012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.779 [2024-04-25 03:28:52.208037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.779 qpair failed and we were unable to recover it. 
00:28:17.779 [2024-04-25 03:28:52.208249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.779 [2024-04-25 03:28:52.208453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.779 [2024-04-25 03:28:52.208477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.779 qpair failed and we were unable to recover it. 00:28:17.779 [2024-04-25 03:28:52.208751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.779 [2024-04-25 03:28:52.209002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.779 [2024-04-25 03:28:52.209026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.779 qpair failed and we were unable to recover it. 00:28:17.780 [2024-04-25 03:28:52.209265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.780 [2024-04-25 03:28:52.209475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.780 [2024-04-25 03:28:52.209500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.780 qpair failed and we were unable to recover it. 00:28:17.780 [2024-04-25 03:28:52.209708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.780 [2024-04-25 03:28:52.209913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.780 [2024-04-25 03:28:52.209938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.780 qpair failed and we were unable to recover it. 
00:28:17.780 [2024-04-25 03:28:52.210166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.780 [2024-04-25 03:28:52.210362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.780 [2024-04-25 03:28:52.210402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.780 qpair failed and we were unable to recover it. 00:28:17.780 [2024-04-25 03:28:52.210761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.780 [2024-04-25 03:28:52.210927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.780 [2024-04-25 03:28:52.210968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.780 qpair failed and we were unable to recover it. 00:28:17.780 [2024-04-25 03:28:52.211186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.780 [2024-04-25 03:28:52.211401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.780 [2024-04-25 03:28:52.211427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.780 qpair failed and we were unable to recover it. 00:28:17.780 [2024-04-25 03:28:52.211735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.780 [2024-04-25 03:28:52.211938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.780 [2024-04-25 03:28:52.211962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.780 qpair failed and we were unable to recover it. 
00:28:17.780 [2024-04-25 03:28:52.212130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.780 [2024-04-25 03:28:52.212360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.780 [2024-04-25 03:28:52.212384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.780 qpair failed and we were unable to recover it. 00:28:17.780 [2024-04-25 03:28:52.212609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.780 [2024-04-25 03:28:52.212843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.780 [2024-04-25 03:28:52.212870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.780 qpair failed and we were unable to recover it. 00:28:17.780 [2024-04-25 03:28:52.213091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.780 [2024-04-25 03:28:52.213458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.780 [2024-04-25 03:28:52.213482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.780 qpair failed and we were unable to recover it. 00:28:17.780 [2024-04-25 03:28:52.213728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.780 [2024-04-25 03:28:52.213928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.780 [2024-04-25 03:28:52.213952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.780 qpair failed and we were unable to recover it. 
00:28:17.780 [2024-04-25 03:28:52.214223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.780 [2024-04-25 03:28:52.214416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.780 [2024-04-25 03:28:52.214441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.780 qpair failed and we were unable to recover it. 00:28:17.780 [2024-04-25 03:28:52.214676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.780 [2024-04-25 03:28:52.214838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.780 [2024-04-25 03:28:52.214862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.780 qpair failed and we were unable to recover it. 00:28:17.780 [2024-04-25 03:28:52.215091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.780 [2024-04-25 03:28:52.215316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.780 [2024-04-25 03:28:52.215341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.780 qpair failed and we were unable to recover it. 00:28:17.780 [2024-04-25 03:28:52.215641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.780 [2024-04-25 03:28:52.215851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.780 [2024-04-25 03:28:52.215876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.780 qpair failed and we were unable to recover it. 
00:28:17.780 [2024-04-25 03:28:52.216070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.780 [2024-04-25 03:28:52.216294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.780 [2024-04-25 03:28:52.216318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.780 qpair failed and we were unable to recover it. 00:28:17.780 [2024-04-25 03:28:52.216484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.780 [2024-04-25 03:28:52.216683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.780 [2024-04-25 03:28:52.216708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.780 qpair failed and we were unable to recover it. 00:28:17.780 [2024-04-25 03:28:52.216999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.780 [2024-04-25 03:28:52.217212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.780 [2024-04-25 03:28:52.217236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.780 qpair failed and we were unable to recover it. 00:28:17.780 [2024-04-25 03:28:52.217446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.780 [2024-04-25 03:28:52.217715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.780 [2024-04-25 03:28:52.217742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.780 qpair failed and we were unable to recover it. 
00:28:17.780 [2024-04-25 03:28:52.217921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.780 [2024-04-25 03:28:52.218215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.780 [2024-04-25 03:28:52.218239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.780 qpair failed and we were unable to recover it. 00:28:17.780 [2024-04-25 03:28:52.218433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.780 [2024-04-25 03:28:52.218615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.780 [2024-04-25 03:28:52.218662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.780 qpair failed and we were unable to recover it. 00:28:17.780 [2024-04-25 03:28:52.218849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.780 [2024-04-25 03:28:52.219086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.780 [2024-04-25 03:28:52.219110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.780 qpair failed and we were unable to recover it. 00:28:17.780 [2024-04-25 03:28:52.219328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.780 [2024-04-25 03:28:52.219537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.780 [2024-04-25 03:28:52.219561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.780 qpair failed and we were unable to recover it. 
00:28:17.780 [2024-04-25 03:28:52.219803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.780 [2024-04-25 03:28:52.220027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.780 [2024-04-25 03:28:52.220051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.780 qpair failed and we were unable to recover it. 00:28:17.780 [2024-04-25 03:28:52.220307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.780 [2024-04-25 03:28:52.220494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.780 [2024-04-25 03:28:52.220518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.780 qpair failed and we were unable to recover it. 00:28:17.780 [2024-04-25 03:28:52.220755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.780 [2024-04-25 03:28:52.220934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.780 [2024-04-25 03:28:52.220958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.780 qpair failed and we were unable to recover it. 00:28:17.780 [2024-04-25 03:28:52.221160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.780 [2024-04-25 03:28:52.221353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.780 [2024-04-25 03:28:52.221378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.780 qpair failed and we were unable to recover it. 
00:28:17.780 [2024-04-25 03:28:52.221575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.780 [2024-04-25 03:28:52.221823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.780 [2024-04-25 03:28:52.221850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.780 qpair failed and we were unable to recover it. 00:28:17.780 [2024-04-25 03:28:52.222071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.780 [2024-04-25 03:28:52.222271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.780 [2024-04-25 03:28:52.222295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.780 qpair failed and we were unable to recover it. 00:28:17.780 [2024-04-25 03:28:52.222496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.780 [2024-04-25 03:28:52.222738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.780 [2024-04-25 03:28:52.222763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.780 qpair failed and we were unable to recover it. 00:28:17.781 [2024-04-25 03:28:52.222929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.781 [2024-04-25 03:28:52.223156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.781 [2024-04-25 03:28:52.223181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.781 qpair failed and we were unable to recover it. 
00:28:17.781 [2024-04-25 03:28:52.223381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.781 [2024-04-25 03:28:52.223670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.781 [2024-04-25 03:28:52.223695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.781 qpair failed and we were unable to recover it. 00:28:17.781 [2024-04-25 03:28:52.223869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.781 [2024-04-25 03:28:52.224074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.781 [2024-04-25 03:28:52.224100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.781 qpair failed and we were unable to recover it. 00:28:17.781 [2024-04-25 03:28:52.224300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.781 [2024-04-25 03:28:52.224501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.781 [2024-04-25 03:28:52.224526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.781 qpair failed and we were unable to recover it. 00:28:17.781 [2024-04-25 03:28:52.224725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.781 [2024-04-25 03:28:52.224944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.781 [2024-04-25 03:28:52.224969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.781 qpair failed and we were unable to recover it. 
00:28:17.781 [2024-04-25 03:28:52.225136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.781 [2024-04-25 03:28:52.225350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.781 [2024-04-25 03:28:52.225374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.781 qpair failed and we were unable to recover it. 00:28:17.781 [2024-04-25 03:28:52.225667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.781 [2024-04-25 03:28:52.225890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.781 [2024-04-25 03:28:52.225915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.781 qpair failed and we were unable to recover it. 00:28:17.781 [2024-04-25 03:28:52.226087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.781 [2024-04-25 03:28:52.226280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.781 [2024-04-25 03:28:52.226304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.781 qpair failed and we were unable to recover it. 00:28:17.781 [2024-04-25 03:28:52.226510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.781 [2024-04-25 03:28:52.226711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.781 [2024-04-25 03:28:52.226736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.781 qpair failed and we were unable to recover it. 
00:28:17.781 [2024-04-25 03:28:52.226967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.781 [2024-04-25 03:28:52.227163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.781 [2024-04-25 03:28:52.227187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.781 qpair failed and we were unable to recover it. 00:28:17.781 [2024-04-25 03:28:52.227417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.781 [2024-04-25 03:28:52.227605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.781 [2024-04-25 03:28:52.227636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.781 qpair failed and we were unable to recover it. 00:28:17.781 [2024-04-25 03:28:52.227836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.781 [2024-04-25 03:28:52.228054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.781 [2024-04-25 03:28:52.228079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.781 qpair failed and we were unable to recover it. 00:28:17.781 [2024-04-25 03:28:52.228336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.781 [2024-04-25 03:28:52.228550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.781 [2024-04-25 03:28:52.228575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.781 qpair failed and we were unable to recover it. 
00:28:17.781 [2024-04-25 03:28:52.228805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.781 [2024-04-25 03:28:52.228995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.781 [2024-04-25 03:28:52.229020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.781 qpair failed and we were unable to recover it. 00:28:17.781 [2024-04-25 03:28:52.229201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.781 [2024-04-25 03:28:52.229439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.781 [2024-04-25 03:28:52.229464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.781 qpair failed and we were unable to recover it. 00:28:17.781 [2024-04-25 03:28:52.229685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.781 [2024-04-25 03:28:52.229893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.781 [2024-04-25 03:28:52.229918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.781 qpair failed and we were unable to recover it. 00:28:17.781 [2024-04-25 03:28:52.230089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.781 [2024-04-25 03:28:52.230313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.781 [2024-04-25 03:28:52.230337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.781 qpair failed and we were unable to recover it. 
00:28:17.781 [2024-04-25 03:28:52.230587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.781 [2024-04-25 03:28:52.230830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.781 [2024-04-25 03:28:52.230856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.781 qpair failed and we were unable to recover it. 00:28:17.781 [2024-04-25 03:28:52.231064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.781 [2024-04-25 03:28:52.231321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.781 [2024-04-25 03:28:52.231361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.781 qpair failed and we were unable to recover it. 00:28:17.781 [2024-04-25 03:28:52.231600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.781 [2024-04-25 03:28:52.231841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.781 [2024-04-25 03:28:52.231870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.781 qpair failed and we were unable to recover it. 00:28:17.781 [2024-04-25 03:28:52.232057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.781 [2024-04-25 03:28:52.232238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.781 [2024-04-25 03:28:52.232263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:17.781 qpair failed and we were unable to recover it. 
00:28:17.781 [... the same four-entry sequence — two posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111, then nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." — repeats with successive timestamps from 2024-04-25 03:28:52.232490 through 03:28:52.268808 (elapsed 00:28:17.781 to 00:28:18.056) ...]
00:28:18.056 [2024-04-25 03:28:52.269014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.056 [2024-04-25 03:28:52.269207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.056 [2024-04-25 03:28:52.269231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.056 qpair failed and we were unable to recover it. 00:28:18.056 [2024-04-25 03:28:52.269405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.056 [2024-04-25 03:28:52.269636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.056 [2024-04-25 03:28:52.269665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.056 qpair failed and we were unable to recover it. 00:28:18.056 [2024-04-25 03:28:52.269887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.056 [2024-04-25 03:28:52.270093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.056 [2024-04-25 03:28:52.270118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.056 qpair failed and we were unable to recover it. 00:28:18.056 [2024-04-25 03:28:52.270344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.056 [2024-04-25 03:28:52.270541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.056 [2024-04-25 03:28:52.270565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.056 qpair failed and we were unable to recover it. 
00:28:18.056 [2024-04-25 03:28:52.270772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.056 [2024-04-25 03:28:52.270939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.056 [2024-04-25 03:28:52.270964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.056 qpair failed and we were unable to recover it. 00:28:18.056 [2024-04-25 03:28:52.271137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.056 [2024-04-25 03:28:52.271302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.056 [2024-04-25 03:28:52.271327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.056 qpair failed and we were unable to recover it. 00:28:18.056 [2024-04-25 03:28:52.271545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.056 [2024-04-25 03:28:52.271738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.056 [2024-04-25 03:28:52.271765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.056 qpair failed and we were unable to recover it. 00:28:18.056 [2024-04-25 03:28:52.272006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.056 [2024-04-25 03:28:52.272202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.056 [2024-04-25 03:28:52.272226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.056 qpair failed and we were unable to recover it. 
00:28:18.056 [2024-04-25 03:28:52.272456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.056 [2024-04-25 03:28:52.272720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.056 [2024-04-25 03:28:52.272746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.056 qpair failed and we were unable to recover it. 00:28:18.056 [2024-04-25 03:28:52.272945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.057 [2024-04-25 03:28:52.273109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.057 [2024-04-25 03:28:52.273134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.057 qpair failed and we were unable to recover it. 00:28:18.057 [2024-04-25 03:28:52.273300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.057 [2024-04-25 03:28:52.273506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.057 [2024-04-25 03:28:52.273531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.057 qpair failed and we were unable to recover it. 00:28:18.057 [2024-04-25 03:28:52.273741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.057 [2024-04-25 03:28:52.273950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.057 [2024-04-25 03:28:52.273974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.057 qpair failed and we were unable to recover it. 
00:28:18.057 [2024-04-25 03:28:52.274200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.057 [2024-04-25 03:28:52.274391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.057 [2024-04-25 03:28:52.274416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.057 qpair failed and we were unable to recover it. 00:28:18.057 [2024-04-25 03:28:52.274613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.057 [2024-04-25 03:28:52.274815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.057 [2024-04-25 03:28:52.274840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.057 qpair failed and we were unable to recover it. 00:28:18.057 [2024-04-25 03:28:52.275005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.057 [2024-04-25 03:28:52.275172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.057 [2024-04-25 03:28:52.275198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.057 qpair failed and we were unable to recover it. 00:28:18.057 [2024-04-25 03:28:52.275399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.057 [2024-04-25 03:28:52.275602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.057 [2024-04-25 03:28:52.275626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.057 qpair failed and we were unable to recover it. 
00:28:18.057 [2024-04-25 03:28:52.275819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.057 [2024-04-25 03:28:52.276014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.057 [2024-04-25 03:28:52.276038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.057 qpair failed and we were unable to recover it. 00:28:18.057 [2024-04-25 03:28:52.276222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.057 [2024-04-25 03:28:52.276440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.057 [2024-04-25 03:28:52.276464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.057 qpair failed and we were unable to recover it. 00:28:18.057 [2024-04-25 03:28:52.276657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.057 [2024-04-25 03:28:52.276832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.057 [2024-04-25 03:28:52.276856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.057 qpair failed and we were unable to recover it. 00:28:18.057 [2024-04-25 03:28:52.277047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.057 [2024-04-25 03:28:52.277277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.057 [2024-04-25 03:28:52.277301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.057 qpair failed and we were unable to recover it. 
00:28:18.057 [2024-04-25 03:28:52.277523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.057 [2024-04-25 03:28:52.277740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.057 [2024-04-25 03:28:52.277766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.057 qpair failed and we were unable to recover it. 00:28:18.057 [2024-04-25 03:28:52.277961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.057 [2024-04-25 03:28:52.278156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.057 [2024-04-25 03:28:52.278180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.057 qpair failed and we were unable to recover it. 00:28:18.057 [2024-04-25 03:28:52.278375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.057 [2024-04-25 03:28:52.278569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.057 [2024-04-25 03:28:52.278595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.057 qpair failed and we were unable to recover it. 00:28:18.057 [2024-04-25 03:28:52.278782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.057 [2024-04-25 03:28:52.278972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.057 [2024-04-25 03:28:52.278998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.057 qpair failed and we were unable to recover it. 
00:28:18.057 [2024-04-25 03:28:52.279202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.057 [2024-04-25 03:28:52.279396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.057 [2024-04-25 03:28:52.279421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.057 qpair failed and we were unable to recover it. 00:28:18.057 [2024-04-25 03:28:52.279622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.057 [2024-04-25 03:28:52.279805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.057 [2024-04-25 03:28:52.279831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.057 qpair failed and we were unable to recover it. 00:28:18.057 [2024-04-25 03:28:52.279999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.057 [2024-04-25 03:28:52.280190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.057 [2024-04-25 03:28:52.280215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.057 qpair failed and we were unable to recover it. 00:28:18.057 [2024-04-25 03:28:52.280438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.057 [2024-04-25 03:28:52.280644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.057 [2024-04-25 03:28:52.280671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.057 qpair failed and we were unable to recover it. 
00:28:18.057 [2024-04-25 03:28:52.280867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.057 [2024-04-25 03:28:52.281060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.057 [2024-04-25 03:28:52.281085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.057 qpair failed and we were unable to recover it. 00:28:18.057 [2024-04-25 03:28:52.281276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.057 [2024-04-25 03:28:52.281467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.057 [2024-04-25 03:28:52.281492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.057 qpair failed and we were unable to recover it. 00:28:18.057 [2024-04-25 03:28:52.281699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.057 [2024-04-25 03:28:52.281869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.057 [2024-04-25 03:28:52.281895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.057 qpair failed and we were unable to recover it. 00:28:18.057 [2024-04-25 03:28:52.282120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.057 [2024-04-25 03:28:52.282288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.057 [2024-04-25 03:28:52.282314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.057 qpair failed and we were unable to recover it. 
00:28:18.057 [2024-04-25 03:28:52.282490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.057 [2024-04-25 03:28:52.282679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.057 [2024-04-25 03:28:52.282705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.057 qpair failed and we were unable to recover it. 00:28:18.057 [2024-04-25 03:28:52.282883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.057 [2024-04-25 03:28:52.283112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.057 [2024-04-25 03:28:52.283137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.057 qpair failed and we were unable to recover it. 00:28:18.057 [2024-04-25 03:28:52.283310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.057 [2024-04-25 03:28:52.283523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.057 [2024-04-25 03:28:52.283549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.057 qpair failed and we were unable to recover it. 00:28:18.057 [2024-04-25 03:28:52.283727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.057 [2024-04-25 03:28:52.283927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.057 [2024-04-25 03:28:52.283953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.057 qpair failed and we were unable to recover it. 
00:28:18.057 [2024-04-25 03:28:52.284166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.057 [2024-04-25 03:28:52.284362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.057 [2024-04-25 03:28:52.284387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.057 qpair failed and we were unable to recover it. 00:28:18.057 [2024-04-25 03:28:52.284589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.057 [2024-04-25 03:28:52.284794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.057 [2024-04-25 03:28:52.284821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.057 qpair failed and we were unable to recover it. 00:28:18.058 [2024-04-25 03:28:52.285047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.058 [2024-04-25 03:28:52.285230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.058 [2024-04-25 03:28:52.285255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.058 qpair failed and we were unable to recover it. 00:28:18.058 [2024-04-25 03:28:52.285481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.058 [2024-04-25 03:28:52.285684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.058 [2024-04-25 03:28:52.285712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.058 qpair failed and we were unable to recover it. 
00:28:18.058 [2024-04-25 03:28:52.285917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.058 [2024-04-25 03:28:52.286118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.058 [2024-04-25 03:28:52.286144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.058 qpair failed and we were unable to recover it. 00:28:18.058 [2024-04-25 03:28:52.286334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.058 [2024-04-25 03:28:52.286545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.058 [2024-04-25 03:28:52.286569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.058 qpair failed and we were unable to recover it. 00:28:18.058 [2024-04-25 03:28:52.286753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.058 [2024-04-25 03:28:52.286950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.058 [2024-04-25 03:28:52.286975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.058 qpair failed and we were unable to recover it. 00:28:18.058 [2024-04-25 03:28:52.287209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.058 [2024-04-25 03:28:52.287375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.058 [2024-04-25 03:28:52.287414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.058 qpair failed and we were unable to recover it. 
00:28:18.058 [2024-04-25 03:28:52.287617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.058 [2024-04-25 03:28:52.287821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.058 [2024-04-25 03:28:52.287848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.058 qpair failed and we were unable to recover it. 00:28:18.058 [2024-04-25 03:28:52.288043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.058 [2024-04-25 03:28:52.288241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.058 [2024-04-25 03:28:52.288267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.058 qpair failed and we were unable to recover it. 00:28:18.058 [2024-04-25 03:28:52.288441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.058 [2024-04-25 03:28:52.288612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.058 [2024-04-25 03:28:52.288644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.058 qpair failed and we were unable to recover it. 00:28:18.058 [2024-04-25 03:28:52.288841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.058 [2024-04-25 03:28:52.289047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.058 [2024-04-25 03:28:52.289071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.058 qpair failed and we were unable to recover it. 
00:28:18.058 [2024-04-25 03:28:52.289299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.058 [2024-04-25 03:28:52.289523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.058 [2024-04-25 03:28:52.289547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.058 qpair failed and we were unable to recover it. 00:28:18.058 [2024-04-25 03:28:52.289722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.058 [2024-04-25 03:28:52.289937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.058 [2024-04-25 03:28:52.289962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.058 qpair failed and we were unable to recover it. 00:28:18.058 [2024-04-25 03:28:52.290158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.058 [2024-04-25 03:28:52.290323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.058 [2024-04-25 03:28:52.290350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.058 qpair failed and we were unable to recover it. 00:28:18.058 [2024-04-25 03:28:52.290554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.058 [2024-04-25 03:28:52.290745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.058 [2024-04-25 03:28:52.290770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.058 qpair failed and we were unable to recover it. 
00:28:18.058 [2024-04-25 03:28:52.290997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.058 [2024-04-25 03:28:52.291204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.058 [2024-04-25 03:28:52.291228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.058 qpair failed and we were unable to recover it. 00:28:18.058 [2024-04-25 03:28:52.291427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.058 [2024-04-25 03:28:52.291655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.058 [2024-04-25 03:28:52.291700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.058 qpair failed and we were unable to recover it. 00:28:18.058 [2024-04-25 03:28:52.291872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.058 [2024-04-25 03:28:52.292121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.058 [2024-04-25 03:28:52.292146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.058 qpair failed and we were unable to recover it. 00:28:18.058 [2024-04-25 03:28:52.292381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.058 [2024-04-25 03:28:52.292609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.058 [2024-04-25 03:28:52.292643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.058 qpair failed and we were unable to recover it. 
00:28:18.058 [2024-04-25 03:28:52.292849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.058 [2024-04-25 03:28:52.293040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.058 [2024-04-25 03:28:52.293065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.058 qpair failed and we were unable to recover it. 00:28:18.058 [2024-04-25 03:28:52.293259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.058 [2024-04-25 03:28:52.293464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.058 [2024-04-25 03:28:52.293489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.058 qpair failed and we were unable to recover it. 00:28:18.058 [2024-04-25 03:28:52.293655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.058 [2024-04-25 03:28:52.293849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.058 [2024-04-25 03:28:52.293874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.058 qpair failed and we were unable to recover it. 00:28:18.058 [2024-04-25 03:28:52.294095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.058 [2024-04-25 03:28:52.294292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.058 [2024-04-25 03:28:52.294316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.058 qpair failed and we were unable to recover it. 
00:28:18.061 [2024-04-25 03:28:52.333485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.061 [2024-04-25 03:28:52.333703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.061 [2024-04-25 03:28:52.333742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.061 qpair failed and we were unable to recover it. 00:28:18.061 [2024-04-25 03:28:52.333991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.061 [2024-04-25 03:28:52.334191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.061 [2024-04-25 03:28:52.334229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.061 qpair failed and we were unable to recover it. 00:28:18.061 [2024-04-25 03:28:52.334439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.061 [2024-04-25 03:28:52.334683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.062 [2024-04-25 03:28:52.334709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.062 qpair failed and we were unable to recover it. 00:28:18.062 [2024-04-25 03:28:52.334915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.062 [2024-04-25 03:28:52.335115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.062 [2024-04-25 03:28:52.335141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.062 qpair failed and we were unable to recover it. 
00:28:18.062 [2024-04-25 03:28:52.335373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.062 [2024-04-25 03:28:52.335570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.062 [2024-04-25 03:28:52.335595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.062 qpair failed and we were unable to recover it. 00:28:18.062 [2024-04-25 03:28:52.335781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.062 [2024-04-25 03:28:52.335953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.062 [2024-04-25 03:28:52.335978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.062 qpair failed and we were unable to recover it. 00:28:18.062 [2024-04-25 03:28:52.336185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.062 [2024-04-25 03:28:52.336382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.062 [2024-04-25 03:28:52.336409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.062 qpair failed and we were unable to recover it. 00:28:18.062 [2024-04-25 03:28:52.336621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.062 [2024-04-25 03:28:52.336827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.062 [2024-04-25 03:28:52.336852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.062 qpair failed and we were unable to recover it. 
00:28:18.062 [2024-04-25 03:28:52.337033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.062 [2024-04-25 03:28:52.337231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.062 [2024-04-25 03:28:52.337255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.062 qpair failed and we were unable to recover it. 00:28:18.062 [2024-04-25 03:28:52.337448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.062 [2024-04-25 03:28:52.337614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.062 [2024-04-25 03:28:52.337650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.062 qpair failed and we were unable to recover it. 00:28:18.062 [2024-04-25 03:28:52.337848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.062 [2024-04-25 03:28:52.338076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.062 [2024-04-25 03:28:52.338100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.062 qpair failed and we were unable to recover it. 00:28:18.062 [2024-04-25 03:28:52.338327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.062 [2024-04-25 03:28:52.338487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.062 [2024-04-25 03:28:52.338511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.062 qpair failed and we were unable to recover it. 
00:28:18.062 [2024-04-25 03:28:52.338698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.062 [2024-04-25 03:28:52.338906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.062 [2024-04-25 03:28:52.338931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.062 qpair failed and we were unable to recover it. 00:28:18.062 [2024-04-25 03:28:52.339107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.062 [2024-04-25 03:28:52.339274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.062 [2024-04-25 03:28:52.339299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.062 qpair failed and we were unable to recover it. 00:28:18.062 [2024-04-25 03:28:52.339467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.062 [2024-04-25 03:28:52.339667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.062 [2024-04-25 03:28:52.339696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.062 qpair failed and we were unable to recover it. 00:28:18.062 [2024-04-25 03:28:52.339869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.062 [2024-04-25 03:28:52.340076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.062 [2024-04-25 03:28:52.340103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.062 qpair failed and we were unable to recover it. 
00:28:18.062 [2024-04-25 03:28:52.340308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.062 [2024-04-25 03:28:52.340500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.062 [2024-04-25 03:28:52.340525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.062 qpair failed and we were unable to recover it. 00:28:18.062 [2024-04-25 03:28:52.340698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.062 [2024-04-25 03:28:52.340897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.062 [2024-04-25 03:28:52.340922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.062 qpair failed and we were unable to recover it. 00:28:18.062 [2024-04-25 03:28:52.341149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.062 [2024-04-25 03:28:52.341326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.062 [2024-04-25 03:28:52.341351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.062 qpair failed and we were unable to recover it. 00:28:18.062 [2024-04-25 03:28:52.341547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.062 [2024-04-25 03:28:52.341741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.062 [2024-04-25 03:28:52.341766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.062 qpair failed and we were unable to recover it. 
00:28:18.062 [2024-04-25 03:28:52.341966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.062 [2024-04-25 03:28:52.342188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.062 [2024-04-25 03:28:52.342212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.062 qpair failed and we were unable to recover it. 00:28:18.062 [2024-04-25 03:28:52.342437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.062 [2024-04-25 03:28:52.342688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.062 [2024-04-25 03:28:52.342715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.062 qpair failed and we were unable to recover it. 00:28:18.062 [2024-04-25 03:28:52.342881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.062 [2024-04-25 03:28:52.343082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.062 [2024-04-25 03:28:52.343108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.062 qpair failed and we were unable to recover it. 00:28:18.062 [2024-04-25 03:28:52.343331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.062 [2024-04-25 03:28:52.343502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.062 [2024-04-25 03:28:52.343528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.062 qpair failed and we were unable to recover it. 
00:28:18.062 [2024-04-25 03:28:52.343722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.062 [2024-04-25 03:28:52.343923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.062 [2024-04-25 03:28:52.343948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.062 qpair failed and we were unable to recover it. 00:28:18.062 [2024-04-25 03:28:52.344145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.062 [2024-04-25 03:28:52.344366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.062 [2024-04-25 03:28:52.344391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.062 qpair failed and we were unable to recover it. 00:28:18.062 [2024-04-25 03:28:52.344563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.062 [2024-04-25 03:28:52.344773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.062 [2024-04-25 03:28:52.344798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.062 qpair failed and we were unable to recover it. 00:28:18.062 [2024-04-25 03:28:52.344994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.062 [2024-04-25 03:28:52.345187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.062 [2024-04-25 03:28:52.345214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.062 qpair failed and we were unable to recover it. 
00:28:18.062 [2024-04-25 03:28:52.345415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.062 [2024-04-25 03:28:52.345653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.062 [2024-04-25 03:28:52.345679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.062 qpair failed and we were unable to recover it. 00:28:18.062 [2024-04-25 03:28:52.345881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.062 [2024-04-25 03:28:52.346105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.062 [2024-04-25 03:28:52.346131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.062 qpair failed and we were unable to recover it. 00:28:18.062 [2024-04-25 03:28:52.346333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.062 [2024-04-25 03:28:52.346529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.062 [2024-04-25 03:28:52.346556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.062 qpair failed and we were unable to recover it. 00:28:18.062 [2024-04-25 03:28:52.346727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.062 [2024-04-25 03:28:52.346927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.063 [2024-04-25 03:28:52.346952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.063 qpair failed and we were unable to recover it. 
00:28:18.063 [2024-04-25 03:28:52.347151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.063 [2024-04-25 03:28:52.347359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.063 [2024-04-25 03:28:52.347385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.063 qpair failed and we were unable to recover it. 00:28:18.063 [2024-04-25 03:28:52.347583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.063 [2024-04-25 03:28:52.347783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.063 [2024-04-25 03:28:52.347808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.063 qpair failed and we were unable to recover it. 00:28:18.063 [2024-04-25 03:28:52.347986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.063 [2024-04-25 03:28:52.348191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.063 [2024-04-25 03:28:52.348215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.063 qpair failed and we were unable to recover it. 00:28:18.063 [2024-04-25 03:28:52.348415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.063 [2024-04-25 03:28:52.348648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.063 [2024-04-25 03:28:52.348676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.063 qpair failed and we were unable to recover it. 
00:28:18.063 [2024-04-25 03:28:52.348886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.063 [2024-04-25 03:28:52.349085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.063 [2024-04-25 03:28:52.349110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.063 qpair failed and we were unable to recover it. 00:28:18.063 [2024-04-25 03:28:52.349339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.063 [2024-04-25 03:28:52.349536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.063 [2024-04-25 03:28:52.349561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.063 qpair failed and we were unable to recover it. 00:28:18.063 [2024-04-25 03:28:52.349759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.063 [2024-04-25 03:28:52.349947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.063 [2024-04-25 03:28:52.349972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.063 qpair failed and we were unable to recover it. 00:28:18.063 [2024-04-25 03:28:52.350165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.063 [2024-04-25 03:28:52.350389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.063 [2024-04-25 03:28:52.350414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.063 qpair failed and we were unable to recover it. 
00:28:18.063 [2024-04-25 03:28:52.350617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.063 [2024-04-25 03:28:52.350830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.063 [2024-04-25 03:28:52.350855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.063 qpair failed and we were unable to recover it. 00:28:18.063 [2024-04-25 03:28:52.351068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.063 [2024-04-25 03:28:52.351236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.063 [2024-04-25 03:28:52.351262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.063 qpair failed and we were unable to recover it. 00:28:18.063 [2024-04-25 03:28:52.351508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.063 [2024-04-25 03:28:52.351756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.063 [2024-04-25 03:28:52.351782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.063 qpair failed and we were unable to recover it. 00:28:18.063 [2024-04-25 03:28:52.351958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.063 [2024-04-25 03:28:52.352182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.063 [2024-04-25 03:28:52.352207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.063 qpair failed and we were unable to recover it. 
00:28:18.063 [2024-04-25 03:28:52.352378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.063 [2024-04-25 03:28:52.352571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.063 [2024-04-25 03:28:52.352598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.063 qpair failed and we were unable to recover it. 00:28:18.063 [2024-04-25 03:28:52.352778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.063 [2024-04-25 03:28:52.352971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.063 [2024-04-25 03:28:52.352996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.063 qpair failed and we were unable to recover it. 00:28:18.063 [2024-04-25 03:28:52.353221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.063 [2024-04-25 03:28:52.353382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.063 [2024-04-25 03:28:52.353406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.063 qpair failed and we were unable to recover it. 00:28:18.063 [2024-04-25 03:28:52.353575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.063 [2024-04-25 03:28:52.353766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.063 [2024-04-25 03:28:52.353792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.063 qpair failed and we were unable to recover it. 
00:28:18.063 [2024-04-25 03:28:52.354018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.063 [2024-04-25 03:28:52.354214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.063 [2024-04-25 03:28:52.354239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.063 qpair failed and we were unable to recover it. 00:28:18.063 [2024-04-25 03:28:52.354466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.063 [2024-04-25 03:28:52.354622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.063 [2024-04-25 03:28:52.354655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.063 qpair failed and we were unable to recover it. 00:28:18.063 [2024-04-25 03:28:52.354854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.063 [2024-04-25 03:28:52.355023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.063 [2024-04-25 03:28:52.355049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.063 qpair failed and we were unable to recover it. 00:28:18.063 [2024-04-25 03:28:52.355249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.063 [2024-04-25 03:28:52.355424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.063 [2024-04-25 03:28:52.355448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.063 qpair failed and we were unable to recover it. 
00:28:18.063 [2024-04-25 03:28:52.355652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.063 [2024-04-25 03:28:52.355880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.063 [2024-04-25 03:28:52.355905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.063 qpair failed and we were unable to recover it. 00:28:18.063 [2024-04-25 03:28:52.356112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.063 [2024-04-25 03:28:52.356309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.063 [2024-04-25 03:28:52.356333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.063 qpair failed and we were unable to recover it. 00:28:18.063 [2024-04-25 03:28:52.356555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.063 [2024-04-25 03:28:52.356718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.063 [2024-04-25 03:28:52.356744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.063 qpair failed and we were unable to recover it. 00:28:18.063 [2024-04-25 03:28:52.356933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.063 [2024-04-25 03:28:52.357092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.063 [2024-04-25 03:28:52.357119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.063 qpair failed and we were unable to recover it. 
00:28:18.063 [2024-04-25 03:28:52.357299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.063 [2024-04-25 03:28:52.357516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.063 [2024-04-25 03:28:52.357541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.063 qpair failed and we were unable to recover it. 00:28:18.063 [2024-04-25 03:28:52.357740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.063 [2024-04-25 03:28:52.357913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.063 [2024-04-25 03:28:52.357937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.063 qpair failed and we were unable to recover it. 00:28:18.063 [2024-04-25 03:28:52.358132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.063 [2024-04-25 03:28:52.358359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.063 [2024-04-25 03:28:52.358384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.063 qpair failed and we were unable to recover it. 00:28:18.063 [2024-04-25 03:28:52.358553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.063 [2024-04-25 03:28:52.358719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.063 [2024-04-25 03:28:52.358746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.063 qpair failed and we were unable to recover it. 
00:28:18.063 [2024-04-25 03:28:52.358945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.064 [2024-04-25 03:28:52.359171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.064 [2024-04-25 03:28:52.359196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.064 qpair failed and we were unable to recover it. 00:28:18.064 [2024-04-25 03:28:52.359364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.064 [2024-04-25 03:28:52.359618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.064 [2024-04-25 03:28:52.359655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.064 qpair failed and we were unable to recover it. 00:28:18.064 [2024-04-25 03:28:52.359822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.064 [2024-04-25 03:28:52.360014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.064 [2024-04-25 03:28:52.360039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.064 qpair failed and we were unable to recover it. 00:28:18.064 [2024-04-25 03:28:52.360241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.064 [2024-04-25 03:28:52.360441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.064 [2024-04-25 03:28:52.360466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.064 qpair failed and we were unable to recover it. 
00:28:18.064 [2024-04-25 03:28:52.360642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.064 [2024-04-25 03:28:52.360863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.064 [2024-04-25 03:28:52.360889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.064 qpair failed and we were unable to recover it. 00:28:18.064 [2024-04-25 03:28:52.364092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.064 [2024-04-25 03:28:52.364410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.064 [2024-04-25 03:28:52.364439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.064 qpair failed and we were unable to recover it. 00:28:18.064 [2024-04-25 03:28:52.364704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.064 [2024-04-25 03:28:52.364906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.064 [2024-04-25 03:28:52.364931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.064 qpair failed and we were unable to recover it. 00:28:18.064 [2024-04-25 03:28:52.365130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.064 [2024-04-25 03:28:52.365352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.064 [2024-04-25 03:28:52.365377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.064 qpair failed and we were unable to recover it. 
00:28:18.064 [2024-04-25 03:28:52.365686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.064 [2024-04-25 03:28:52.365970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.064 [2024-04-25 03:28:52.365995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.064 qpair failed and we were unable to recover it. 00:28:18.064 [2024-04-25 03:28:52.366164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.064 [2024-04-25 03:28:52.366388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.064 [2024-04-25 03:28:52.366413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.064 qpair failed and we were unable to recover it. 00:28:18.064 [2024-04-25 03:28:52.366642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.064 [2024-04-25 03:28:52.366841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.064 [2024-04-25 03:28:52.366866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.064 qpair failed and we were unable to recover it. 00:28:18.064 [2024-04-25 03:28:52.367116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.064 [2024-04-25 03:28:52.367393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.064 [2024-04-25 03:28:52.367418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.064 qpair failed and we were unable to recover it. 
00:28:18.064 [2024-04-25 03:28:52.367688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.064 [2024-04-25 03:28:52.367865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.064 [2024-04-25 03:28:52.367890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.064 qpair failed and we were unable to recover it. 00:28:18.064 [2024-04-25 03:28:52.368095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.064 [2024-04-25 03:28:52.368277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.064 [2024-04-25 03:28:52.368302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.064 qpair failed and we were unable to recover it. 00:28:18.064 [2024-04-25 03:28:52.368535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.064 [2024-04-25 03:28:52.368732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.064 [2024-04-25 03:28:52.368758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.064 qpair failed and we were unable to recover it. 00:28:18.064 [2024-04-25 03:28:52.368925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.064 [2024-04-25 03:28:52.369118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.064 [2024-04-25 03:28:52.369144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.064 qpair failed and we were unable to recover it. 
00:28:18.064 [2024-04-25 03:28:52.369407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.064 [2024-04-25 03:28:52.369641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.064 [2024-04-25 03:28:52.369685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.064 qpair failed and we were unable to recover it. 00:28:18.064 [2024-04-25 03:28:52.371422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.064 [2024-04-25 03:28:52.371650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.064 [2024-04-25 03:28:52.371678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.064 qpair failed and we were unable to recover it. 00:28:18.064 [2024-04-25 03:28:52.371902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.064 [2024-04-25 03:28:52.372105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.064 [2024-04-25 03:28:52.372130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.064 qpair failed and we were unable to recover it. 00:28:18.064 [2024-04-25 03:28:52.372340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.064 [2024-04-25 03:28:52.372559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.064 [2024-04-25 03:28:52.372584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.064 qpair failed and we were unable to recover it. 
00:28:18.064 [2024-04-25 03:28:52.372804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.064 [2024-04-25 03:28:52.372978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.064 [2024-04-25 03:28:52.373003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.064 qpair failed and we were unable to recover it. 00:28:18.064 [2024-04-25 03:28:52.373228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.064 [2024-04-25 03:28:52.373404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.064 [2024-04-25 03:28:52.373430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.064 qpair failed and we were unable to recover it. 00:28:18.064 [2024-04-25 03:28:52.373641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.064 [2024-04-25 03:28:52.373866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.064 [2024-04-25 03:28:52.373896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.064 qpair failed and we were unable to recover it. 00:28:18.064 [2024-04-25 03:28:52.374117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.064 [2024-04-25 03:28:52.374306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.064 [2024-04-25 03:28:52.374333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.064 qpair failed and we were unable to recover it. 
00:28:18.064 [2024-04-25 03:28:52.374574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.064 [2024-04-25 03:28:52.374839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.064 [2024-04-25 03:28:52.374866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.065 qpair failed and we were unable to recover it. 00:28:18.065 [2024-04-25 03:28:52.375075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.065 [2024-04-25 03:28:52.375272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.065 [2024-04-25 03:28:52.375297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.065 qpair failed and we were unable to recover it. 00:28:18.065 [2024-04-25 03:28:52.375518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.065 [2024-04-25 03:28:52.375741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.065 [2024-04-25 03:28:52.375770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.065 qpair failed and we were unable to recover it. 00:28:18.065 [2024-04-25 03:28:52.375976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.065 [2024-04-25 03:28:52.376183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.065 [2024-04-25 03:28:52.376209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.065 qpair failed and we were unable to recover it. 
00:28:18.065 [2024-04-25 03:28:52.376398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.065 [2024-04-25 03:28:52.376694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.065 [2024-04-25 03:28:52.376719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.065 qpair failed and we were unable to recover it. 00:28:18.065 [2024-04-25 03:28:52.376914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.065 [2024-04-25 03:28:52.377092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.065 [2024-04-25 03:28:52.377119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.065 qpair failed and we were unable to recover it. 00:28:18.065 [2024-04-25 03:28:52.377348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.065 [2024-04-25 03:28:52.377591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.065 [2024-04-25 03:28:52.377620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.065 qpair failed and we were unable to recover it. 00:28:18.065 [2024-04-25 03:28:52.377847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.065 [2024-04-25 03:28:52.378050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.065 [2024-04-25 03:28:52.378074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.065 qpair failed and we were unable to recover it. 
00:28:18.065 [2024-04-25 03:28:52.378249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.065 [2024-04-25 03:28:52.378453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.065 [2024-04-25 03:28:52.378482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.065 qpair failed and we were unable to recover it. 00:28:18.065 [2024-04-25 03:28:52.378705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.065 [2024-04-25 03:28:52.378915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.065 [2024-04-25 03:28:52.378948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.065 qpair failed and we were unable to recover it. 00:28:18.065 [2024-04-25 03:28:52.379168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.065 [2024-04-25 03:28:52.379351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.065 [2024-04-25 03:28:52.379379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.065 qpair failed and we were unable to recover it. 00:28:18.065 [2024-04-25 03:28:52.379608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.065 [2024-04-25 03:28:52.379839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.065 [2024-04-25 03:28:52.379864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.065 qpair failed and we were unable to recover it. 
00:28:18.065 [2024-04-25 03:28:52.380039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.065 [2024-04-25 03:28:52.380259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.065 [2024-04-25 03:28:52.380286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.065 qpair failed and we were unable to recover it. 00:28:18.065 [2024-04-25 03:28:52.380505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.065 [2024-04-25 03:28:52.380741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.065 [2024-04-25 03:28:52.380767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.065 qpair failed and we were unable to recover it. 00:28:18.065 [2024-04-25 03:28:52.380984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.065 [2024-04-25 03:28:52.381214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.065 [2024-04-25 03:28:52.381239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.065 qpair failed and we were unable to recover it. 00:28:18.065 [2024-04-25 03:28:52.381460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.065 [2024-04-25 03:28:52.381688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.065 [2024-04-25 03:28:52.381718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.065 qpair failed and we were unable to recover it. 
00:28:18.065 [2024-04-25 03:28:52.381939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.065 [2024-04-25 03:28:52.382151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.065 [2024-04-25 03:28:52.382176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.065 qpair failed and we were unable to recover it. 00:28:18.065 [2024-04-25 03:28:52.382373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.065 [2024-04-25 03:28:52.382568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.065 [2024-04-25 03:28:52.382596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.065 qpair failed and we were unable to recover it. 00:28:18.065 [2024-04-25 03:28:52.382856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.065 [2024-04-25 03:28:52.383029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.065 [2024-04-25 03:28:52.383059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.065 qpair failed and we were unable to recover it. 00:28:18.065 [2024-04-25 03:28:52.383284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.065 [2024-04-25 03:28:52.383531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.065 [2024-04-25 03:28:52.383576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.065 qpair failed and we were unable to recover it. 
00:28:18.065 [2024-04-25 03:28:52.383771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.065 [2024-04-25 03:28:52.383985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.065 [2024-04-25 03:28:52.384012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.065 qpair failed and we were unable to recover it. 00:28:18.065 [2024-04-25 03:28:52.384244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.065 [2024-04-25 03:28:52.384427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.065 [2024-04-25 03:28:52.384470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.065 qpair failed and we were unable to recover it. 00:28:18.065 [2024-04-25 03:28:52.384656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.065 [2024-04-25 03:28:52.384841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.065 [2024-04-25 03:28:52.384867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.065 qpair failed and we were unable to recover it. 00:28:18.065 [2024-04-25 03:28:52.385044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.065 [2024-04-25 03:28:52.385274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.065 [2024-04-25 03:28:52.385302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.065 qpair failed and we were unable to recover it. 
00:28:18.065 [2024-04-25 03:28:52.385557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.065 [2024-04-25 03:28:52.385786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.065 [2024-04-25 03:28:52.385812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.065 qpair failed and we were unable to recover it. 00:28:18.065 [2024-04-25 03:28:52.386027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.065 [2024-04-25 03:28:52.386197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.065 [2024-04-25 03:28:52.386221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.065 qpair failed and we were unable to recover it. 00:28:18.065 [2024-04-25 03:28:52.386439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.065 [2024-04-25 03:28:52.386653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.065 [2024-04-25 03:28:52.386679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.065 qpair failed and we were unable to recover it. 00:28:18.065 [2024-04-25 03:28:52.386916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.065 [2024-04-25 03:28:52.387136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.065 [2024-04-25 03:28:52.387177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.065 qpair failed and we were unable to recover it. 
00:28:18.065 [2024-04-25 03:28:52.387374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.065 [2024-04-25 03:28:52.387561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.065 [2024-04-25 03:28:52.387594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.065 qpair failed and we were unable to recover it. 00:28:18.065 [2024-04-25 03:28:52.387848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.065 [2024-04-25 03:28:52.388046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.066 [2024-04-25 03:28:52.388075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.066 qpair failed and we were unable to recover it. 00:28:18.066 [2024-04-25 03:28:52.388274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.066 [2024-04-25 03:28:52.388495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.066 [2024-04-25 03:28:52.388539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.066 qpair failed and we were unable to recover it. 00:28:18.066 [2024-04-25 03:28:52.388727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.066 [2024-04-25 03:28:52.388908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.066 [2024-04-25 03:28:52.388934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.066 qpair failed and we were unable to recover it. 
00:28:18.066 [2024-04-25 03:28:52.389655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.066 [2024-04-25 03:28:52.389830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.066 [2024-04-25 03:28:52.389856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.066 qpair failed and we were unable to recover it. 00:28:18.066 [2024-04-25 03:28:52.390126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.066 [2024-04-25 03:28:52.390336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.066 [2024-04-25 03:28:52.390362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.066 qpair failed and we were unable to recover it. 00:28:18.066 [2024-04-25 03:28:52.390634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.066 [2024-04-25 03:28:52.390871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.066 [2024-04-25 03:28:52.390896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.066 qpair failed and we were unable to recover it. 00:28:18.066 [2024-04-25 03:28:52.391115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.066 [2024-04-25 03:28:52.391289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.066 [2024-04-25 03:28:52.391314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.066 qpair failed and we were unable to recover it. 
00:28:18.066 [2024-04-25 03:28:52.391555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.066 [2024-04-25 03:28:52.391758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.066 [2024-04-25 03:28:52.391783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.066 qpair failed and we were unable to recover it. 00:28:18.066 [2024-04-25 03:28:52.391963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.066 [2024-04-25 03:28:52.392152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.066 [2024-04-25 03:28:52.392176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.066 qpair failed and we were unable to recover it. 00:28:18.066 [2024-04-25 03:28:52.392378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.066 [2024-04-25 03:28:52.392616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.066 [2024-04-25 03:28:52.392665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.066 qpair failed and we were unable to recover it. 00:28:18.066 [2024-04-25 03:28:52.392851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.066 [2024-04-25 03:28:52.393028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.066 [2024-04-25 03:28:52.393055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.066 qpair failed and we were unable to recover it. 
00:28:18.066 [2024-04-25 03:28:52.393245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.066 [2024-04-25 03:28:52.393441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.066 [2024-04-25 03:28:52.393467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.066 qpair failed and we were unable to recover it. 00:28:18.066 [2024-04-25 03:28:52.393651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.066 [2024-04-25 03:28:52.393849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.066 [2024-04-25 03:28:52.393874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.066 qpair failed and we were unable to recover it. 00:28:18.066 [2024-04-25 03:28:52.394105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.066 [2024-04-25 03:28:52.394381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.066 [2024-04-25 03:28:52.394406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.066 qpair failed and we were unable to recover it. 00:28:18.066 [2024-04-25 03:28:52.394574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.066 [2024-04-25 03:28:52.394804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.066 [2024-04-25 03:28:52.394832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.066 qpair failed and we were unable to recover it. 
00:28:18.066 [2024-04-25 03:28:52.395059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.066 [2024-04-25 03:28:52.395258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.066 [2024-04-25 03:28:52.395282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.066 qpair failed and we were unable to recover it. 00:28:18.066 [2024-04-25 03:28:52.395515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.066 [2024-04-25 03:28:52.395733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.066 [2024-04-25 03:28:52.395758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.066 qpair failed and we were unable to recover it. 00:28:18.066 [2024-04-25 03:28:52.395928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.066 [2024-04-25 03:28:52.396099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.066 [2024-04-25 03:28:52.396126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.066 qpair failed and we were unable to recover it. 00:28:18.066 [2024-04-25 03:28:52.396344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.066 [2024-04-25 03:28:52.396564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.066 [2024-04-25 03:28:52.396589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420 00:28:18.066 qpair failed and we were unable to recover it. 
00:28:18.066-00:28:18.068 [2024-04-25 03:28:52.396773 to 03:28:52.425196] posix.c:1037:posix_sock_create / nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: connect() failed, errno = 111; sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. [identical retry entries repeated; elided]
00:28:18.068 [2024-04-25 03:28:52.425387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.068 [2024-04-25 03:28:52.425579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.068 [2024-04-25 03:28:52.425604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a80000b90 with addr=10.0.0.2, port=4420
00:28:18.068 qpair failed and we were unable to recover it.
00:28:18.068 [2024-04-25 03:28:52.425817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.068 [2024-04-25 03:28:52.426044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.068 [2024-04-25 03:28:52.426089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a78000b90 with addr=10.0.0.2, port=4420
00:28:18.068 qpair failed and we were unable to recover it.
00:28:18.068 [2024-04-25 03:28:52.426345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.068 [2024-04-25 03:28:52.426523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.068 [2024-04-25 03:28:52.426548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a78000b90 with addr=10.0.0.2, port=4420
00:28:18.068 qpair failed and we were unable to recover it.
00:28:18.068 [2024-04-25 03:28:52.426734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.068 [2024-04-25 03:28:52.426905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.069 [2024-04-25 03:28:52.426932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a78000b90 with addr=10.0.0.2, port=4420
00:28:18.069 qpair failed and we were unable to recover it.
00:28:18.069 [2024-04-25 03:28:52.427139 to 03:28:52.435563] posix.c:1037:posix_sock_create / nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: connect() failed, errno = 111; sock connection error of tqpair=0x7f2a78000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. [identical retry entries repeated; elided]
00:28:18.069 [2024-04-25 03:28:52.435764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.069 [2024-04-25 03:28:52.435983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.069 [2024-04-25 03:28:52.436007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a78000b90 with addr=10.0.0.2, port=4420 00:28:18.069 qpair failed and we were unable to recover it. 00:28:18.069 [2024-04-25 03:28:52.436186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.069 [2024-04-25 03:28:52.436378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.069 [2024-04-25 03:28:52.436405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a78000b90 with addr=10.0.0.2, port=4420 00:28:18.069 qpair failed and we were unable to recover it. 00:28:18.069 [2024-04-25 03:28:52.436579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.069 [2024-04-25 03:28:52.436786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.069 [2024-04-25 03:28:52.436811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a78000b90 with addr=10.0.0.2, port=4420 00:28:18.069 qpair failed and we were unable to recover it. 00:28:18.069 [2024-04-25 03:28:52.437010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.069 [2024-04-25 03:28:52.437204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.069 [2024-04-25 03:28:52.437230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a78000b90 with addr=10.0.0.2, port=4420 00:28:18.069 qpair failed and we were unable to recover it. 
00:28:18.069 [2024-04-25 03:28:52.437429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.069 [2024-04-25 03:28:52.437673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.069 [2024-04-25 03:28:52.437699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a78000b90 with addr=10.0.0.2, port=4420 00:28:18.069 qpair failed and we were unable to recover it. 00:28:18.069 [2024-04-25 03:28:52.437927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.069 [2024-04-25 03:28:52.438140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.069 [2024-04-25 03:28:52.438164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a78000b90 with addr=10.0.0.2, port=4420 00:28:18.069 qpair failed and we were unable to recover it. 00:28:18.069 [2024-04-25 03:28:52.438332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.069 [2024-04-25 03:28:52.438553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.069 [2024-04-25 03:28:52.438577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a78000b90 with addr=10.0.0.2, port=4420 00:28:18.069 qpair failed and we were unable to recover it. 00:28:18.069 [2024-04-25 03:28:52.438773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.069 [2024-04-25 03:28:52.438952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.069 [2024-04-25 03:28:52.438979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a78000b90 with addr=10.0.0.2, port=4420 00:28:18.069 qpair failed and we were unable to recover it. 
00:28:18.069 [2024-04-25 03:28:52.439182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.070 [2024-04-25 03:28:52.439356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.070 [2024-04-25 03:28:52.439381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a78000b90 with addr=10.0.0.2, port=4420 00:28:18.070 qpair failed and we were unable to recover it. 00:28:18.070 [2024-04-25 03:28:52.439557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.070 [2024-04-25 03:28:52.439738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.070 [2024-04-25 03:28:52.439767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a78000b90 with addr=10.0.0.2, port=4420 00:28:18.070 qpair failed and we were unable to recover it. 00:28:18.070 [2024-04-25 03:28:52.439968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.070 [2024-04-25 03:28:52.440141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.070 [2024-04-25 03:28:52.440166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a78000b90 with addr=10.0.0.2, port=4420 00:28:18.070 qpair failed and we were unable to recover it. 00:28:18.070 [2024-04-25 03:28:52.440340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.070 [2024-04-25 03:28:52.440559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.070 [2024-04-25 03:28:52.440584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a78000b90 with addr=10.0.0.2, port=4420 00:28:18.070 qpair failed and we were unable to recover it. 
00:28:18.070 [2024-04-25 03:28:52.440783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.070 [2024-04-25 03:28:52.440982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.070 [2024-04-25 03:28:52.441008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a78000b90 with addr=10.0.0.2, port=4420 00:28:18.070 qpair failed and we were unable to recover it. 00:28:18.070 [2024-04-25 03:28:52.441211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.070 [2024-04-25 03:28:52.441413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.070 [2024-04-25 03:28:52.441439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a78000b90 with addr=10.0.0.2, port=4420 00:28:18.070 qpair failed and we were unable to recover it. 00:28:18.070 [2024-04-25 03:28:52.441614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.070 [2024-04-25 03:28:52.441804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.070 [2024-04-25 03:28:52.441830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a78000b90 with addr=10.0.0.2, port=4420 00:28:18.070 qpair failed and we were unable to recover it. 00:28:18.070 [2024-04-25 03:28:52.442074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.070 [2024-04-25 03:28:52.442262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.070 [2024-04-25 03:28:52.442287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a78000b90 with addr=10.0.0.2, port=4420 00:28:18.070 qpair failed and we were unable to recover it. 
00:28:18.070 [2024-04-25 03:28:52.442491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.070 [2024-04-25 03:28:52.442717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.070 [2024-04-25 03:28:52.442742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a78000b90 with addr=10.0.0.2, port=4420 00:28:18.070 qpair failed and we were unable to recover it. 00:28:18.070 [2024-04-25 03:28:52.442937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.070 [2024-04-25 03:28:52.443137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.070 [2024-04-25 03:28:52.443168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a78000b90 with addr=10.0.0.2, port=4420 00:28:18.070 qpair failed and we were unable to recover it. 00:28:18.070 [2024-04-25 03:28:52.443386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.070 [2024-04-25 03:28:52.443579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.070 [2024-04-25 03:28:52.443605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a78000b90 with addr=10.0.0.2, port=4420 00:28:18.070 qpair failed and we were unable to recover it. 00:28:18.070 [2024-04-25 03:28:52.443831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.070 [2024-04-25 03:28:52.444000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.070 [2024-04-25 03:28:52.444025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a78000b90 with addr=10.0.0.2, port=4420 00:28:18.070 qpair failed and we were unable to recover it. 
00:28:18.070 [2024-04-25 03:28:52.444196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.070 [2024-04-25 03:28:52.444417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.070 [2024-04-25 03:28:52.444442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a78000b90 with addr=10.0.0.2, port=4420 00:28:18.070 qpair failed and we were unable to recover it. 00:28:18.070 [2024-04-25 03:28:52.444612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.070 [2024-04-25 03:28:52.444799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.070 [2024-04-25 03:28:52.444825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a78000b90 with addr=10.0.0.2, port=4420 00:28:18.070 qpair failed and we were unable to recover it. 00:28:18.070 [2024-04-25 03:28:52.445050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.070 [2024-04-25 03:28:52.445246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.070 [2024-04-25 03:28:52.445272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a78000b90 with addr=10.0.0.2, port=4420 00:28:18.070 qpair failed and we were unable to recover it. 00:28:18.070 [2024-04-25 03:28:52.445468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.070 [2024-04-25 03:28:52.445646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.070 [2024-04-25 03:28:52.445671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a78000b90 with addr=10.0.0.2, port=4420 00:28:18.070 qpair failed and we were unable to recover it. 
00:28:18.070 [2024-04-25 03:28:52.445862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.070 [2024-04-25 03:28:52.446072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.070 [2024-04-25 03:28:52.446097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a78000b90 with addr=10.0.0.2, port=4420 00:28:18.070 qpair failed and we were unable to recover it. 00:28:18.070 [2024-04-25 03:28:52.446321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.070 [2024-04-25 03:28:52.446499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.070 [2024-04-25 03:28:52.446523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a78000b90 with addr=10.0.0.2, port=4420 00:28:18.070 qpair failed and we were unable to recover it. 00:28:18.070 [2024-04-25 03:28:52.446723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.070 [2024-04-25 03:28:52.446900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.070 [2024-04-25 03:28:52.446927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a78000b90 with addr=10.0.0.2, port=4420 00:28:18.070 qpair failed and we were unable to recover it. 00:28:18.070 [2024-04-25 03:28:52.447123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.070 [2024-04-25 03:28:52.447320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.070 [2024-04-25 03:28:52.447349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a78000b90 with addr=10.0.0.2, port=4420 00:28:18.070 qpair failed and we were unable to recover it. 
00:28:18.070 [2024-04-25 03:28:52.447548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.070 [2024-04-25 03:28:52.447751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.070 [2024-04-25 03:28:52.447776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a78000b90 with addr=10.0.0.2, port=4420 00:28:18.070 qpair failed and we were unable to recover it. 00:28:18.070 [2024-04-25 03:28:52.447973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.070 [2024-04-25 03:28:52.448199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.070 [2024-04-25 03:28:52.448223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a78000b90 with addr=10.0.0.2, port=4420 00:28:18.070 qpair failed and we were unable to recover it. 00:28:18.070 [2024-04-25 03:28:52.448426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.070 [2024-04-25 03:28:52.448622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.070 [2024-04-25 03:28:52.448652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a78000b90 with addr=10.0.0.2, port=4420 00:28:18.070 qpair failed and we were unable to recover it. 00:28:18.070 [2024-04-25 03:28:52.448846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.070 [2024-04-25 03:28:52.449083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.070 [2024-04-25 03:28:52.449108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a78000b90 with addr=10.0.0.2, port=4420 00:28:18.070 qpair failed and we were unable to recover it. 
00:28:18.070 [2024-04-25 03:28:52.449326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.070 [2024-04-25 03:28:52.449563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.070 [2024-04-25 03:28:52.449588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a78000b90 with addr=10.0.0.2, port=4420 00:28:18.070 qpair failed and we were unable to recover it. 00:28:18.070 [2024-04-25 03:28:52.449771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.070 [2024-04-25 03:28:52.449939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.070 [2024-04-25 03:28:52.449964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a78000b90 with addr=10.0.0.2, port=4420 00:28:18.070 qpair failed and we were unable to recover it. 00:28:18.070 [2024-04-25 03:28:52.450184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.070 [2024-04-25 03:28:52.450349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.070 [2024-04-25 03:28:52.450376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a78000b90 with addr=10.0.0.2, port=4420 00:28:18.070 qpair failed and we were unable to recover it. 00:28:18.070 [2024-04-25 03:28:52.450542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.070 [2024-04-25 03:28:52.450715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.070 [2024-04-25 03:28:52.450741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a78000b90 with addr=10.0.0.2, port=4420 00:28:18.070 qpair failed and we were unable to recover it. 
00:28:18.070 [2024-04-25 03:28:52.450964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.070 [2024-04-25 03:28:52.451166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.070 [2024-04-25 03:28:52.451191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a78000b90 with addr=10.0.0.2, port=4420 00:28:18.070 qpair failed and we were unable to recover it. 00:28:18.070 [2024-04-25 03:28:52.451367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.071 [2024-04-25 03:28:52.451567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.071 [2024-04-25 03:28:52.451597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a78000b90 with addr=10.0.0.2, port=4420 00:28:18.071 qpair failed and we were unable to recover it. 00:28:18.071 [2024-04-25 03:28:52.451824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.071 [2024-04-25 03:28:52.452035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.071 [2024-04-25 03:28:52.452061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a78000b90 with addr=10.0.0.2, port=4420 00:28:18.071 qpair failed and we were unable to recover it. 00:28:18.071 [2024-04-25 03:28:52.452258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.071 [2024-04-25 03:28:52.452495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.071 [2024-04-25 03:28:52.452521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a78000b90 with addr=10.0.0.2, port=4420 00:28:18.071 qpair failed and we were unable to recover it. 
00:28:18.071 [2024-04-25 03:28:52.452739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.071 [2024-04-25 03:28:52.452940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.071 [2024-04-25 03:28:52.452965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a78000b90 with addr=10.0.0.2, port=4420 00:28:18.071 qpair failed and we were unable to recover it. 00:28:18.071 [2024-04-25 03:28:52.453141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.071 [2024-04-25 03:28:52.453361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.071 [2024-04-25 03:28:52.453385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a78000b90 with addr=10.0.0.2, port=4420 00:28:18.071 qpair failed and we were unable to recover it. 00:28:18.071 [2024-04-25 03:28:52.453587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.071 [2024-04-25 03:28:52.453790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.071 [2024-04-25 03:28:52.453816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a78000b90 with addr=10.0.0.2, port=4420 00:28:18.071 qpair failed and we were unable to recover it. 00:28:18.071 [2024-04-25 03:28:52.454033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.071 [2024-04-25 03:28:52.454257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.071 [2024-04-25 03:28:52.454282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a78000b90 with addr=10.0.0.2, port=4420 00:28:18.071 qpair failed and we were unable to recover it. 
00:28:18.071 [2024-04-25 03:28:52.454501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.071 [2024-04-25 03:28:52.454695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.071 [2024-04-25 03:28:52.454721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a78000b90 with addr=10.0.0.2, port=4420 00:28:18.071 qpair failed and we were unable to recover it. 00:28:18.071 [2024-04-25 03:28:52.454929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.071 [2024-04-25 03:28:52.455120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.071 [2024-04-25 03:28:52.455144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a78000b90 with addr=10.0.0.2, port=4420 00:28:18.071 qpair failed and we were unable to recover it. 00:28:18.071 [2024-04-25 03:28:52.455336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.071 [2024-04-25 03:28:52.455533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.071 [2024-04-25 03:28:52.455558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a78000b90 with addr=10.0.0.2, port=4420 00:28:18.071 qpair failed and we were unable to recover it. 00:28:18.071 [2024-04-25 03:28:52.455737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.071 [2024-04-25 03:28:52.455940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.071 [2024-04-25 03:28:52.455974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a78000b90 with addr=10.0.0.2, port=4420 00:28:18.071 qpair failed and we were unable to recover it. 
00:28:18.071 [2024-04-25 03:28:52.456181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.071 [2024-04-25 03:28:52.456418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.071 [2024-04-25 03:28:52.456443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a78000b90 with addr=10.0.0.2, port=4420 00:28:18.071 qpair failed and we were unable to recover it. 00:28:18.071 [2024-04-25 03:28:52.456643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.071 [2024-04-25 03:28:52.456815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.071 [2024-04-25 03:28:52.456840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a78000b90 with addr=10.0.0.2, port=4420 00:28:18.071 qpair failed and we were unable to recover it. 00:28:18.071 [2024-04-25 03:28:52.457051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.071 [2024-04-25 03:28:52.457249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.071 [2024-04-25 03:28:52.457275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a78000b90 with addr=10.0.0.2, port=4420 00:28:18.071 qpair failed and we were unable to recover it. 00:28:18.071 [2024-04-25 03:28:52.457512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.071 [2024-04-25 03:28:52.457713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.071 [2024-04-25 03:28:52.457739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a78000b90 with addr=10.0.0.2, port=4420 00:28:18.071 qpair failed and we were unable to recover it. 
00:28:18.071 [2024-04-25 03:28:52.457909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.071 [2024-04-25 03:28:52.458102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.071 [2024-04-25 03:28:52.458127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a78000b90 with addr=10.0.0.2, port=4420 00:28:18.071 qpair failed and we were unable to recover it. 00:28:18.071 [2024-04-25 03:28:52.458329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.071 [2024-04-25 03:28:52.458520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.071 [2024-04-25 03:28:52.458545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a78000b90 with addr=10.0.0.2, port=4420 00:28:18.071 qpair failed and we were unable to recover it. 00:28:18.071 [2024-04-25 03:28:52.458748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.071 [2024-04-25 03:28:52.458954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.071 [2024-04-25 03:28:52.458979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a78000b90 with addr=10.0.0.2, port=4420 00:28:18.071 qpair failed and we were unable to recover it. 00:28:18.071 [2024-04-25 03:28:52.459196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.071 [2024-04-25 03:28:52.459368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.071 [2024-04-25 03:28:52.459393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a78000b90 with addr=10.0.0.2, port=4420 00:28:18.071 qpair failed and we were unable to recover it. 
00:28:18.071 [2024-04-25 03:28:52.459591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.071 [2024-04-25 03:28:52.459792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.071 [2024-04-25 03:28:52.459817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a78000b90 with addr=10.0.0.2, port=4420 00:28:18.071 qpair failed and we were unable to recover it. 00:28:18.071 [2024-04-25 03:28:52.460014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.071 [2024-04-25 03:28:52.460206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.071 [2024-04-25 03:28:52.460231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a78000b90 with addr=10.0.0.2, port=4420 00:28:18.071 qpair failed and we were unable to recover it. 00:28:18.071 [2024-04-25 03:28:52.460435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.071 [2024-04-25 03:28:52.460608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.071 [2024-04-25 03:28:52.460648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a78000b90 with addr=10.0.0.2, port=4420 00:28:18.071 qpair failed and we were unable to recover it. 00:28:18.071 [2024-04-25 03:28:52.460888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.071 [2024-04-25 03:28:52.461056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.071 [2024-04-25 03:28:52.461081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2a78000b90 with addr=10.0.0.2, port=4420 00:28:18.071 qpair failed and we were unable to recover it. 
00:28:18.074 [2024-04-25 03:28:52.490799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.074 [2024-04-25 03:28:52.490993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.074 [2024-04-25 03:28:52.491021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.074 qpair failed and we were unable to recover it.
00:28:18.074 [2024-04-25 03:28:52.491223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.074 [2024-04-25 03:28:52.491415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.074 [2024-04-25 03:28:52.491440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.074 qpair failed and we were unable to recover it.
00:28:18.074 [2024-04-25 03:28:52.491611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.074 [2024-04-25 03:28:52.491792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.074 [2024-04-25 03:28:52.491817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.074 qpair failed and we were unable to recover it.
00:28:18.074 [2024-04-25 03:28:52.492014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.074 [2024-04-25 03:28:52.492213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.074 [2024-04-25 03:28:52.492239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.074 qpair failed and we were unable to recover it.
00:28:18.074 [2024-04-25 03:28:52.497347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.074 [2024-04-25 03:28:52.497552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.074 [2024-04-25 03:28:52.497577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.074 qpair failed and we were unable to recover it. 00:28:18.074 [2024-04-25 03:28:52.497778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.074 [2024-04-25 03:28:52.497944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.074 [2024-04-25 03:28:52.497968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.074 qpair failed and we were unable to recover it. 00:28:18.074 [2024-04-25 03:28:52.498140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.074 [2024-04-25 03:28:52.498348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.074 [2024-04-25 03:28:52.498372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.074 qpair failed and we were unable to recover it. 00:28:18.074 [2024-04-25 03:28:52.498602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.074 [2024-04-25 03:28:52.498794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.074 [2024-04-25 03:28:52.498818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.074 qpair failed and we were unable to recover it. 
00:28:18.074 [2024-04-25 03:28:52.498996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.075 [2024-04-25 03:28:52.499202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.075 [2024-04-25 03:28:52.499231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.075 qpair failed and we were unable to recover it. 00:28:18.075 [2024-04-25 03:28:52.499404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.075 [2024-04-25 03:28:52.499603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.075 [2024-04-25 03:28:52.499633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.075 qpair failed and we were unable to recover it. 00:28:18.075 [2024-04-25 03:28:52.499825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.075 [2024-04-25 03:28:52.500008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.075 [2024-04-25 03:28:52.500032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.075 qpair failed and we were unable to recover it. 00:28:18.075 [2024-04-25 03:28:52.500336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.075 [2024-04-25 03:28:52.500505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.075 [2024-04-25 03:28:52.500529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.075 qpair failed and we were unable to recover it. 
00:28:18.075 [2024-04-25 03:28:52.500706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.075 [2024-04-25 03:28:52.500870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.075 [2024-04-25 03:28:52.500901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.075 qpair failed and we were unable to recover it. 00:28:18.075 [2024-04-25 03:28:52.501100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.075 [2024-04-25 03:28:52.501264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.075 [2024-04-25 03:28:52.501288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.075 qpair failed and we were unable to recover it. 00:28:18.075 [2024-04-25 03:28:52.501520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.075 [2024-04-25 03:28:52.501694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.075 [2024-04-25 03:28:52.501719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.075 qpair failed and we were unable to recover it. 00:28:18.075 [2024-04-25 03:28:52.501951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.075 [2024-04-25 03:28:52.502145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.075 [2024-04-25 03:28:52.502169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.075 qpair failed and we were unable to recover it. 
00:28:18.075 [2024-04-25 03:28:52.502362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.075 [2024-04-25 03:28:52.502555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.075 [2024-04-25 03:28:52.502579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.075 qpair failed and we were unable to recover it. 00:28:18.075 [2024-04-25 03:28:52.502759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.075 [2024-04-25 03:28:52.502935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.075 [2024-04-25 03:28:52.502959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.075 qpair failed and we were unable to recover it. 00:28:18.075 [2024-04-25 03:28:52.503156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.075 [2024-04-25 03:28:52.503323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.075 [2024-04-25 03:28:52.503347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.075 qpair failed and we were unable to recover it. 00:28:18.075 [2024-04-25 03:28:52.503579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.075 [2024-04-25 03:28:52.503793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.075 [2024-04-25 03:28:52.503818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.075 qpair failed and we were unable to recover it. 
00:28:18.075 [2024-04-25 03:28:52.504040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.075 [2024-04-25 03:28:52.504246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.075 [2024-04-25 03:28:52.504271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.075 qpair failed and we were unable to recover it. 00:28:18.075 [2024-04-25 03:28:52.504470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.075 [2024-04-25 03:28:52.504663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.075 [2024-04-25 03:28:52.504688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.075 qpair failed and we were unable to recover it. 00:28:18.075 [2024-04-25 03:28:52.504883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.075 [2024-04-25 03:28:52.505055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.075 [2024-04-25 03:28:52.505079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.075 qpair failed and we were unable to recover it. 00:28:18.075 [2024-04-25 03:28:52.505259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.075 [2024-04-25 03:28:52.505456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.075 [2024-04-25 03:28:52.505480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.075 qpair failed and we were unable to recover it. 
00:28:18.075 [2024-04-25 03:28:52.505685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.075 [2024-04-25 03:28:52.505859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.075 [2024-04-25 03:28:52.505883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.075 qpair failed and we were unable to recover it. 00:28:18.075 [2024-04-25 03:28:52.506083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.075 [2024-04-25 03:28:52.506254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.075 [2024-04-25 03:28:52.506277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.075 qpair failed and we were unable to recover it. 00:28:18.075 [2024-04-25 03:28:52.506476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.075 [2024-04-25 03:28:52.506673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.075 [2024-04-25 03:28:52.506698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.075 qpair failed and we were unable to recover it. 00:28:18.075 [2024-04-25 03:28:52.506901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.075 [2024-04-25 03:28:52.507098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.075 [2024-04-25 03:28:52.507123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.075 qpair failed and we were unable to recover it. 
00:28:18.075 [2024-04-25 03:28:52.507348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.075 [2024-04-25 03:28:52.507539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.075 [2024-04-25 03:28:52.507562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.075 qpair failed and we were unable to recover it. 00:28:18.075 [2024-04-25 03:28:52.507780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.075 [2024-04-25 03:28:52.507946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.075 [2024-04-25 03:28:52.507969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.075 qpair failed and we were unable to recover it. 00:28:18.075 [2024-04-25 03:28:52.508142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.075 [2024-04-25 03:28:52.508338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.075 [2024-04-25 03:28:52.508361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.075 qpair failed and we were unable to recover it. 00:28:18.075 [2024-04-25 03:28:52.508537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.075 [2024-04-25 03:28:52.508735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.075 [2024-04-25 03:28:52.508759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.075 qpair failed and we were unable to recover it. 
00:28:18.075 [2024-04-25 03:28:52.508927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.075 [2024-04-25 03:28:52.509119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.075 [2024-04-25 03:28:52.509143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.075 qpair failed and we were unable to recover it. 00:28:18.075 [2024-04-25 03:28:52.509313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.075 [2024-04-25 03:28:52.509483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.075 [2024-04-25 03:28:52.509507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.075 qpair failed and we were unable to recover it. 00:28:18.075 [2024-04-25 03:28:52.509710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.075 [2024-04-25 03:28:52.509882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.075 [2024-04-25 03:28:52.509916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.075 qpair failed and we were unable to recover it. 00:28:18.075 [2024-04-25 03:28:52.510092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.075 [2024-04-25 03:28:52.510315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.075 [2024-04-25 03:28:52.510340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.075 qpair failed and we were unable to recover it. 
00:28:18.075 [2024-04-25 03:28:52.510561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.075 [2024-04-25 03:28:52.510767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.075 [2024-04-25 03:28:52.510792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.075 qpair failed and we were unable to recover it. 00:28:18.076 [2024-04-25 03:28:52.510963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.076 [2024-04-25 03:28:52.511186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.076 [2024-04-25 03:28:52.511211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.076 qpair failed and we were unable to recover it. 00:28:18.076 [2024-04-25 03:28:52.511418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.076 [2024-04-25 03:28:52.511591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.076 [2024-04-25 03:28:52.511614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.076 qpair failed and we were unable to recover it. 00:28:18.076 [2024-04-25 03:28:52.511831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.076 [2024-04-25 03:28:52.512064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.076 [2024-04-25 03:28:52.512089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.076 qpair failed and we were unable to recover it. 
00:28:18.076 [2024-04-25 03:28:52.512265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.076 [2024-04-25 03:28:52.512433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.076 [2024-04-25 03:28:52.512457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.076 qpair failed and we were unable to recover it. 00:28:18.076 [2024-04-25 03:28:52.512657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.076 [2024-04-25 03:28:52.512864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.076 [2024-04-25 03:28:52.512888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.076 qpair failed and we were unable to recover it. 00:28:18.076 [2024-04-25 03:28:52.513066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.076 [2024-04-25 03:28:52.513233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.076 [2024-04-25 03:28:52.513258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.076 qpair failed and we were unable to recover it. 00:28:18.076 [2024-04-25 03:28:52.513486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.076 [2024-04-25 03:28:52.513690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.076 [2024-04-25 03:28:52.513714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.076 qpair failed and we were unable to recover it. 
00:28:18.076 [2024-04-25 03:28:52.513911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.076 [2024-04-25 03:28:52.514076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.076 [2024-04-25 03:28:52.514099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.076 qpair failed and we were unable to recover it. 00:28:18.076 [2024-04-25 03:28:52.514278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.076 [2024-04-25 03:28:52.514502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.076 [2024-04-25 03:28:52.514527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.076 qpair failed and we were unable to recover it. 00:28:18.076 [2024-04-25 03:28:52.514709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.076 [2024-04-25 03:28:52.514908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.076 [2024-04-25 03:28:52.514934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.076 qpair failed and we were unable to recover it. 00:28:18.076 [2024-04-25 03:28:52.515138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.076 [2024-04-25 03:28:52.515308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.076 [2024-04-25 03:28:52.515332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.076 qpair failed and we were unable to recover it. 
00:28:18.076 [2024-04-25 03:28:52.515528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.076 [2024-04-25 03:28:52.515736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.076 [2024-04-25 03:28:52.515761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.076 qpair failed and we were unable to recover it. 00:28:18.076 [2024-04-25 03:28:52.515937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.076 [2024-04-25 03:28:52.516129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.076 [2024-04-25 03:28:52.516152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.076 qpair failed and we were unable to recover it. 00:28:18.076 [2024-04-25 03:28:52.516354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.076 [2024-04-25 03:28:52.516525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.076 [2024-04-25 03:28:52.516550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.076 qpair failed and we were unable to recover it. 00:28:18.076 [2024-04-25 03:28:52.516753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.076 [2024-04-25 03:28:52.516927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.076 [2024-04-25 03:28:52.516951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.076 qpair failed and we were unable to recover it. 
00:28:18.076 [2024-04-25 03:28:52.517148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.076 [2024-04-25 03:28:52.517350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.076 [2024-04-25 03:28:52.517374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.076 qpair failed and we were unable to recover it. 00:28:18.076 [2024-04-25 03:28:52.517571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.076 [2024-04-25 03:28:52.517746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.076 [2024-04-25 03:28:52.517771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.076 qpair failed and we were unable to recover it. 00:28:18.076 [2024-04-25 03:28:52.517944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.076 [2024-04-25 03:28:52.518133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.076 [2024-04-25 03:28:52.518157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.076 qpair failed and we were unable to recover it. 00:28:18.076 [2024-04-25 03:28:52.518386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.076 [2024-04-25 03:28:52.518608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.076 [2024-04-25 03:28:52.518648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.076 qpair failed and we were unable to recover it. 
00:28:18.076 [2024-04-25 03:28:52.518879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.076 [2024-04-25 03:28:52.519055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.076 [2024-04-25 03:28:52.519078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.076 qpair failed and we were unable to recover it. 00:28:18.076 [2024-04-25 03:28:52.519277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.076 [2024-04-25 03:28:52.519479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.076 [2024-04-25 03:28:52.519502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.076 qpair failed and we were unable to recover it. 00:28:18.076 [2024-04-25 03:28:52.519716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.076 [2024-04-25 03:28:52.519913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.076 [2024-04-25 03:28:52.519937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.076 qpair failed and we were unable to recover it. 00:28:18.076 [2024-04-25 03:28:52.520137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.076 [2024-04-25 03:28:52.520312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.076 [2024-04-25 03:28:52.520341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.076 qpair failed and we were unable to recover it. 
00:28:18.076 [2024-04-25 03:28:52.520537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.076 [2024-04-25 03:28:52.520734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.076 [2024-04-25 03:28:52.520759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.076 qpair failed and we were unable to recover it.
[... identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it" sequence for tqpair=0xd4cf30 (addr=10.0.0.2, port=4420) repeated continuously from 2024-04-25 03:28:52.520933 through 03:28:52.556324 ...]
00:28:18.354 [2024-04-25 03:28:52.556521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.354 [2024-04-25 03:28:52.556739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.354 [2024-04-25 03:28:52.556764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.354 qpair failed and we were unable to recover it.
00:28:18.354 [2024-04-25 03:28:52.556936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.354 [2024-04-25 03:28:52.557130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.354 [2024-04-25 03:28:52.557154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.354 qpair failed and we were unable to recover it. 00:28:18.354 [2024-04-25 03:28:52.557378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.354 [2024-04-25 03:28:52.557601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.354 [2024-04-25 03:28:52.557626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.354 qpair failed and we were unable to recover it. 00:28:18.354 [2024-04-25 03:28:52.557831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.354 [2024-04-25 03:28:52.558000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.354 [2024-04-25 03:28:52.558025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.354 qpair failed and we were unable to recover it. 00:28:18.354 [2024-04-25 03:28:52.558195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.354 [2024-04-25 03:28:52.558385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.354 [2024-04-25 03:28:52.558413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.354 qpair failed and we were unable to recover it. 
00:28:18.354 [2024-04-25 03:28:52.558610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.354 [2024-04-25 03:28:52.558831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.354 [2024-04-25 03:28:52.558856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.354 qpair failed and we were unable to recover it. 00:28:18.354 [2024-04-25 03:28:52.559031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.354 [2024-04-25 03:28:52.559221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.354 [2024-04-25 03:28:52.559245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.354 qpair failed and we were unable to recover it. 00:28:18.354 [2024-04-25 03:28:52.559419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.354 [2024-04-25 03:28:52.559614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.354 [2024-04-25 03:28:52.559644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.354 qpair failed and we were unable to recover it. 00:28:18.354 [2024-04-25 03:28:52.559821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.354 [2024-04-25 03:28:52.560029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.354 [2024-04-25 03:28:52.560053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.354 qpair failed and we were unable to recover it. 
00:28:18.354 [2024-04-25 03:28:52.560247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.354 [2024-04-25 03:28:52.560465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.354 [2024-04-25 03:28:52.560489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.354 qpair failed and we were unable to recover it. 00:28:18.354 [2024-04-25 03:28:52.560660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.354 [2024-04-25 03:28:52.560831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.354 [2024-04-25 03:28:52.560855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.354 qpair failed and we were unable to recover it. 00:28:18.354 [2024-04-25 03:28:52.561088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.354 [2024-04-25 03:28:52.561308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.354 [2024-04-25 03:28:52.561331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.354 qpair failed and we were unable to recover it. 00:28:18.354 [2024-04-25 03:28:52.561496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.354 [2024-04-25 03:28:52.561717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.354 [2024-04-25 03:28:52.561742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.354 qpair failed and we were unable to recover it. 
00:28:18.354 [2024-04-25 03:28:52.561951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.354 [2024-04-25 03:28:52.562146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.354 [2024-04-25 03:28:52.562172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.354 qpair failed and we were unable to recover it. 00:28:18.354 [2024-04-25 03:28:52.562343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.354 [2024-04-25 03:28:52.562539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.354 [2024-04-25 03:28:52.562563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.354 qpair failed and we were unable to recover it. 00:28:18.354 [2024-04-25 03:28:52.562799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.355 [2024-04-25 03:28:52.563023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.355 [2024-04-25 03:28:52.563048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.355 qpair failed and we were unable to recover it. 00:28:18.355 [2024-04-25 03:28:52.563247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.355 [2024-04-25 03:28:52.563443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.355 [2024-04-25 03:28:52.563467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.355 qpair failed and we were unable to recover it. 
00:28:18.355 [2024-04-25 03:28:52.563666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.355 [2024-04-25 03:28:52.563859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.355 [2024-04-25 03:28:52.563882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.355 qpair failed and we were unable to recover it. 00:28:18.355 [2024-04-25 03:28:52.564056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.355 [2024-04-25 03:28:52.564277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.355 [2024-04-25 03:28:52.564301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.355 qpair failed and we were unable to recover it. 00:28:18.355 [2024-04-25 03:28:52.564496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.355 [2024-04-25 03:28:52.564667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.355 [2024-04-25 03:28:52.564693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.355 qpair failed and we were unable to recover it. 00:28:18.355 [2024-04-25 03:28:52.564870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.355 [2024-04-25 03:28:52.565070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.355 [2024-04-25 03:28:52.565094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.355 qpair failed and we were unable to recover it. 
00:28:18.355 [2024-04-25 03:28:52.565287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.355 [2024-04-25 03:28:52.565461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.355 [2024-04-25 03:28:52.565485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.355 qpair failed and we were unable to recover it. 00:28:18.355 [2024-04-25 03:28:52.565689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.355 [2024-04-25 03:28:52.565860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.355 [2024-04-25 03:28:52.565884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.355 qpair failed and we were unable to recover it. 00:28:18.355 [2024-04-25 03:28:52.566058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.355 [2024-04-25 03:28:52.566219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.355 [2024-04-25 03:28:52.566244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.355 qpair failed and we were unable to recover it. 00:28:18.355 [2024-04-25 03:28:52.566416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.355 [2024-04-25 03:28:52.566608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.355 [2024-04-25 03:28:52.566646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.355 qpair failed and we were unable to recover it. 
00:28:18.355 [2024-04-25 03:28:52.566830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.355 [2024-04-25 03:28:52.567030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.355 [2024-04-25 03:28:52.567054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.355 qpair failed and we were unable to recover it. 00:28:18.355 [2024-04-25 03:28:52.567258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.355 [2024-04-25 03:28:52.567451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.355 [2024-04-25 03:28:52.567475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.355 qpair failed and we were unable to recover it. 00:28:18.355 [2024-04-25 03:28:52.567671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.355 [2024-04-25 03:28:52.567893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.355 [2024-04-25 03:28:52.567918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.355 qpair failed and we were unable to recover it. 00:28:18.355 [2024-04-25 03:28:52.568125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.355 [2024-04-25 03:28:52.568345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.355 [2024-04-25 03:28:52.568369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.355 qpair failed and we were unable to recover it. 
00:28:18.355 [2024-04-25 03:28:52.568588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.355 [2024-04-25 03:28:52.568764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.355 [2024-04-25 03:28:52.568789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.355 qpair failed and we were unable to recover it. 00:28:18.355 [2024-04-25 03:28:52.568983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.355 [2024-04-25 03:28:52.569179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.355 [2024-04-25 03:28:52.569203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.355 qpair failed and we were unable to recover it. 00:28:18.355 [2024-04-25 03:28:52.569374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.355 [2024-04-25 03:28:52.569537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.355 [2024-04-25 03:28:52.569560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.355 qpair failed and we were unable to recover it. 00:28:18.355 [2024-04-25 03:28:52.569758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.355 [2024-04-25 03:28:52.569975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.355 [2024-04-25 03:28:52.569999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.355 qpair failed and we were unable to recover it. 
00:28:18.355 [2024-04-25 03:28:52.570199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.355 [2024-04-25 03:28:52.570389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.355 [2024-04-25 03:28:52.570414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.355 qpair failed and we were unable to recover it. 00:28:18.355 [2024-04-25 03:28:52.570608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.355 [2024-04-25 03:28:52.570813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.355 [2024-04-25 03:28:52.570837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.355 qpair failed and we were unable to recover it. 00:28:18.355 [2024-04-25 03:28:52.571058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.355 [2024-04-25 03:28:52.571225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.355 [2024-04-25 03:28:52.571249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.355 qpair failed and we were unable to recover it. 00:28:18.355 [2024-04-25 03:28:52.571412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.355 [2024-04-25 03:28:52.571601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.355 [2024-04-25 03:28:52.571625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.355 qpair failed and we were unable to recover it. 
00:28:18.355 [2024-04-25 03:28:52.571873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.355 [2024-04-25 03:28:52.572072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.355 [2024-04-25 03:28:52.572096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.355 qpair failed and we were unable to recover it. 00:28:18.355 [2024-04-25 03:28:52.572258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.355 [2024-04-25 03:28:52.572421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.355 [2024-04-25 03:28:52.572446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.355 qpair failed and we were unable to recover it. 00:28:18.355 [2024-04-25 03:28:52.572641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.355 [2024-04-25 03:28:52.572812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.355 [2024-04-25 03:28:52.572836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.355 qpair failed and we were unable to recover it. 00:28:18.355 [2024-04-25 03:28:52.573037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.355 [2024-04-25 03:28:52.573201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.355 [2024-04-25 03:28:52.573225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.355 qpair failed and we were unable to recover it. 
00:28:18.355 [2024-04-25 03:28:52.573419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.355 [2024-04-25 03:28:52.573643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.355 [2024-04-25 03:28:52.573668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.355 qpair failed and we were unable to recover it. 00:28:18.355 [2024-04-25 03:28:52.573859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.355 [2024-04-25 03:28:52.574027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.355 [2024-04-25 03:28:52.574051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.355 qpair failed and we were unable to recover it. 00:28:18.355 [2024-04-25 03:28:52.574215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.355 [2024-04-25 03:28:52.574434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.355 [2024-04-25 03:28:52.574458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.355 qpair failed and we were unable to recover it. 00:28:18.355 [2024-04-25 03:28:52.574658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.355 [2024-04-25 03:28:52.574854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.356 [2024-04-25 03:28:52.574878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.356 qpair failed and we were unable to recover it. 
00:28:18.356 [2024-04-25 03:28:52.575048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.356 [2024-04-25 03:28:52.575244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.356 [2024-04-25 03:28:52.575268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.356 qpair failed and we were unable to recover it. 00:28:18.356 [2024-04-25 03:28:52.575466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.356 [2024-04-25 03:28:52.575709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.356 [2024-04-25 03:28:52.575734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.356 qpair failed and we were unable to recover it. 00:28:18.356 [2024-04-25 03:28:52.575976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.356 [2024-04-25 03:28:52.576197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.356 [2024-04-25 03:28:52.576222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.356 qpair failed and we were unable to recover it. 00:28:18.356 [2024-04-25 03:28:52.576401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.356 [2024-04-25 03:28:52.576618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.356 [2024-04-25 03:28:52.576648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.356 qpair failed and we were unable to recover it. 
00:28:18.356 [2024-04-25 03:28:52.576828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.356 [2024-04-25 03:28:52.577044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.356 [2024-04-25 03:28:52.577068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.356 qpair failed and we were unable to recover it. 00:28:18.356 [2024-04-25 03:28:52.577265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.356 [2024-04-25 03:28:52.577436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.356 [2024-04-25 03:28:52.577461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.356 qpair failed and we were unable to recover it. 00:28:18.356 [2024-04-25 03:28:52.577638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.356 [2024-04-25 03:28:52.577839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.356 [2024-04-25 03:28:52.577863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.356 qpair failed and we were unable to recover it. 00:28:18.356 [2024-04-25 03:28:52.578038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.356 [2024-04-25 03:28:52.578216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.356 [2024-04-25 03:28:52.578240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.356 qpair failed and we were unable to recover it. 
00:28:18.356 [2024-04-25 03:28:52.578440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.356 [2024-04-25 03:28:52.578609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.356 [2024-04-25 03:28:52.578648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.356 qpair failed and we were unable to recover it. 00:28:18.356 [2024-04-25 03:28:52.578850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.356 [2024-04-25 03:28:52.579042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.356 [2024-04-25 03:28:52.579066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.356 qpair failed and we were unable to recover it. 00:28:18.356 [2024-04-25 03:28:52.579233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.356 [2024-04-25 03:28:52.579431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.356 [2024-04-25 03:28:52.579458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.356 qpair failed and we were unable to recover it. 00:28:18.356 [2024-04-25 03:28:52.579654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.356 [2024-04-25 03:28:52.579828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.356 [2024-04-25 03:28:52.579852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.356 qpair failed and we were unable to recover it. 
00:28:18.356 [2024-04-25 03:28:52.580068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.356 [2024-04-25 03:28:52.580267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.356 [2024-04-25 03:28:52.580293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.356 qpair failed and we were unable to recover it.
00:28:18.356 [... the same sequence (posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111, followed by nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 and "qpair failed and we were unable to recover it.") repeats continuously from 03:28:52.580 through 03:28:52.616 ...]
00:28:18.359 [2024-04-25 03:28:52.616587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.359 [2024-04-25 03:28:52.616814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.359 [2024-04-25 03:28:52.616837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.359 qpair failed and we were unable to recover it. 00:28:18.359 [2024-04-25 03:28:52.617035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.359 [2024-04-25 03:28:52.617209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.359 [2024-04-25 03:28:52.617233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.359 qpair failed and we were unable to recover it. 00:28:18.359 [2024-04-25 03:28:52.617408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.359 [2024-04-25 03:28:52.617601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.359 [2024-04-25 03:28:52.617633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.359 qpair failed and we were unable to recover it. 00:28:18.359 [2024-04-25 03:28:52.617812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.359 [2024-04-25 03:28:52.618010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.359 [2024-04-25 03:28:52.618034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.359 qpair failed and we were unable to recover it. 
00:28:18.359 [2024-04-25 03:28:52.618253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.359 [2024-04-25 03:28:52.618422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.359 [2024-04-25 03:28:52.618446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.359 qpair failed and we were unable to recover it. 00:28:18.359 [2024-04-25 03:28:52.618617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.359 [2024-04-25 03:28:52.618821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.359 [2024-04-25 03:28:52.618845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.359 qpair failed and we were unable to recover it. 00:28:18.359 [2024-04-25 03:28:52.619043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.359 [2024-04-25 03:28:52.619267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.359 [2024-04-25 03:28:52.619291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.359 qpair failed and we were unable to recover it. 00:28:18.359 [2024-04-25 03:28:52.619456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.359 [2024-04-25 03:28:52.619623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.359 [2024-04-25 03:28:52.619652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.359 qpair failed and we were unable to recover it. 
00:28:18.359 [2024-04-25 03:28:52.619850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.359 [2024-04-25 03:28:52.620044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.359 [2024-04-25 03:28:52.620068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.359 qpair failed and we were unable to recover it. 00:28:18.359 [2024-04-25 03:28:52.620240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.359 [2024-04-25 03:28:52.620418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.359 [2024-04-25 03:28:52.620442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.359 qpair failed and we were unable to recover it. 00:28:18.359 [2024-04-25 03:28:52.620676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.359 [2024-04-25 03:28:52.620872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.360 [2024-04-25 03:28:52.620896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.360 qpair failed and we were unable to recover it. 00:28:18.360 [2024-04-25 03:28:52.621097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.360 [2024-04-25 03:28:52.621327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.360 [2024-04-25 03:28:52.621352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.360 qpair failed and we were unable to recover it. 
00:28:18.360 [2024-04-25 03:28:52.621551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.360 [2024-04-25 03:28:52.621718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.360 [2024-04-25 03:28:52.621742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.360 qpair failed and we were unable to recover it. 00:28:18.360 [2024-04-25 03:28:52.621962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.360 [2024-04-25 03:28:52.622137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.360 [2024-04-25 03:28:52.622163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.360 qpair failed and we were unable to recover it. 00:28:18.360 [2024-04-25 03:28:52.622341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.360 [2024-04-25 03:28:52.622511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.360 [2024-04-25 03:28:52.622535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.360 qpair failed and we were unable to recover it. 00:28:18.360 [2024-04-25 03:28:52.622760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.360 [2024-04-25 03:28:52.622957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.360 [2024-04-25 03:28:52.622981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.360 qpair failed and we were unable to recover it. 
00:28:18.360 [2024-04-25 03:28:52.623155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.360 [2024-04-25 03:28:52.623347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.360 [2024-04-25 03:28:52.623371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.360 qpair failed and we were unable to recover it. 00:28:18.360 [2024-04-25 03:28:52.623572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.360 [2024-04-25 03:28:52.623770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.360 [2024-04-25 03:28:52.623794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.360 qpair failed and we were unable to recover it. 00:28:18.360 [2024-04-25 03:28:52.623964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.360 [2024-04-25 03:28:52.624151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.360 [2024-04-25 03:28:52.624176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.360 qpair failed and we were unable to recover it. 00:28:18.360 [2024-04-25 03:28:52.624353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.360 [2024-04-25 03:28:52.624549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.360 [2024-04-25 03:28:52.624573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.360 qpair failed and we were unable to recover it. 
00:28:18.360 [2024-04-25 03:28:52.624760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.360 [2024-04-25 03:28:52.624956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.360 [2024-04-25 03:28:52.624981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.360 qpair failed and we were unable to recover it. 00:28:18.360 [2024-04-25 03:28:52.625202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.360 [2024-04-25 03:28:52.625372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.360 [2024-04-25 03:28:52.625395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.360 qpair failed and we were unable to recover it. 00:28:18.360 [2024-04-25 03:28:52.625619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.360 [2024-04-25 03:28:52.625817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.360 [2024-04-25 03:28:52.625841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.360 qpair failed and we were unable to recover it. 00:28:18.360 [2024-04-25 03:28:52.626052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.360 [2024-04-25 03:28:52.626246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.360 [2024-04-25 03:28:52.626270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.360 qpair failed and we were unable to recover it. 
00:28:18.360 [2024-04-25 03:28:52.626444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.360 [2024-04-25 03:28:52.626644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.360 [2024-04-25 03:28:52.626669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.360 qpair failed and we were unable to recover it. 00:28:18.360 [2024-04-25 03:28:52.626892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.360 [2024-04-25 03:28:52.627065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.360 [2024-04-25 03:28:52.627089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.360 qpair failed and we were unable to recover it. 00:28:18.360 [2024-04-25 03:28:52.627284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.360 [2024-04-25 03:28:52.627483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.360 [2024-04-25 03:28:52.627508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.360 qpair failed and we were unable to recover it. 00:28:18.360 [2024-04-25 03:28:52.627710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.360 [2024-04-25 03:28:52.627879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.360 [2024-04-25 03:28:52.627904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.360 qpair failed and we were unable to recover it. 
00:28:18.360 [2024-04-25 03:28:52.628098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.360 [2024-04-25 03:28:52.628295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.360 [2024-04-25 03:28:52.628319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.360 qpair failed and we were unable to recover it. 00:28:18.360 [2024-04-25 03:28:52.628509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.360 [2024-04-25 03:28:52.628704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.360 [2024-04-25 03:28:52.628728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.360 qpair failed and we were unable to recover it. 00:28:18.360 [2024-04-25 03:28:52.628898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.360 [2024-04-25 03:28:52.629127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.360 [2024-04-25 03:28:52.629152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.360 qpair failed and we were unable to recover it. 00:28:18.360 [2024-04-25 03:28:52.629354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.360 [2024-04-25 03:28:52.629515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.360 [2024-04-25 03:28:52.629539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.360 qpair failed and we were unable to recover it. 
00:28:18.360 [2024-04-25 03:28:52.629709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.360 [2024-04-25 03:28:52.629908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.360 [2024-04-25 03:28:52.629932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.360 qpair failed and we were unable to recover it. 00:28:18.360 [2024-04-25 03:28:52.630097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.360 [2024-04-25 03:28:52.630269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.360 [2024-04-25 03:28:52.630294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.360 qpair failed and we were unable to recover it. 00:28:18.360 [2024-04-25 03:28:52.630517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.360 [2024-04-25 03:28:52.630742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.360 [2024-04-25 03:28:52.630766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.360 qpair failed and we were unable to recover it. 00:28:18.360 [2024-04-25 03:28:52.630971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.360 [2024-04-25 03:28:52.631133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.360 [2024-04-25 03:28:52.631157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.360 qpair failed and we were unable to recover it. 
00:28:18.360 [2024-04-25 03:28:52.631355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.360 [2024-04-25 03:28:52.631546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.360 [2024-04-25 03:28:52.631569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.360 qpair failed and we were unable to recover it. 00:28:18.360 [2024-04-25 03:28:52.631793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.360 [2024-04-25 03:28:52.631985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.360 [2024-04-25 03:28:52.632009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.360 qpair failed and we were unable to recover it. 00:28:18.360 [2024-04-25 03:28:52.632180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.360 [2024-04-25 03:28:52.632406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.360 [2024-04-25 03:28:52.632430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.360 qpair failed and we were unable to recover it. 00:28:18.360 [2024-04-25 03:28:52.632601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.360 [2024-04-25 03:28:52.632799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.361 [2024-04-25 03:28:52.632824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.361 qpair failed and we were unable to recover it. 
00:28:18.361 [2024-04-25 03:28:52.633022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.361 [2024-04-25 03:28:52.633241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.361 [2024-04-25 03:28:52.633265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.361 qpair failed and we were unable to recover it. 00:28:18.361 [2024-04-25 03:28:52.633460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.361 [2024-04-25 03:28:52.633684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.361 [2024-04-25 03:28:52.633708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.361 qpair failed and we were unable to recover it. 00:28:18.361 [2024-04-25 03:28:52.633881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.361 [2024-04-25 03:28:52.634080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.361 [2024-04-25 03:28:52.634106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.361 qpair failed and we were unable to recover it. 00:28:18.361 [2024-04-25 03:28:52.634301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.361 [2024-04-25 03:28:52.634500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.361 [2024-04-25 03:28:52.634525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.361 qpair failed and we were unable to recover it. 
00:28:18.361 [2024-04-25 03:28:52.634723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.361 [2024-04-25 03:28:52.634899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.361 [2024-04-25 03:28:52.634923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.361 qpair failed and we were unable to recover it. 00:28:18.361 [2024-04-25 03:28:52.635088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.361 [2024-04-25 03:28:52.635287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.361 [2024-04-25 03:28:52.635312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.361 qpair failed and we were unable to recover it. 00:28:18.361 [2024-04-25 03:28:52.635471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.361 [2024-04-25 03:28:52.635641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.361 [2024-04-25 03:28:52.635666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.361 qpair failed and we were unable to recover it. 00:28:18.361 [2024-04-25 03:28:52.635861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.361 [2024-04-25 03:28:52.636052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.361 [2024-04-25 03:28:52.636077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.361 qpair failed and we were unable to recover it. 
00:28:18.361 [2024-04-25 03:28:52.636251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.361 [2024-04-25 03:28:52.636465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.361 [2024-04-25 03:28:52.636489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.361 qpair failed and we were unable to recover it. 00:28:18.361 [2024-04-25 03:28:52.636685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.361 [2024-04-25 03:28:52.636855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.361 [2024-04-25 03:28:52.636880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.361 qpair failed and we were unable to recover it. 00:28:18.361 [2024-04-25 03:28:52.637102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.361 [2024-04-25 03:28:52.637300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.361 [2024-04-25 03:28:52.637325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.361 qpair failed and we were unable to recover it. 00:28:18.361 [2024-04-25 03:28:52.637524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.361 [2024-04-25 03:28:52.637721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.361 [2024-04-25 03:28:52.637745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.361 qpair failed and we were unable to recover it. 
00:28:18.361 [2024-04-25 03:28:52.637942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.361 [2024-04-25 03:28:52.638132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.361 [2024-04-25 03:28:52.638156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.361 qpair failed and we were unable to recover it. 00:28:18.361 [2024-04-25 03:28:52.638353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.361 [2024-04-25 03:28:52.638551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.361 [2024-04-25 03:28:52.638579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.361 qpair failed and we were unable to recover it. 00:28:18.361 [2024-04-25 03:28:52.638781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.361 [2024-04-25 03:28:52.638949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.361 [2024-04-25 03:28:52.638973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.361 qpair failed and we were unable to recover it. 00:28:18.361 [2024-04-25 03:28:52.639197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.361 [2024-04-25 03:28:52.639396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.361 [2024-04-25 03:28:52.639420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.361 qpair failed and we were unable to recover it. 
00:28:18.361 [2024-04-25 03:28:52.639587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.361 [2024-04-25 03:28:52.639790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.361 [2024-04-25 03:28:52.639814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.361 qpair failed and we were unable to recover it. 00:28:18.361 [2024-04-25 03:28:52.640004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.361 [2024-04-25 03:28:52.640201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.361 [2024-04-25 03:28:52.640225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.361 qpair failed and we were unable to recover it. 00:28:18.361 [2024-04-25 03:28:52.640425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.361 [2024-04-25 03:28:52.640616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.361 [2024-04-25 03:28:52.640646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.361 qpair failed and we were unable to recover it. 00:28:18.361 [2024-04-25 03:28:52.640816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.361 [2024-04-25 03:28:52.640996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.361 [2024-04-25 03:28:52.641019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.361 qpair failed and we were unable to recover it. 
00:28:18.364 [2024-04-25 03:28:52.675913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.364 [2024-04-25 03:28:52.676132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.364 [2024-04-25 03:28:52.676156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.364 qpair failed and we were unable to recover it.
00:28:18.364 [2024-04-25 03:28:52.676387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.365 [2024-04-25 03:28:52.676550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.365 [2024-04-25 03:28:52.676577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.365 qpair failed and we were unable to recover it.
00:28:18.365 [2024-04-25 03:28:52.676777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.365 [2024-04-25 03:28:52.676947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.365 [2024-04-25 03:28:52.676973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.365 qpair failed and we were unable to recover it.
00:28:18.365 [2024-04-25 03:28:52.677215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.365 [2024-04-25 03:28:52.677440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.365 [2024-04-25 03:28:52.677464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.365 qpair failed and we were unable to recover it.
00:28:18.365 [2024-04-25 03:28:52.677666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.365 [2024-04-25 03:28:52.677887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.365 [2024-04-25 03:28:52.677911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.365 qpair failed and we were unable to recover it.
00:28:18.365 [2024-04-25 03:28:52.678107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.365 [2024-04-25 03:28:52.678280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.365 [2024-04-25 03:28:52.678304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.365 qpair failed and we were unable to recover it.
00:28:18.365 [2024-04-25 03:28:52.678525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.365 [2024-04-25 03:28:52.678722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.365 [2024-04-25 03:28:52.678746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.365 qpair failed and we were unable to recover it.
00:28:18.365 [2024-04-25 03:28:52.678969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.365 [2024-04-25 03:28:52.679200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.365 [2024-04-25 03:28:52.679224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.365 qpair failed and we were unable to recover it.
00:28:18.365 [2024-04-25 03:28:52.679420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.365 [2024-04-25 03:28:52.679651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.365 [2024-04-25 03:28:52.679682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.365 qpair failed and we were unable to recover it.
00:28:18.365 [2024-04-25 03:28:52.679879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.365 [2024-04-25 03:28:52.680109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.365 [2024-04-25 03:28:52.680133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.365 qpair failed and we were unable to recover it.
00:28:18.365 [2024-04-25 03:28:52.680304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.365 [2024-04-25 03:28:52.680476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.365 [2024-04-25 03:28:52.680502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.365 qpair failed and we were unable to recover it.
00:28:18.365 [2024-04-25 03:28:52.680734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.365 [2024-04-25 03:28:52.680902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.365 [2024-04-25 03:28:52.680926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.365 qpair failed and we were unable to recover it.
00:28:18.365 [2024-04-25 03:28:52.681110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.365 [2024-04-25 03:28:52.681277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.365 [2024-04-25 03:28:52.681301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.365 qpair failed and we were unable to recover it.
00:28:18.365 [2024-04-25 03:28:52.681497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.365 [2024-04-25 03:28:52.681666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.365 [2024-04-25 03:28:52.681690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.365 qpair failed and we were unable to recover it.
00:28:18.365 [2024-04-25 03:28:52.681890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.365 [2024-04-25 03:28:52.682051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.365 [2024-04-25 03:28:52.682075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.365 qpair failed and we were unable to recover it.
00:28:18.365 [2024-04-25 03:28:52.682274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.365 [2024-04-25 03:28:52.682441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.365 [2024-04-25 03:28:52.682465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.365 qpair failed and we were unable to recover it.
00:28:18.365 [2024-04-25 03:28:52.682674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.365 [2024-04-25 03:28:52.682848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.365 [2024-04-25 03:28:52.682871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.365 qpair failed and we were unable to recover it.
00:28:18.365 [2024-04-25 03:28:52.683069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.365 [2024-04-25 03:28:52.683241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.365 [2024-04-25 03:28:52.683264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.365 qpair failed and we were unable to recover it.
00:28:18.365 [2024-04-25 03:28:52.683438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.365 [2024-04-25 03:28:52.683604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.365 [2024-04-25 03:28:52.683631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.365 qpair failed and we were unable to recover it.
00:28:18.365 [2024-04-25 03:28:52.683836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.365 [2024-04-25 03:28:52.684000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.365 [2024-04-25 03:28:52.684023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.365 qpair failed and we were unable to recover it.
00:28:18.365 [2024-04-25 03:28:52.684232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.365 [2024-04-25 03:28:52.684421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.365 [2024-04-25 03:28:52.684444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.365 qpair failed and we were unable to recover it.
00:28:18.365 [2024-04-25 03:28:52.684664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.365 [2024-04-25 03:28:52.684860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.365 [2024-04-25 03:28:52.684884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.365 qpair failed and we were unable to recover it.
00:28:18.365 [2024-04-25 03:28:52.685086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.365 [2024-04-25 03:28:52.685303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.365 [2024-04-25 03:28:52.685327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.365 qpair failed and we were unable to recover it.
00:28:18.365 [2024-04-25 03:28:52.685532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.365 [2024-04-25 03:28:52.685727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.365 [2024-04-25 03:28:52.685750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.365 qpair failed and we were unable to recover it.
00:28:18.365 [2024-04-25 03:28:52.685941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.365 [2024-04-25 03:28:52.686145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.365 [2024-04-25 03:28:52.686169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.365 qpair failed and we were unable to recover it.
00:28:18.365 [2024-04-25 03:28:52.686365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.365 [2024-04-25 03:28:52.686587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.365 [2024-04-25 03:28:52.686611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.365 qpair failed and we were unable to recover it.
00:28:18.365 [2024-04-25 03:28:52.686823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.365 [2024-04-25 03:28:52.687026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.365 [2024-04-25 03:28:52.687051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.365 qpair failed and we were unable to recover it.
00:28:18.365 [2024-04-25 03:28:52.687223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.365 [2024-04-25 03:28:52.687414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.365 [2024-04-25 03:28:52.687437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.365 qpair failed and we were unable to recover it.
00:28:18.365 [2024-04-25 03:28:52.687651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.365 [2024-04-25 03:28:52.687827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.365 [2024-04-25 03:28:52.687852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.365 qpair failed and we were unable to recover it.
00:28:18.365 [2024-04-25 03:28:52.688056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.365 [2024-04-25 03:28:52.688279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.365 [2024-04-25 03:28:52.688302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.365 qpair failed and we were unable to recover it.
00:28:18.365 [2024-04-25 03:28:52.688495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.365 [2024-04-25 03:28:52.688696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.366 [2024-04-25 03:28:52.688720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.366 qpair failed and we were unable to recover it.
00:28:18.366 [2024-04-25 03:28:52.688918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.366 [2024-04-25 03:28:52.689116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.366 [2024-04-25 03:28:52.689141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.366 qpair failed and we were unable to recover it.
00:28:18.366 [2024-04-25 03:28:52.689338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.366 [2024-04-25 03:28:52.689563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.366 [2024-04-25 03:28:52.689587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.366 qpair failed and we were unable to recover it.
00:28:18.366 [2024-04-25 03:28:52.689794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.366 [2024-04-25 03:28:52.689996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.366 [2024-04-25 03:28:52.690020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.366 qpair failed and we were unable to recover it.
00:28:18.366 [2024-04-25 03:28:52.690228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.366 [2024-04-25 03:28:52.690414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.366 [2024-04-25 03:28:52.690437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.366 qpair failed and we were unable to recover it.
00:28:18.366 [2024-04-25 03:28:52.690636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.366 [2024-04-25 03:28:52.690806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.366 [2024-04-25 03:28:52.690831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.366 qpair failed and we were unable to recover it.
00:28:18.366 [2024-04-25 03:28:52.691025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.366 [2024-04-25 03:28:52.691198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.366 [2024-04-25 03:28:52.691222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.366 qpair failed and we were unable to recover it.
00:28:18.366 [2024-04-25 03:28:52.691412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.366 [2024-04-25 03:28:52.691607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.366 [2024-04-25 03:28:52.691634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.366 qpair failed and we were unable to recover it.
00:28:18.366 [2024-04-25 03:28:52.691805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.366 [2024-04-25 03:28:52.692025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.366 [2024-04-25 03:28:52.692049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.366 qpair failed and we were unable to recover it.
00:28:18.366 [2024-04-25 03:28:52.692226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.366 [2024-04-25 03:28:52.692421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.366 [2024-04-25 03:28:52.692445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.366 qpair failed and we were unable to recover it.
00:28:18.366 [2024-04-25 03:28:52.692660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.366 [2024-04-25 03:28:52.692866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.366 [2024-04-25 03:28:52.692889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.366 qpair failed and we were unable to recover it.
00:28:18.366 [2024-04-25 03:28:52.693060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.366 [2024-04-25 03:28:52.693258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.366 [2024-04-25 03:28:52.693287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.366 qpair failed and we were unable to recover it.
00:28:18.366 [2024-04-25 03:28:52.693484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.366 [2024-04-25 03:28:52.693690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.366 [2024-04-25 03:28:52.693716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.366 qpair failed and we were unable to recover it.
00:28:18.366 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 1623270 Killed "${NVMF_APP[@]}" "$@"
00:28:18.366 [2024-04-25 03:28:52.693923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.366 [2024-04-25 03:28:52.694092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.366 [2024-04-25 03:28:52.694117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.366 qpair failed and we were unable to recover it.
00:28:18.366 [2024-04-25 03:28:52.694318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.366 03:28:52 -- host/target_disconnect.sh@56 -- # disconnect_init 10.0.0.2 [2024-04-25 03:28:52.694517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.366 [2024-04-25 03:28:52.694542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.366 qpair failed and we were unable to recover it.
00:28:18.366 03:28:52 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 [2024-04-25 03:28:52.694739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.366 03:28:52 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt [2024-04-25 03:28:52.694935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 [2024-04-25 03:28:52.694960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.366 qpair failed and we were unable to recover it.
00:28:18.366 [2024-04-25 03:28:52.695172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.366 03:28:52 -- common/autotest_common.sh@710 -- # xtrace_disable [2024-04-25 03:28:52.695372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 [2024-04-25 03:28:52.695399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.366 qpair failed and we were unable to recover it.
00:28:18.366 03:28:52 -- common/autotest_common.sh@10 -- # set +x [2024-04-25 03:28:52.695607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.366 [2024-04-25 03:28:52.695826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.366 [2024-04-25 03:28:52.695851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.366 qpair failed and we were unable to recover it.
00:28:18.366 [2024-04-25 03:28:52.696082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.366 [2024-04-25 03:28:52.696303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.366 [2024-04-25 03:28:52.696327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.366 qpair failed and we were unable to recover it.
00:28:18.366 [2024-04-25 03:28:52.696519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.366 [2024-04-25 03:28:52.696747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.366 [2024-04-25 03:28:52.696771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.366 qpair failed and we were unable to recover it.
00:28:18.366 [2024-04-25 03:28:52.696968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.366 [2024-04-25 03:28:52.697165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.366 [2024-04-25 03:28:52.697191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.366 qpair failed and we were unable to recover it.
00:28:18.366 [2024-04-25 03:28:52.697423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.366 [2024-04-25 03:28:52.697622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.366 [2024-04-25 03:28:52.697659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.366 qpair failed and we were unable to recover it.
00:28:18.366 [2024-04-25 03:28:52.697865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.366 [2024-04-25 03:28:52.698069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.366 [2024-04-25 03:28:52.698094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.366 qpair failed and we were unable to recover it.
00:28:18.366 [2024-04-25 03:28:52.698263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.366 [2024-04-25 03:28:52.698487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.366 [2024-04-25 03:28:52.698521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.366 qpair failed and we were unable to recover it.
00:28:18.366 [2024-04-25 03:28:52.698721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.366 [2024-04-25 03:28:52.698889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.366 [2024-04-25 03:28:52.698913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.366 qpair failed and we were unable to recover it.
00:28:18.366 [2024-04-25 03:28:52.699113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.366 [2024-04-25 03:28:52.699338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.366 [2024-04-25 03:28:52.699362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.366 qpair failed and we were unable to recover it.
00:28:18.366 [2024-04-25 03:28:52.699584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.366 [2024-04-25 03:28:52.699814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.366 03:28:52 -- nvmf/common.sh@470 -- # nvmfpid=1623821 [2024-04-25 03:28:52.699838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.366 qpair failed and we were unable to recover it.
00:28:18.366 03:28:52 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 03:28:52 -- nvmf/common.sh@471 -- # waitforlisten 1623821 [2024-04-25 03:28:52.700041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.367 [2024-04-25 03:28:52.700210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.367 [2024-04-25 03:28:52.700235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.367 qpair failed and we were unable to recover it.
00:28:18.367 03:28:52 -- common/autotest_common.sh@817 -- # '[' -z 1623821 ']'
00:28:18.367 03:28:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock [2024-04-25 03:28:52.700438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.367 03:28:52 -- common/autotest_common.sh@822 -- # local max_retries=100 [2024-04-25 03:28:52.700606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 [2024-04-25 03:28:52.700635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.367 qpair failed and we were unable to recover it.
00:28:18.367 03:28:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... [2024-04-25 03:28:52.700841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.367 03:28:52 -- common/autotest_common.sh@826 -- # xtrace_disable [2024-04-25 03:28:52.701042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 03:28:52 -- common/autotest_common.sh@10 -- # set +x [2024-04-25 03:28:52.701067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.367 qpair failed and we were unable to recover it.
00:28:18.367 [2024-04-25 03:28:52.701248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.367 [2024-04-25 03:28:52.701447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.367 [2024-04-25 03:28:52.701471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.367 qpair failed and we were unable to recover it.
00:28:18.367 [2024-04-25 03:28:52.701665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.367 [2024-04-25 03:28:52.701858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.367 [2024-04-25 03:28:52.701882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.367 qpair failed and we were unable to recover it.
00:28:18.367 [2024-04-25 03:28:52.702087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.367 [2024-04-25 03:28:52.702283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.367 [2024-04-25 03:28:52.702308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.367 qpair failed and we were unable to recover it.
00:28:18.367 [2024-04-25 03:28:52.702544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.367 [2024-04-25 03:28:52.702759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.367 [2024-04-25 03:28:52.702784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.367 qpair failed and we were unable to recover it.
00:28:18.367 [2024-04-25 03:28:52.702980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.367 [2024-04-25 03:28:52.703177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.367 [2024-04-25 03:28:52.703200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.367 qpair failed and we were unable to recover it.
00:28:18.367 [2024-04-25 03:28:52.703373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.367 [2024-04-25 03:28:52.703570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.367 [2024-04-25 03:28:52.703594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.367 qpair failed and we were unable to recover it.
00:28:18.367 [2024-04-25 03:28:52.703810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.367 [2024-04-25 03:28:52.704007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.367 [2024-04-25 03:28:52.704032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.367 qpair failed and we were unable to recover it.
00:28:18.367 [2024-04-25 03:28:52.704257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.367 [2024-04-25 03:28:52.704478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.367 [2024-04-25 03:28:52.704501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.367 qpair failed and we were unable to recover it.
00:28:18.367 [2024-04-25 03:28:52.704721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.367 [2024-04-25 03:28:52.704924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.367 [2024-04-25 03:28:52.704948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.367 qpair failed and we were unable to recover it.
00:28:18.367 [2024-04-25 03:28:52.705141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.367 [2024-04-25 03:28:52.705335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.367 [2024-04-25 03:28:52.705359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.367 qpair failed and we were unable to recover it.
00:28:18.367 [2024-04-25 03:28:52.705532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.367 [2024-04-25 03:28:52.705707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.367 [2024-04-25 03:28:52.705730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.367 qpair failed and we were unable to recover it.
00:28:18.367 [2024-04-25 03:28:52.705926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.367 [2024-04-25 03:28:52.706143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.367 [2024-04-25 03:28:52.706167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.367 qpair failed and we were unable to recover it.
00:28:18.367 [2024-04-25 03:28:52.706333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.367 [2024-04-25 03:28:52.706535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.367 [2024-04-25 03:28:52.706559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.367 qpair failed and we were unable to recover it.
00:28:18.367 [2024-04-25 03:28:52.706777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.367 [2024-04-25 03:28:52.706995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.367 [2024-04-25 03:28:52.707019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.367 qpair failed and we were unable to recover it.
00:28:18.367 [2024-04-25 03:28:52.707192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.367 [2024-04-25 03:28:52.707356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.367 [2024-04-25 03:28:52.707381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.367 qpair failed and we were unable to recover it.
00:28:18.367 [2024-04-25 03:28:52.707597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.367 [2024-04-25 03:28:52.707800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.367 [2024-04-25 03:28:52.707824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.367 qpair failed and we were unable to recover it.
00:28:18.367 [2024-04-25 03:28:52.707998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.367 [2024-04-25 03:28:52.708173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.367 [2024-04-25 03:28:52.708196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.367 qpair failed and we were unable to recover it.
00:28:18.367 [2024-04-25 03:28:52.708370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.367 [2024-04-25 03:28:52.708541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.367 [2024-04-25 03:28:52.708565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.367 qpair failed and we were unable to recover it.
00:28:18.367 [2024-04-25 03:28:52.708741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.367 [2024-04-25 03:28:52.708950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.367 [2024-04-25 03:28:52.708974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.367 qpair failed and we were unable to recover it.
00:28:18.367 [2024-04-25 03:28:52.709143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.367 [2024-04-25 03:28:52.709316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.367 [2024-04-25 03:28:52.709341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.367 qpair failed and we were unable to recover it.
00:28:18.368 [2024-04-25 03:28:52.709550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.368 [2024-04-25 03:28:52.709711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.368 [2024-04-25 03:28:52.709736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.368 qpair failed and we were unable to recover it.
00:28:18.368 [2024-04-25 03:28:52.709903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.368 [2024-04-25 03:28:52.710099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.368 [2024-04-25 03:28:52.710123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.368 qpair failed and we were unable to recover it.
00:28:18.368 [2024-04-25 03:28:52.710288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.368 [2024-04-25 03:28:52.710478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.368 [2024-04-25 03:28:52.710502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.368 qpair failed and we were unable to recover it.
00:28:18.368 [2024-04-25 03:28:52.710705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.368 [2024-04-25 03:28:52.710920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.368 [2024-04-25 03:28:52.710944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.368 qpair failed and we were unable to recover it.
00:28:18.368 [2024-04-25 03:28:52.711164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.368 [2024-04-25 03:28:52.711399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.368 [2024-04-25 03:28:52.711423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.368 qpair failed and we were unable to recover it.
00:28:18.368 [2024-04-25 03:28:52.711654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.368 [2024-04-25 03:28:52.711853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.368 [2024-04-25 03:28:52.711877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.368 qpair failed and we were unable to recover it.
00:28:18.368 [2024-04-25 03:28:52.712080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.368 [2024-04-25 03:28:52.712302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.368 [2024-04-25 03:28:52.712325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.368 qpair failed and we were unable to recover it.
00:28:18.368 [2024-04-25 03:28:52.712541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.368 [2024-04-25 03:28:52.712708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.368 [2024-04-25 03:28:52.712732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.368 qpair failed and we were unable to recover it.
00:28:18.368 [2024-04-25 03:28:52.712909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.368 [2024-04-25 03:28:52.713101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.368 [2024-04-25 03:28:52.713125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.368 qpair failed and we were unable to recover it.
00:28:18.368 [2024-04-25 03:28:52.713327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.368 [2024-04-25 03:28:52.713506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.368 [2024-04-25 03:28:52.713529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.368 qpair failed and we were unable to recover it.
00:28:18.368 [2024-04-25 03:28:52.713751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.368 [2024-04-25 03:28:52.713920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.368 [2024-04-25 03:28:52.713945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.368 qpair failed and we were unable to recover it.
00:28:18.368 [2024-04-25 03:28:52.714147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.368 [2024-04-25 03:28:52.714373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.368 [2024-04-25 03:28:52.714398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.368 qpair failed and we were unable to recover it.
00:28:18.368 [2024-04-25 03:28:52.714618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.368 [2024-04-25 03:28:52.714789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.368 [2024-04-25 03:28:52.714812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.368 qpair failed and we were unable to recover it.
00:28:18.368 [2024-04-25 03:28:52.715013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.368 [2024-04-25 03:28:52.715205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.368 [2024-04-25 03:28:52.715229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.368 qpair failed and we were unable to recover it.
00:28:18.368 [2024-04-25 03:28:52.715424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.368 [2024-04-25 03:28:52.715591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.368 [2024-04-25 03:28:52.715614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.368 qpair failed and we were unable to recover it.
00:28:18.368 [2024-04-25 03:28:52.715819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.368 [2024-04-25 03:28:52.716013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.368 [2024-04-25 03:28:52.716038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.368 qpair failed and we were unable to recover it.
00:28:18.368 [2024-04-25 03:28:52.716258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.368 [2024-04-25 03:28:52.716429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.368 [2024-04-25 03:28:52.716453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.368 qpair failed and we were unable to recover it.
00:28:18.368 [2024-04-25 03:28:52.716641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.368 [2024-04-25 03:28:52.716832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.368 [2024-04-25 03:28:52.716857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.368 qpair failed and we were unable to recover it.
00:28:18.368 [2024-04-25 03:28:52.717031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.368 [2024-04-25 03:28:52.717226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.368 [2024-04-25 03:28:52.717250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.368 qpair failed and we were unable to recover it.
00:28:18.368 [2024-04-25 03:28:52.717452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.368 [2024-04-25 03:28:52.717616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.368 [2024-04-25 03:28:52.717647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.368 qpair failed and we were unable to recover it.
00:28:18.368 [2024-04-25 03:28:52.717845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.368 [2024-04-25 03:28:52.718040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.368 [2024-04-25 03:28:52.718070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.368 qpair failed and we were unable to recover it.
00:28:18.368 [2024-04-25 03:28:52.718261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.368 [2024-04-25 03:28:52.718456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.368 [2024-04-25 03:28:52.718480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.368 qpair failed and we were unable to recover it.
00:28:18.368 [2024-04-25 03:28:52.718705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.368 [2024-04-25 03:28:52.718884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.368 [2024-04-25 03:28:52.718908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.368 qpair failed and we were unable to recover it.
00:28:18.368 [2024-04-25 03:28:52.719138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.368 [2024-04-25 03:28:52.719322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.368 [2024-04-25 03:28:52.719346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.368 qpair failed and we were unable to recover it.
00:28:18.368 [2024-04-25 03:28:52.719518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.368 [2024-04-25 03:28:52.719710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.368 [2024-04-25 03:28:52.719735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.368 qpair failed and we were unable to recover it.
00:28:18.368 [2024-04-25 03:28:52.719909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.368 [2024-04-25 03:28:52.720108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.368 [2024-04-25 03:28:52.720133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.368 qpair failed and we were unable to recover it.
00:28:18.368 [2024-04-25 03:28:52.720352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.368 [2024-04-25 03:28:52.720519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.368 [2024-04-25 03:28:52.720543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.368 qpair failed and we were unable to recover it.
00:28:18.368 [2024-04-25 03:28:52.720722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.368 [2024-04-25 03:28:52.720917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.368 [2024-04-25 03:28:52.720941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.368 qpair failed and we were unable to recover it.
00:28:18.368 [2024-04-25 03:28:52.721164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.368 [2024-04-25 03:28:52.721361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.368 [2024-04-25 03:28:52.721387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.368 qpair failed and we were unable to recover it.
00:28:18.369 [2024-04-25 03:28:52.721583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.369 [2024-04-25 03:28:52.721751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.369 [2024-04-25 03:28:52.721776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.369 qpair failed and we were unable to recover it.
00:28:18.369 [2024-04-25 03:28:52.721948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.369 [2024-04-25 03:28:52.722142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.369 [2024-04-25 03:28:52.722172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.369 qpair failed and we were unable to recover it.
00:28:18.369 [2024-04-25 03:28:52.722371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.369 [2024-04-25 03:28:52.722570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.369 [2024-04-25 03:28:52.722595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.369 qpair failed and we were unable to recover it.
00:28:18.369 [2024-04-25 03:28:52.722781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.369 [2024-04-25 03:28:52.722973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.369 [2024-04-25 03:28:52.722997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.369 qpair failed and we were unable to recover it.
00:28:18.369 [2024-04-25 03:28:52.723162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.369 [2024-04-25 03:28:52.723386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.369 [2024-04-25 03:28:52.723410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.369 qpair failed and we were unable to recover it.
00:28:18.369 [2024-04-25 03:28:52.723648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.369 [2024-04-25 03:28:52.723824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.369 [2024-04-25 03:28:52.723848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.369 qpair failed and we were unable to recover it.
00:28:18.369 [2024-04-25 03:28:52.724056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.369 [2024-04-25 03:28:52.724245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.369 [2024-04-25 03:28:52.724269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.369 qpair failed and we were unable to recover it.
00:28:18.369 [2024-04-25 03:28:52.724488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.369 [2024-04-25 03:28:52.724683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.369 [2024-04-25 03:28:52.724707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.369 qpair failed and we were unable to recover it.
00:28:18.369 [2024-04-25 03:28:52.724905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.369 [2024-04-25 03:28:52.725131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.369 [2024-04-25 03:28:52.725155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.369 qpair failed and we were unable to recover it.
00:28:18.369 [2024-04-25 03:28:52.725324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.369 [2024-04-25 03:28:52.725542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.369 [2024-04-25 03:28:52.725566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.369 qpair failed and we were unable to recover it.
00:28:18.369 [2024-04-25 03:28:52.725793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.369 [2024-04-25 03:28:52.725957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.369 [2024-04-25 03:28:52.725981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.369 qpair failed and we were unable to recover it.
00:28:18.369 [2024-04-25 03:28:52.726154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.369 [2024-04-25 03:28:52.726348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.369 [2024-04-25 03:28:52.726373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.369 qpair failed and we were unable to recover it.
00:28:18.369 [2024-04-25 03:28:52.726582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.369 [2024-04-25 03:28:52.726774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.369 [2024-04-25 03:28:52.726799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.369 qpair failed and we were unable to recover it.
00:28:18.369 [2024-04-25 03:28:52.726995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.369 [2024-04-25 03:28:52.727163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.369 [2024-04-25 03:28:52.727186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.369 qpair failed and we were unable to recover it.
00:28:18.369 [2024-04-25 03:28:52.727392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.369 [2024-04-25 03:28:52.727614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.369 [2024-04-25 03:28:52.727652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.369 qpair failed and we were unable to recover it.
00:28:18.369 [2024-04-25 03:28:52.727831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.369 [2024-04-25 03:28:52.728026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.369 [2024-04-25 03:28:52.728057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.369 qpair failed and we were unable to recover it.
00:28:18.369 [2024-04-25 03:28:52.728229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.369 [2024-04-25 03:28:52.728417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.369 [2024-04-25 03:28:52.728442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.369 qpair failed and we were unable to recover it.
00:28:18.369 [2024-04-25 03:28:52.728665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.369 [2024-04-25 03:28:52.728860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.369 [2024-04-25 03:28:52.728884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.369 qpair failed and we were unable to recover it.
00:28:18.369 [2024-04-25 03:28:52.729065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.369 [2024-04-25 03:28:52.729254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.369 [2024-04-25 03:28:52.729278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.369 qpair failed and we were unable to recover it.
00:28:18.369 [2024-04-25 03:28:52.729474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.369 [2024-04-25 03:28:52.729652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.369 [2024-04-25 03:28:52.729676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.369 qpair failed and we were unable to recover it.
00:28:18.369 [2024-04-25 03:28:52.729908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.369 [2024-04-25 03:28:52.730075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.369 [2024-04-25 03:28:52.730099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.369 qpair failed and we were unable to recover it.
00:28:18.369 [2024-04-25 03:28:52.730294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.369 [2024-04-25 03:28:52.730487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.369 [2024-04-25 03:28:52.730512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.369 qpair failed and we were unable to recover it.
00:28:18.369 [2024-04-25 03:28:52.730730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.369 [2024-04-25 03:28:52.730902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.369 [2024-04-25 03:28:52.730934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.369 qpair failed and we were unable to recover it.
00:28:18.369 [2024-04-25 03:28:52.731157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.369 [2024-04-25 03:28:52.731385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.369 [2024-04-25 03:28:52.731409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.369 qpair failed and we were unable to recover it.
00:28:18.369 [2024-04-25 03:28:52.731606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.369 [2024-04-25 03:28:52.731788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.369 [2024-04-25 03:28:52.731812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.369 qpair failed and we were unable to recover it.
00:28:18.369 [2024-04-25 03:28:52.732021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.369 [2024-04-25 03:28:52.732213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.369 [2024-04-25 03:28:52.732237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.369 qpair failed and we were unable to recover it.
00:28:18.369 [2024-04-25 03:28:52.732437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.369 [2024-04-25 03:28:52.732640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.369 [2024-04-25 03:28:52.732664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.369 qpair failed and we were unable to recover it.
00:28:18.369 [2024-04-25 03:28:52.732885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.369 [2024-04-25 03:28:52.733055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.369 [2024-04-25 03:28:52.733080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.369 qpair failed and we were unable to recover it.
00:28:18.369 [2024-04-25 03:28:52.733302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.369 [2024-04-25 03:28:52.733490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.369 [2024-04-25 03:28:52.733513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.369 qpair failed and we were unable to recover it.
00:28:18.369 [2024-04-25 03:28:52.733688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.370 [2024-04-25 03:28:52.733884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.370 [2024-04-25 03:28:52.733907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.370 qpair failed and we were unable to recover it.
00:28:18.370 [2024-04-25 03:28:52.734111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.370 [2024-04-25 03:28:52.734287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.370 [2024-04-25 03:28:52.734311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.370 qpair failed and we were unable to recover it.
00:28:18.370 [2024-04-25 03:28:52.734529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.370 [2024-04-25 03:28:52.734731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.370 [2024-04-25 03:28:52.734755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.370 qpair failed and we were unable to recover it. 00:28:18.370 [2024-04-25 03:28:52.734949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.370 [2024-04-25 03:28:52.735119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.370 [2024-04-25 03:28:52.735142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.370 qpair failed and we were unable to recover it. 00:28:18.370 [2024-04-25 03:28:52.735314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.370 [2024-04-25 03:28:52.735504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.370 [2024-04-25 03:28:52.735528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.370 qpair failed and we were unable to recover it. 00:28:18.370 [2024-04-25 03:28:52.735750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.370 [2024-04-25 03:28:52.735931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.370 [2024-04-25 03:28:52.735954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.370 qpair failed and we were unable to recover it. 
00:28:18.370 [2024-04-25 03:28:52.736159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.370 [2024-04-25 03:28:52.736333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.370 [2024-04-25 03:28:52.736358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.370 qpair failed and we were unable to recover it. 00:28:18.370 [2024-04-25 03:28:52.736555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.370 [2024-04-25 03:28:52.736747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.370 [2024-04-25 03:28:52.736771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.370 qpair failed and we were unable to recover it. 00:28:18.370 [2024-04-25 03:28:52.736974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.370 [2024-04-25 03:28:52.737143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.370 [2024-04-25 03:28:52.737166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.370 qpair failed and we were unable to recover it. 00:28:18.370 [2024-04-25 03:28:52.737344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.370 [2024-04-25 03:28:52.737572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.370 [2024-04-25 03:28:52.737596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.370 qpair failed and we were unable to recover it. 
00:28:18.370 [2024-04-25 03:28:52.737809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.370 [2024-04-25 03:28:52.737991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.370 [2024-04-25 03:28:52.738014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.370 qpair failed and we were unable to recover it. 00:28:18.370 [2024-04-25 03:28:52.738205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.370 [2024-04-25 03:28:52.738403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.370 [2024-04-25 03:28:52.738427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.370 qpair failed and we were unable to recover it. 00:28:18.370 [2024-04-25 03:28:52.738625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.370 [2024-04-25 03:28:52.738821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.370 [2024-04-25 03:28:52.738844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.370 qpair failed and we were unable to recover it. 00:28:18.370 [2024-04-25 03:28:52.739061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.370 [2024-04-25 03:28:52.739238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.370 [2024-04-25 03:28:52.739267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.370 qpair failed and we were unable to recover it. 
00:28:18.370 [2024-04-25 03:28:52.739436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.370 [2024-04-25 03:28:52.739626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.370 [2024-04-25 03:28:52.739655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.370 qpair failed and we were unable to recover it. 00:28:18.370 [2024-04-25 03:28:52.739844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.370 [2024-04-25 03:28:52.740025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.370 [2024-04-25 03:28:52.740049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.370 qpair failed and we were unable to recover it. 00:28:18.370 [2024-04-25 03:28:52.740223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.370 [2024-04-25 03:28:52.740393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.370 [2024-04-25 03:28:52.740419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.370 qpair failed and we were unable to recover it. 00:28:18.370 [2024-04-25 03:28:52.740648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.370 [2024-04-25 03:28:52.740823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.370 [2024-04-25 03:28:52.740848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.370 qpair failed and we were unable to recover it. 
00:28:18.370 [2024-04-25 03:28:52.741045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.370 [2024-04-25 03:28:52.741263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.370 [2024-04-25 03:28:52.741287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.370 qpair failed and we were unable to recover it. 00:28:18.370 [2024-04-25 03:28:52.741482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.370 [2024-04-25 03:28:52.741675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.370 [2024-04-25 03:28:52.741700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.370 qpair failed and we were unable to recover it. 00:28:18.370 [2024-04-25 03:28:52.741875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.370 [2024-04-25 03:28:52.742097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.370 [2024-04-25 03:28:52.742121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.370 qpair failed and we were unable to recover it. 00:28:18.370 [2024-04-25 03:28:52.742289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.370 [2024-04-25 03:28:52.742516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.370 [2024-04-25 03:28:52.742541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.370 qpair failed and we were unable to recover it. 
00:28:18.370 [2024-04-25 03:28:52.742749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.370 [2024-04-25 03:28:52.742961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.370 [2024-04-25 03:28:52.742986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.370 qpair failed and we were unable to recover it. 00:28:18.370 [2024-04-25 03:28:52.743162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.370 [2024-04-25 03:28:52.743334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.370 [2024-04-25 03:28:52.743357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.370 qpair failed and we were unable to recover it. 00:28:18.370 [2024-04-25 03:28:52.743534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.370 [2024-04-25 03:28:52.743702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.370 [2024-04-25 03:28:52.743728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.370 qpair failed and we were unable to recover it. 00:28:18.370 [2024-04-25 03:28:52.743935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.370 [2024-04-25 03:28:52.744043] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 
00:28:18.370 [2024-04-25 03:28:52.744116] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:18.370 [2024-04-25 03:28:52.744133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.370 [2024-04-25 03:28:52.744157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.370 qpair failed and we were unable to recover it. 00:28:18.370 [2024-04-25 03:28:52.744373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.370 [2024-04-25 03:28:52.744571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.370 [2024-04-25 03:28:52.744593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.370 qpair failed and we were unable to recover it. 00:28:18.370 [2024-04-25 03:28:52.744789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.370 [2024-04-25 03:28:52.744957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.370 [2024-04-25 03:28:52.744983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.370 qpair failed and we were unable to recover it. 00:28:18.370 [2024-04-25 03:28:52.745213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.370 [2024-04-25 03:28:52.745386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.371 [2024-04-25 03:28:52.745410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.371 qpair failed and we were unable to recover it. 
00:28:18.371 [2024-04-25 03:28:52.745610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.371 [2024-04-25 03:28:52.745813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.371 [2024-04-25 03:28:52.745837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.371 qpair failed and we were unable to recover it. 00:28:18.371 [2024-04-25 03:28:52.746038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.371 [2024-04-25 03:28:52.746239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.371 [2024-04-25 03:28:52.746263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.371 qpair failed and we were unable to recover it. 00:28:18.371 [2024-04-25 03:28:52.746460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.371 [2024-04-25 03:28:52.746655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.371 [2024-04-25 03:28:52.746680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.371 qpair failed and we were unable to recover it. 00:28:18.371 [2024-04-25 03:28:52.746872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.371 [2024-04-25 03:28:52.747073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.371 [2024-04-25 03:28:52.747097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.371 qpair failed and we were unable to recover it. 
00:28:18.371 [2024-04-25 03:28:52.747328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.371 [2024-04-25 03:28:52.747503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.371 [2024-04-25 03:28:52.747529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.371 qpair failed and we were unable to recover it. 00:28:18.371 [2024-04-25 03:28:52.747732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.371 [2024-04-25 03:28:52.747907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.371 [2024-04-25 03:28:52.747932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.371 qpair failed and we were unable to recover it. 00:28:18.371 [2024-04-25 03:28:52.748124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.371 [2024-04-25 03:28:52.748296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.371 [2024-04-25 03:28:52.748321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.371 qpair failed and we were unable to recover it. 00:28:18.371 [2024-04-25 03:28:52.748519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.371 [2024-04-25 03:28:52.748723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.371 [2024-04-25 03:28:52.748749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.371 qpair failed and we were unable to recover it. 
00:28:18.371 [2024-04-25 03:28:52.748951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.371 [2024-04-25 03:28:52.749119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.371 [2024-04-25 03:28:52.749143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.371 qpair failed and we were unable to recover it. 00:28:18.371 [2024-04-25 03:28:52.749336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.371 [2024-04-25 03:28:52.749500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.371 [2024-04-25 03:28:52.749524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.371 qpair failed and we were unable to recover it. 00:28:18.371 [2024-04-25 03:28:52.749730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.371 [2024-04-25 03:28:52.749901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.371 [2024-04-25 03:28:52.749925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.371 qpair failed and we were unable to recover it. 00:28:18.371 [2024-04-25 03:28:52.750098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.371 [2024-04-25 03:28:52.750264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.371 [2024-04-25 03:28:52.750289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.371 qpair failed and we were unable to recover it. 
00:28:18.371 [2024-04-25 03:28:52.750523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.371 [2024-04-25 03:28:52.750696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.371 [2024-04-25 03:28:52.750720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.371 qpair failed and we were unable to recover it. 00:28:18.371 [2024-04-25 03:28:52.750896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.371 [2024-04-25 03:28:52.751094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.371 [2024-04-25 03:28:52.751120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.371 qpair failed and we were unable to recover it. 00:28:18.371 [2024-04-25 03:28:52.751293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.371 [2024-04-25 03:28:52.751498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.371 [2024-04-25 03:28:52.751522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.371 qpair failed and we were unable to recover it. 00:28:18.371 [2024-04-25 03:28:52.751728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.371 [2024-04-25 03:28:52.751901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.371 [2024-04-25 03:28:52.751926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.371 qpair failed and we were unable to recover it. 
00:28:18.371 [2024-04-25 03:28:52.752110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.371 [2024-04-25 03:28:52.752278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.371 [2024-04-25 03:28:52.752302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.371 qpair failed and we were unable to recover it. 00:28:18.371 [2024-04-25 03:28:52.752503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.371 [2024-04-25 03:28:52.752699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.371 [2024-04-25 03:28:52.752724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.371 qpair failed and we were unable to recover it. 00:28:18.371 [2024-04-25 03:28:52.752895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.371 [2024-04-25 03:28:52.753066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.371 [2024-04-25 03:28:52.753090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.371 qpair failed and we were unable to recover it. 00:28:18.371 [2024-04-25 03:28:52.753281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.371 [2024-04-25 03:28:52.753446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.371 [2024-04-25 03:28:52.753470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.371 qpair failed and we were unable to recover it. 
00:28:18.371 [2024-04-25 03:28:52.753671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.371 [2024-04-25 03:28:52.753889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.371 [2024-04-25 03:28:52.753913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.371 qpair failed and we were unable to recover it. 00:28:18.371 [2024-04-25 03:28:52.754106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.371 [2024-04-25 03:28:52.754303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.371 [2024-04-25 03:28:52.754330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.371 qpair failed and we were unable to recover it. 00:28:18.371 [2024-04-25 03:28:52.754513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.371 [2024-04-25 03:28:52.754703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.371 [2024-04-25 03:28:52.754729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.371 qpair failed and we were unable to recover it. 00:28:18.371 [2024-04-25 03:28:52.754935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.371 [2024-04-25 03:28:52.755134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.371 [2024-04-25 03:28:52.755158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.371 qpair failed and we were unable to recover it. 
00:28:18.371 [2024-04-25 03:28:52.755352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.371 [2024-04-25 03:28:52.755546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.372 [2024-04-25 03:28:52.755574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.372 qpair failed and we were unable to recover it. 00:28:18.372 [2024-04-25 03:28:52.755790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.372 [2024-04-25 03:28:52.755983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.372 [2024-04-25 03:28:52.756007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.372 qpair failed and we were unable to recover it. 00:28:18.372 [2024-04-25 03:28:52.756200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.372 [2024-04-25 03:28:52.756366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.372 [2024-04-25 03:28:52.756390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.372 qpair failed and we were unable to recover it. 00:28:18.372 [2024-04-25 03:28:52.756552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.372 [2024-04-25 03:28:52.756758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.372 [2024-04-25 03:28:52.756783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.372 qpair failed and we were unable to recover it. 
00:28:18.372 [2024-04-25 03:28:52.756964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.372 [2024-04-25 03:28:52.757123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.372 [2024-04-25 03:28:52.757147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.372 qpair failed and we were unable to recover it. 00:28:18.372 [2024-04-25 03:28:52.757367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.372 [2024-04-25 03:28:52.757567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.372 [2024-04-25 03:28:52.757591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.372 qpair failed and we were unable to recover it. 00:28:18.372 [2024-04-25 03:28:52.757787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.372 [2024-04-25 03:28:52.757982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.372 [2024-04-25 03:28:52.758006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.372 qpair failed and we were unable to recover it. 00:28:18.372 [2024-04-25 03:28:52.758204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.372 [2024-04-25 03:28:52.758402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.372 [2024-04-25 03:28:52.758427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.372 qpair failed and we were unable to recover it. 
00:28:18.374 EAL: No free 2048 kB hugepages reported on node 1
00:28:18.375 [2024-04-25 03:28:52.793452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.375 [2024-04-25 03:28:52.793995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.375 [2024-04-25 03:28:52.794023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.375 qpair failed and we were unable to recover it. 00:28:18.375 [2024-04-25 03:28:52.794196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.375 [2024-04-25 03:28:52.794396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.375 [2024-04-25 03:28:52.794421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.375 qpair failed and we were unable to recover it. 00:28:18.375 [2024-04-25 03:28:52.794583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.375 [2024-04-25 03:28:52.794773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.375 [2024-04-25 03:28:52.794798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.375 qpair failed and we were unable to recover it. 00:28:18.375 [2024-04-25 03:28:52.795000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.375 [2024-04-25 03:28:52.795229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.375 [2024-04-25 03:28:52.795253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.375 qpair failed and we were unable to recover it. 
00:28:18.375 [2024-04-25 03:28:52.795482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.375 [2024-04-25 03:28:52.795659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.375 [2024-04-25 03:28:52.795684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.375 qpair failed and we were unable to recover it. 00:28:18.375 [2024-04-25 03:28:52.795890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.375 [2024-04-25 03:28:52.796096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.375 [2024-04-25 03:28:52.796119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.375 qpair failed and we were unable to recover it. 00:28:18.375 [2024-04-25 03:28:52.796316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.375 [2024-04-25 03:28:52.796517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.375 [2024-04-25 03:28:52.796542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.375 qpair failed and we were unable to recover it. 00:28:18.375 [2024-04-25 03:28:52.796758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.375 [2024-04-25 03:28:52.796952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.375 [2024-04-25 03:28:52.796976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.375 qpair failed and we were unable to recover it. 
00:28:18.375 [2024-04-25 03:28:52.797151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.375 [2024-04-25 03:28:52.797369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.375 [2024-04-25 03:28:52.797393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.375 qpair failed and we were unable to recover it. 00:28:18.375 [2024-04-25 03:28:52.797584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.375 [2024-04-25 03:28:52.797763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.375 [2024-04-25 03:28:52.797788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.375 qpair failed and we were unable to recover it. 00:28:18.375 [2024-04-25 03:28:52.798010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.375 [2024-04-25 03:28:52.798197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.375 [2024-04-25 03:28:52.798233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.375 qpair failed and we were unable to recover it. 00:28:18.375 [2024-04-25 03:28:52.798431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.375 [2024-04-25 03:28:52.798636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.375 [2024-04-25 03:28:52.798661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.375 qpair failed and we were unable to recover it. 
00:28:18.375 [2024-04-25 03:28:52.798891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.375 [2024-04-25 03:28:52.799093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.375 [2024-04-25 03:28:52.799117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.375 qpair failed and we were unable to recover it. 00:28:18.375 [2024-04-25 03:28:52.799307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.375 [2024-04-25 03:28:52.799492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.375 [2024-04-25 03:28:52.799516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.375 qpair failed and we were unable to recover it. 00:28:18.375 [2024-04-25 03:28:52.799746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.375 [2024-04-25 03:28:52.799916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.375 [2024-04-25 03:28:52.799940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.375 qpair failed and we were unable to recover it. 00:28:18.375 [2024-04-25 03:28:52.800136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.375 [2024-04-25 03:28:52.800343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.375 [2024-04-25 03:28:52.800367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.375 qpair failed and we were unable to recover it. 
00:28:18.375 [2024-04-25 03:28:52.800565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.375 [2024-04-25 03:28:52.800797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.375 [2024-04-25 03:28:52.800821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.375 qpair failed and we were unable to recover it. 00:28:18.375 [2024-04-25 03:28:52.801029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.375 [2024-04-25 03:28:52.801197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.375 [2024-04-25 03:28:52.801221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.375 qpair failed and we were unable to recover it. 00:28:18.375 [2024-04-25 03:28:52.801442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.375 [2024-04-25 03:28:52.801617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.375 [2024-04-25 03:28:52.801646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.375 qpair failed and we were unable to recover it. 00:28:18.375 [2024-04-25 03:28:52.801854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.375 [2024-04-25 03:28:52.802036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.375 [2024-04-25 03:28:52.802060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.375 qpair failed and we were unable to recover it. 
00:28:18.375 [2024-04-25 03:28:52.802253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.375 [2024-04-25 03:28:52.802451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.375 [2024-04-25 03:28:52.802475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.375 qpair failed and we were unable to recover it. 00:28:18.375 [2024-04-25 03:28:52.802674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.375 [2024-04-25 03:28:52.802866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.375 [2024-04-25 03:28:52.802890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.375 qpair failed and we were unable to recover it. 00:28:18.375 [2024-04-25 03:28:52.803074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.375 [2024-04-25 03:28:52.803266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.375 [2024-04-25 03:28:52.803289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.375 qpair failed and we were unable to recover it. 00:28:18.375 [2024-04-25 03:28:52.803485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.375 [2024-04-25 03:28:52.803651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.375 [2024-04-25 03:28:52.803676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.375 qpair failed and we were unable to recover it. 
00:28:18.375 [2024-04-25 03:28:52.803854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.376 [2024-04-25 03:28:52.804086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.376 [2024-04-25 03:28:52.804110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.376 qpair failed and we were unable to recover it. 00:28:18.376 [2024-04-25 03:28:52.804272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.376 [2024-04-25 03:28:52.804463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.376 [2024-04-25 03:28:52.804487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.376 qpair failed and we were unable to recover it. 00:28:18.376 [2024-04-25 03:28:52.804680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.376 [2024-04-25 03:28:52.804876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.376 [2024-04-25 03:28:52.804902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.376 qpair failed and we were unable to recover it. 00:28:18.376 [2024-04-25 03:28:52.805062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.376 [2024-04-25 03:28:52.805274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.376 [2024-04-25 03:28:52.805297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.376 qpair failed and we were unable to recover it. 
00:28:18.376 [2024-04-25 03:28:52.805497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.376 [2024-04-25 03:28:52.805725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.376 [2024-04-25 03:28:52.805749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.376 qpair failed and we were unable to recover it. 00:28:18.376 [2024-04-25 03:28:52.805959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.376 [2024-04-25 03:28:52.806156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.376 [2024-04-25 03:28:52.806182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.376 qpair failed and we were unable to recover it. 00:28:18.376 [2024-04-25 03:28:52.806406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.376 [2024-04-25 03:28:52.806568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.376 [2024-04-25 03:28:52.806591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.376 qpair failed and we were unable to recover it. 00:28:18.376 [2024-04-25 03:28:52.806779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.376 [2024-04-25 03:28:52.806979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.376 [2024-04-25 03:28:52.807004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.376 qpair failed and we were unable to recover it. 
00:28:18.376 [2024-04-25 03:28:52.807180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.376 [2024-04-25 03:28:52.807400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.376 [2024-04-25 03:28:52.807423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.376 qpair failed and we were unable to recover it. 00:28:18.376 [2024-04-25 03:28:52.807623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.376 [2024-04-25 03:28:52.807795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.376 [2024-04-25 03:28:52.807819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.376 qpair failed and we were unable to recover it. 00:28:18.376 [2024-04-25 03:28:52.808033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.376 [2024-04-25 03:28:52.808228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.376 [2024-04-25 03:28:52.808252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.376 qpair failed and we were unable to recover it. 00:28:18.376 [2024-04-25 03:28:52.808421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.376 [2024-04-25 03:28:52.808589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.376 [2024-04-25 03:28:52.808613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.376 qpair failed and we were unable to recover it. 
00:28:18.376 [2024-04-25 03:28:52.808792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.376 [2024-04-25 03:28:52.808961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.376 [2024-04-25 03:28:52.808985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.376 qpair failed and we were unable to recover it. 00:28:18.376 [2024-04-25 03:28:52.809184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.376 [2024-04-25 03:28:52.809345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.376 [2024-04-25 03:28:52.809370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.376 qpair failed and we were unable to recover it. 00:28:18.376 [2024-04-25 03:28:52.809539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.376 [2024-04-25 03:28:52.809744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.376 [2024-04-25 03:28:52.809771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.376 qpair failed and we were unable to recover it. 00:28:18.376 [2024-04-25 03:28:52.809966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.376 [2024-04-25 03:28:52.810195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.376 [2024-04-25 03:28:52.810219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.376 qpair failed and we were unable to recover it. 
00:28:18.376 [2024-04-25 03:28:52.810444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.376 [2024-04-25 03:28:52.810653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.376 [2024-04-25 03:28:52.810678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.376 qpair failed and we were unable to recover it. 00:28:18.376 [2024-04-25 03:28:52.810863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.376 [2024-04-25 03:28:52.811083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.376 [2024-04-25 03:28:52.811107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.376 qpair failed and we were unable to recover it. 00:28:18.376 [2024-04-25 03:28:52.811274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.376 [2024-04-25 03:28:52.811489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.376 [2024-04-25 03:28:52.811512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.376 qpair failed and we were unable to recover it. 00:28:18.376 [2024-04-25 03:28:52.811714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.376 [2024-04-25 03:28:52.811907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.376 [2024-04-25 03:28:52.811931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.376 qpair failed and we were unable to recover it. 
00:28:18.376 [2024-04-25 03:28:52.812109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.376 [2024-04-25 03:28:52.812303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.376 [2024-04-25 03:28:52.812327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.376 qpair failed and we were unable to recover it. 00:28:18.376 [2024-04-25 03:28:52.812548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.376 [2024-04-25 03:28:52.812731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.376 [2024-04-25 03:28:52.812755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.376 qpair failed and we were unable to recover it. 00:28:18.376 [2024-04-25 03:28:52.812976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.376 [2024-04-25 03:28:52.813198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.376 [2024-04-25 03:28:52.813222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.376 qpair failed and we were unable to recover it. 00:28:18.376 [2024-04-25 03:28:52.813416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.376 [2024-04-25 03:28:52.813632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.376 [2024-04-25 03:28:52.813657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.376 qpair failed and we were unable to recover it. 
00:28:18.376 [2024-04-25 03:28:52.813860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.376 [2024-04-25 03:28:52.814062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.376 [2024-04-25 03:28:52.814086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.377 qpair failed and we were unable to recover it. 00:28:18.377 [2024-04-25 03:28:52.814309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.377 [2024-04-25 03:28:52.814504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.377 [2024-04-25 03:28:52.814529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.377 qpair failed and we were unable to recover it. 00:28:18.377 [2024-04-25 03:28:52.814704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.377 [2024-04-25 03:28:52.814900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.377 [2024-04-25 03:28:52.814923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.377 qpair failed and we were unable to recover it. 00:28:18.377 [2024-04-25 03:28:52.815116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.377 [2024-04-25 03:28:52.815311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.377 [2024-04-25 03:28:52.815335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.377 qpair failed and we were unable to recover it. 
00:28:18.377 [2024-04-25 03:28:52.815568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.377 [2024-04-25 03:28:52.815770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.377 [2024-04-25 03:28:52.815795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.377 qpair failed and we were unable to recover it. 00:28:18.377 [2024-04-25 03:28:52.815969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.377 [2024-04-25 03:28:52.816183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.377 [2024-04-25 03:28:52.816206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.377 qpair failed and we were unable to recover it. 00:28:18.377 [2024-04-25 03:28:52.816398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.377 [2024-04-25 03:28:52.816590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.377 [2024-04-25 03:28:52.816614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.377 qpair failed and we were unable to recover it. 00:28:18.377 [2024-04-25 03:28:52.816708] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:18.377 [2024-04-25 03:28:52.816807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.377 [2024-04-25 03:28:52.817004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.377 [2024-04-25 03:28:52.817029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.377 qpair failed and we were unable to recover it. 
00:28:18.377 [2024-04-25 03:28:52.817253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.377 [2024-04-25 03:28:52.817480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.377 [2024-04-25 03:28:52.817504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.377 qpair failed and we were unable to recover it. 00:28:18.377 [2024-04-25 03:28:52.817697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.377 [2024-04-25 03:28:52.817867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.377 [2024-04-25 03:28:52.817892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.377 qpair failed and we were unable to recover it. 00:28:18.377 [2024-04-25 03:28:52.818072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.377 [2024-04-25 03:28:52.818266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.377 [2024-04-25 03:28:52.818290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.377 qpair failed and we were unable to recover it. 00:28:18.377 [2024-04-25 03:28:52.818480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.377 [2024-04-25 03:28:52.818687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.377 [2024-04-25 03:28:52.818712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.377 qpair failed and we were unable to recover it. 
00:28:18.652 [2024-04-25 03:28:52.854250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.652 [2024-04-25 03:28:52.854448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.652 [2024-04-25 03:28:52.854472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.652 qpair failed and we were unable to recover it. 00:28:18.652 [2024-04-25 03:28:52.854658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.652 [2024-04-25 03:28:52.854869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.652 [2024-04-25 03:28:52.854893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.652 qpair failed and we were unable to recover it. 00:28:18.652 [2024-04-25 03:28:52.855091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.652 [2024-04-25 03:28:52.855261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.652 [2024-04-25 03:28:52.855285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.652 qpair failed and we were unable to recover it. 00:28:18.652 [2024-04-25 03:28:52.855483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.652 [2024-04-25 03:28:52.855653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.652 [2024-04-25 03:28:52.855677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.652 qpair failed and we were unable to recover it. 
00:28:18.652 [2024-04-25 03:28:52.855885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.652 [2024-04-25 03:28:52.856077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.652 [2024-04-25 03:28:52.856102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.652 qpair failed and we were unable to recover it. 00:28:18.652 [2024-04-25 03:28:52.856297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.652 [2024-04-25 03:28:52.856459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.652 [2024-04-25 03:28:52.856483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.652 qpair failed and we were unable to recover it. 00:28:18.652 [2024-04-25 03:28:52.856677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.652 [2024-04-25 03:28:52.856849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.652 [2024-04-25 03:28:52.856882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.652 qpair failed and we were unable to recover it. 00:28:18.652 [2024-04-25 03:28:52.857060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.652 [2024-04-25 03:28:52.857259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.652 [2024-04-25 03:28:52.857284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.652 qpair failed and we were unable to recover it. 
00:28:18.652 [2024-04-25 03:28:52.857490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.652 [2024-04-25 03:28:52.857688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.652 [2024-04-25 03:28:52.857713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.652 qpair failed and we were unable to recover it. 00:28:18.652 [2024-04-25 03:28:52.857910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.652 [2024-04-25 03:28:52.858109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.652 [2024-04-25 03:28:52.858134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.652 qpair failed and we were unable to recover it. 00:28:18.652 [2024-04-25 03:28:52.858328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.652 [2024-04-25 03:28:52.858521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.652 [2024-04-25 03:28:52.858545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.652 qpair failed and we were unable to recover it. 00:28:18.652 [2024-04-25 03:28:52.858712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.652 [2024-04-25 03:28:52.858947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.652 [2024-04-25 03:28:52.858972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.652 qpair failed and we were unable to recover it. 
00:28:18.652 [2024-04-25 03:28:52.859144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.652 [2024-04-25 03:28:52.859337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.652 [2024-04-25 03:28:52.859361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.652 qpair failed and we were unable to recover it. 00:28:18.652 [2024-04-25 03:28:52.859572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.652 [2024-04-25 03:28:52.859761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.652 [2024-04-25 03:28:52.859786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.652 qpair failed and we were unable to recover it. 00:28:18.652 [2024-04-25 03:28:52.859955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.652 [2024-04-25 03:28:52.860163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.652 [2024-04-25 03:28:52.860188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.652 qpair failed and we were unable to recover it. 00:28:18.652 [2024-04-25 03:28:52.860383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.652 [2024-04-25 03:28:52.860614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.652 [2024-04-25 03:28:52.860644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.652 qpair failed and we were unable to recover it. 
00:28:18.652 [2024-04-25 03:28:52.860853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.652 [2024-04-25 03:28:52.861047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.652 [2024-04-25 03:28:52.861072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.652 qpair failed and we were unable to recover it. 00:28:18.652 [2024-04-25 03:28:52.861266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.652 [2024-04-25 03:28:52.861434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.652 [2024-04-25 03:28:52.861459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.652 qpair failed and we were unable to recover it. 00:28:18.652 [2024-04-25 03:28:52.861633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.652 [2024-04-25 03:28:52.861832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.652 [2024-04-25 03:28:52.861856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.652 qpair failed and we were unable to recover it. 00:28:18.652 [2024-04-25 03:28:52.862032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.652 [2024-04-25 03:28:52.862223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.652 [2024-04-25 03:28:52.862255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.652 qpair failed and we were unable to recover it. 
00:28:18.652 [2024-04-25 03:28:52.862489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.652 [2024-04-25 03:28:52.862677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.652 [2024-04-25 03:28:52.862701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.652 qpair failed and we were unable to recover it. 00:28:18.652 [2024-04-25 03:28:52.862922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.652 [2024-04-25 03:28:52.863090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.652 [2024-04-25 03:28:52.863114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.652 qpair failed and we were unable to recover it. 00:28:18.652 [2024-04-25 03:28:52.863280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.652 [2024-04-25 03:28:52.863499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.652 [2024-04-25 03:28:52.863523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.652 qpair failed and we were unable to recover it. 00:28:18.652 [2024-04-25 03:28:52.863692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.652 [2024-04-25 03:28:52.863863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.652 [2024-04-25 03:28:52.863887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.652 qpair failed and we were unable to recover it. 
00:28:18.652 [2024-04-25 03:28:52.864056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.652 [2024-04-25 03:28:52.864253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.652 [2024-04-25 03:28:52.864276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.652 qpair failed and we were unable to recover it. 00:28:18.652 [2024-04-25 03:28:52.864471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.652 [2024-04-25 03:28:52.864691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.652 [2024-04-25 03:28:52.864715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.652 qpair failed and we were unable to recover it. 00:28:18.652 [2024-04-25 03:28:52.864920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.652 [2024-04-25 03:28:52.865113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.652 [2024-04-25 03:28:52.865137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.652 qpair failed and we were unable to recover it. 00:28:18.652 [2024-04-25 03:28:52.865305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.652 [2024-04-25 03:28:52.865496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.652 [2024-04-25 03:28:52.865519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.652 qpair failed and we were unable to recover it. 
00:28:18.652 [2024-04-25 03:28:52.865696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.652 [2024-04-25 03:28:52.865864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.652 [2024-04-25 03:28:52.865888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.652 qpair failed and we were unable to recover it. 00:28:18.652 [2024-04-25 03:28:52.866053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.652 [2024-04-25 03:28:52.866226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.652 [2024-04-25 03:28:52.866250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.652 qpair failed and we were unable to recover it. 00:28:18.652 [2024-04-25 03:28:52.866449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.652 [2024-04-25 03:28:52.866644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.652 [2024-04-25 03:28:52.866668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.652 qpair failed and we were unable to recover it. 00:28:18.652 [2024-04-25 03:28:52.866891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.652 [2024-04-25 03:28:52.867057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.652 [2024-04-25 03:28:52.867081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.652 qpair failed and we were unable to recover it. 
00:28:18.652 [2024-04-25 03:28:52.867310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.652 [2024-04-25 03:28:52.867530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.652 [2024-04-25 03:28:52.867555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.652 qpair failed and we were unable to recover it. 00:28:18.652 [2024-04-25 03:28:52.867727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.652 [2024-04-25 03:28:52.867923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.652 [2024-04-25 03:28:52.867948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.652 qpair failed and we were unable to recover it. 00:28:18.652 [2024-04-25 03:28:52.868169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.652 [2024-04-25 03:28:52.868338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.652 [2024-04-25 03:28:52.868362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.652 qpair failed and we were unable to recover it. 00:28:18.652 [2024-04-25 03:28:52.868555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.652 [2024-04-25 03:28:52.868751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.652 [2024-04-25 03:28:52.868776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.652 qpair failed and we were unable to recover it. 
00:28:18.652 [2024-04-25 03:28:52.868955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.653 [2024-04-25 03:28:52.869183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.653 [2024-04-25 03:28:52.869207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.653 qpair failed and we were unable to recover it. 00:28:18.653 [2024-04-25 03:28:52.869404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.653 [2024-04-25 03:28:52.869577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.653 [2024-04-25 03:28:52.869600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.653 qpair failed and we were unable to recover it. 00:28:18.653 [2024-04-25 03:28:52.869778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.653 [2024-04-25 03:28:52.869966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.653 [2024-04-25 03:28:52.869990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.653 qpair failed and we were unable to recover it. 00:28:18.653 [2024-04-25 03:28:52.870159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.653 [2024-04-25 03:28:52.870356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.653 [2024-04-25 03:28:52.870380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.653 qpair failed and we were unable to recover it. 
00:28:18.653 [2024-04-25 03:28:52.870609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.653 [2024-04-25 03:28:52.870790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.653 [2024-04-25 03:28:52.870814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.653 qpair failed and we were unable to recover it. 00:28:18.653 [2024-04-25 03:28:52.870991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.653 [2024-04-25 03:28:52.871151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.653 [2024-04-25 03:28:52.871176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.653 qpair failed and we were unable to recover it. 00:28:18.653 [2024-04-25 03:28:52.871367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.653 [2024-04-25 03:28:52.871538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.653 [2024-04-25 03:28:52.871560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.653 qpair failed and we were unable to recover it. 00:28:18.653 [2024-04-25 03:28:52.871766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.653 [2024-04-25 03:28:52.871988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.653 [2024-04-25 03:28:52.872013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.653 qpair failed and we were unable to recover it. 
00:28:18.653 [2024-04-25 03:28:52.872191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.653 [2024-04-25 03:28:52.872384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.653 [2024-04-25 03:28:52.872409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.653 qpair failed and we were unable to recover it. 00:28:18.653 [2024-04-25 03:28:52.872603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.653 [2024-04-25 03:28:52.872824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.653 [2024-04-25 03:28:52.872849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.653 qpair failed and we were unable to recover it. 00:28:18.653 [2024-04-25 03:28:52.873044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.653 [2024-04-25 03:28:52.873242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.653 [2024-04-25 03:28:52.873266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.653 qpair failed and we were unable to recover it. 00:28:18.653 [2024-04-25 03:28:52.873462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.653 [2024-04-25 03:28:52.873661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.653 [2024-04-25 03:28:52.873685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.653 qpair failed and we were unable to recover it. 
00:28:18.653 [2024-04-25 03:28:52.873911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.653 [2024-04-25 03:28:52.874106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.653 [2024-04-25 03:28:52.874131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.653 qpair failed and we were unable to recover it. 00:28:18.653 [2024-04-25 03:28:52.874368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.653 [2024-04-25 03:28:52.874563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.653 [2024-04-25 03:28:52.874588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.653 qpair failed and we were unable to recover it. 00:28:18.653 [2024-04-25 03:28:52.874785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.653 [2024-04-25 03:28:52.874983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.653 [2024-04-25 03:28:52.875007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.653 qpair failed and we were unable to recover it. 00:28:18.653 [2024-04-25 03:28:52.875224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.653 [2024-04-25 03:28:52.875437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.653 [2024-04-25 03:28:52.875460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.653 qpair failed and we were unable to recover it. 
00:28:18.653 [2024-04-25 03:28:52.875684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.653 [2024-04-25 03:28:52.875886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.653 [2024-04-25 03:28:52.875910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.653 qpair failed and we were unable to recover it. 00:28:18.653 [2024-04-25 03:28:52.876108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.653 [2024-04-25 03:28:52.876336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.653 [2024-04-25 03:28:52.876361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.653 qpair failed and we were unable to recover it. 00:28:18.653 [2024-04-25 03:28:52.876581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.653 [2024-04-25 03:28:52.876777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.653 [2024-04-25 03:28:52.876802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.653 qpair failed and we were unable to recover it. 00:28:18.653 [2024-04-25 03:28:52.876998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.653 [2024-04-25 03:28:52.877206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.653 [2024-04-25 03:28:52.877231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.653 qpair failed and we were unable to recover it. 
00:28:18.653 [2024-04-25 03:28:52.877430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.653 [2024-04-25 03:28:52.877625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.653 [2024-04-25 03:28:52.877654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.653 qpair failed and we were unable to recover it. 00:28:18.653 [2024-04-25 03:28:52.877852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.653 [2024-04-25 03:28:52.878028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.653 [2024-04-25 03:28:52.878051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.653 qpair failed and we were unable to recover it. 00:28:18.653 [2024-04-25 03:28:52.878221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.653 [2024-04-25 03:28:52.878418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.653 [2024-04-25 03:28:52.878442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.653 qpair failed and we were unable to recover it. 00:28:18.653 [2024-04-25 03:28:52.878646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.653 [2024-04-25 03:28:52.878845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.653 [2024-04-25 03:28:52.878870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.653 qpair failed and we were unable to recover it. 
00:28:18.655 [2024-04-25 03:28:52.914148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.655 [2024-04-25 03:28:52.914338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.655 [2024-04-25 03:28:52.914362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.655 qpair failed and we were unable to recover it. 00:28:18.655 [2024-04-25 03:28:52.914582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.655 [2024-04-25 03:28:52.914767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.655 [2024-04-25 03:28:52.914790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.655 qpair failed and we were unable to recover it. 00:28:18.655 [2024-04-25 03:28:52.914966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.655 [2024-04-25 03:28:52.915158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.655 [2024-04-25 03:28:52.915182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.655 qpair failed and we were unable to recover it. 00:28:18.655 [2024-04-25 03:28:52.915377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.655 [2024-04-25 03:28:52.915542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.655 [2024-04-25 03:28:52.915566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.655 qpair failed and we were unable to recover it. 
00:28:18.655 [2024-04-25 03:28:52.915747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.655 [2024-04-25 03:28:52.915947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.655 [2024-04-25 03:28:52.915971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.655 qpair failed and we were unable to recover it. 00:28:18.655 [2024-04-25 03:28:52.916175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.655 [2024-04-25 03:28:52.916338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.655 [2024-04-25 03:28:52.916362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.655 qpair failed and we were unable to recover it. 00:28:18.655 [2024-04-25 03:28:52.916535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.655 [2024-04-25 03:28:52.916738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.655 [2024-04-25 03:28:52.916763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.655 qpair failed and we were unable to recover it. 00:28:18.655 [2024-04-25 03:28:52.916926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.655 [2024-04-25 03:28:52.917128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.655 [2024-04-25 03:28:52.917152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.655 qpair failed and we were unable to recover it. 
00:28:18.655 [2024-04-25 03:28:52.917346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.655 [2024-04-25 03:28:52.917558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.655 [2024-04-25 03:28:52.917582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.655 qpair failed and we were unable to recover it. 00:28:18.655 [2024-04-25 03:28:52.917816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.655 [2024-04-25 03:28:52.917992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.655 [2024-04-25 03:28:52.918018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.655 qpair failed and we were unable to recover it. 00:28:18.655 [2024-04-25 03:28:52.918217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.655 [2024-04-25 03:28:52.918380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.655 [2024-04-25 03:28:52.918404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.655 qpair failed and we were unable to recover it. 00:28:18.655 [2024-04-25 03:28:52.918580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.655 [2024-04-25 03:28:52.918752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.655 [2024-04-25 03:28:52.918777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.655 qpair failed and we were unable to recover it. 
00:28:18.655 [2024-04-25 03:28:52.918954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.655 [2024-04-25 03:28:52.919145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.655 [2024-04-25 03:28:52.919169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.655 qpair failed and we were unable to recover it. 00:28:18.655 [2024-04-25 03:28:52.919382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.655 [2024-04-25 03:28:52.919604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.655 [2024-04-25 03:28:52.919631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.655 qpair failed and we were unable to recover it. 00:28:18.655 [2024-04-25 03:28:52.919825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.655 [2024-04-25 03:28:52.920029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.655 [2024-04-25 03:28:52.920053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.655 qpair failed and we were unable to recover it. 00:28:18.655 [2024-04-25 03:28:52.920247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.655 [2024-04-25 03:28:52.920467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.655 [2024-04-25 03:28:52.920491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.655 qpair failed and we were unable to recover it. 
00:28:18.655 [2024-04-25 03:28:52.920698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.655 [2024-04-25 03:28:52.920858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.655 [2024-04-25 03:28:52.920881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.655 qpair failed and we were unable to recover it. 00:28:18.655 [2024-04-25 03:28:52.921103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.655 [2024-04-25 03:28:52.921272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.655 [2024-04-25 03:28:52.921296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.655 qpair failed and we were unable to recover it. 00:28:18.655 [2024-04-25 03:28:52.921517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.655 [2024-04-25 03:28:52.921727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.655 [2024-04-25 03:28:52.921756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.655 qpair failed and we were unable to recover it. 00:28:18.655 [2024-04-25 03:28:52.921938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.655 [2024-04-25 03:28:52.922106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.655 [2024-04-25 03:28:52.922130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.655 qpair failed and we were unable to recover it. 
00:28:18.655 [2024-04-25 03:28:52.922328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.655 [2024-04-25 03:28:52.922555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.655 [2024-04-25 03:28:52.922580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.655 qpair failed and we were unable to recover it. 00:28:18.655 [2024-04-25 03:28:52.922757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.655 [2024-04-25 03:28:52.922956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.655 [2024-04-25 03:28:52.922981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.655 qpair failed and we were unable to recover it. 00:28:18.655 [2024-04-25 03:28:52.923183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.655 [2024-04-25 03:28:52.923378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.655 [2024-04-25 03:28:52.923402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.655 qpair failed and we were unable to recover it. 00:28:18.655 [2024-04-25 03:28:52.923594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.655 [2024-04-25 03:28:52.923792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.655 [2024-04-25 03:28:52.923816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.655 qpair failed and we were unable to recover it. 
00:28:18.655 [2024-04-25 03:28:52.924017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.655 [2024-04-25 03:28:52.924240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.655 [2024-04-25 03:28:52.924264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.655 qpair failed and we were unable to recover it. 00:28:18.655 [2024-04-25 03:28:52.924465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.655 [2024-04-25 03:28:52.924661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.655 [2024-04-25 03:28:52.924686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.655 qpair failed and we were unable to recover it. 00:28:18.655 [2024-04-25 03:28:52.924884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.655 [2024-04-25 03:28:52.925084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.655 [2024-04-25 03:28:52.925108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.655 qpair failed and we were unable to recover it. 00:28:18.655 [2024-04-25 03:28:52.925275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.655 [2024-04-25 03:28:52.925473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.655 [2024-04-25 03:28:52.925497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.655 qpair failed and we were unable to recover it. 
00:28:18.655 [2024-04-25 03:28:52.925700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.655 [2024-04-25 03:28:52.925874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.655 [2024-04-25 03:28:52.925898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.655 qpair failed and we were unable to recover it. 00:28:18.655 [2024-04-25 03:28:52.926140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.655 [2024-04-25 03:28:52.926299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.655 [2024-04-25 03:28:52.926324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.655 qpair failed and we were unable to recover it. 00:28:18.655 [2024-04-25 03:28:52.926515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.655 [2024-04-25 03:28:52.926710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.655 [2024-04-25 03:28:52.926736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.655 qpair failed and we were unable to recover it. 00:28:18.655 [2024-04-25 03:28:52.926907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.655 [2024-04-25 03:28:52.927137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.655 [2024-04-25 03:28:52.927161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.655 qpair failed and we were unable to recover it. 
00:28:18.655 [2024-04-25 03:28:52.927358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.655 [2024-04-25 03:28:52.927528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.655 [2024-04-25 03:28:52.927552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.655 qpair failed and we were unable to recover it. 00:28:18.655 [2024-04-25 03:28:52.927761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.655 [2024-04-25 03:28:52.927936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.655 [2024-04-25 03:28:52.927960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.655 qpair failed and we were unable to recover it. 00:28:18.655 [2024-04-25 03:28:52.928127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.655 [2024-04-25 03:28:52.928332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.655 [2024-04-25 03:28:52.928356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.655 qpair failed and we were unable to recover it. 00:28:18.655 [2024-04-25 03:28:52.928579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.655 [2024-04-25 03:28:52.928784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.655 [2024-04-25 03:28:52.928809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.655 qpair failed and we were unable to recover it. 
00:28:18.655 [2024-04-25 03:28:52.929034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.655 [2024-04-25 03:28:52.929228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.655 [2024-04-25 03:28:52.929252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.655 qpair failed and we were unable to recover it. 00:28:18.655 [2024-04-25 03:28:52.929413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.655 [2024-04-25 03:28:52.929607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.655 [2024-04-25 03:28:52.929636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.655 qpair failed and we were unable to recover it. 00:28:18.655 [2024-04-25 03:28:52.929811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.655 [2024-04-25 03:28:52.930001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.655 [2024-04-25 03:28:52.930025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.655 qpair failed and we were unable to recover it. 00:28:18.655 [2024-04-25 03:28:52.930198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.655 [2024-04-25 03:28:52.930393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.655 [2024-04-25 03:28:52.930417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.656 qpair failed and we were unable to recover it. 
00:28:18.656 [2024-04-25 03:28:52.930586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.656 [2024-04-25 03:28:52.930832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.656 [2024-04-25 03:28:52.930857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.656 qpair failed and we were unable to recover it. 00:28:18.656 [2024-04-25 03:28:52.931059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.656 [2024-04-25 03:28:52.931253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.656 [2024-04-25 03:28:52.931277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.656 qpair failed and we were unable to recover it. 00:28:18.656 [2024-04-25 03:28:52.931473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.656 [2024-04-25 03:28:52.931669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.656 [2024-04-25 03:28:52.931694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.656 qpair failed and we were unable to recover it. 00:28:18.656 [2024-04-25 03:28:52.931869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.656 [2024-04-25 03:28:52.932035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.656 [2024-04-25 03:28:52.932060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.656 qpair failed and we were unable to recover it. 
00:28:18.656 [2024-04-25 03:28:52.932272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.656 [2024-04-25 03:28:52.932460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.656 [2024-04-25 03:28:52.932484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.656 qpair failed and we were unable to recover it. 00:28:18.656 [2024-04-25 03:28:52.932657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.656 [2024-04-25 03:28:52.932835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.656 [2024-04-25 03:28:52.932859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.656 qpair failed and we were unable to recover it. 00:28:18.656 [2024-04-25 03:28:52.933092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.656 [2024-04-25 03:28:52.933285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.656 [2024-04-25 03:28:52.933308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.656 qpair failed and we were unable to recover it. 00:28:18.656 [2024-04-25 03:28:52.933469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.656 [2024-04-25 03:28:52.933642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.656 [2024-04-25 03:28:52.933669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.656 qpair failed and we were unable to recover it. 
00:28:18.656 [2024-04-25 03:28:52.933869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.656 [2024-04-25 03:28:52.934073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.656 [2024-04-25 03:28:52.934098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.656 qpair failed and we were unable to recover it. 00:28:18.656 [2024-04-25 03:28:52.934274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.656 [2024-04-25 03:28:52.934469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.656 [2024-04-25 03:28:52.934493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.656 qpair failed and we were unable to recover it. 00:28:18.656 [2024-04-25 03:28:52.934727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.656 [2024-04-25 03:28:52.934927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.656 [2024-04-25 03:28:52.934950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.656 qpair failed and we were unable to recover it. 00:28:18.656 [2024-04-25 03:28:52.935111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.656 [2024-04-25 03:28:52.935302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.656 [2024-04-25 03:28:52.935326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.656 qpair failed and we were unable to recover it. 
00:28:18.656 [2024-04-25 03:28:52.935500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.656 [2024-04-25 03:28:52.935664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.656 [2024-04-25 03:28:52.935689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.656 qpair failed and we were unable to recover it. 00:28:18.656 [2024-04-25 03:28:52.935889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.656 [2024-04-25 03:28:52.936059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.656 [2024-04-25 03:28:52.936083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.656 qpair failed and we were unable to recover it. 00:28:18.656 [2024-04-25 03:28:52.936191] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:18.656 [2024-04-25 03:28:52.936227] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:18.656 [2024-04-25 03:28:52.936241] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:18.656 [2024-04-25 03:28:52.936253] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:18.656 [2024-04-25 03:28:52.936264] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:18.656 [2024-04-25 03:28:52.936279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.656 [2024-04-25 03:28:52.936326] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:28:18.656 [2024-04-25 03:28:52.936383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7
00:28:18.656 [2024-04-25 03:28:52.936452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.656 [2024-04-25 03:28:52.936476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.656 qpair failed and we were unable to recover it.
00:28:18.656 [2024-04-25 03:28:52.936354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:28:18.656 [2024-04-25 03:28:52.936391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:28:18.656 [2024-04-25 03:28:52.936693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.656 [2024-04-25 03:28:52.936884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.656 [2024-04-25 03:28:52.936908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.656 qpair failed and we were unable to recover it.
00:28:18.656 [2024-04-25 03:28:52.937075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.656 [2024-04-25 03:28:52.937244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.656 [2024-04-25 03:28:52.937267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.656 qpair failed and we were unable to recover it.
00:28:18.656 [2024-04-25 03:28:52.937549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.656 [2024-04-25 03:28:52.937752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.656 [2024-04-25 03:28:52.937777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.656 qpair failed and we were unable to recover it.
00:28:18.656 [2024-04-25 03:28:52.937948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.656 [2024-04-25 03:28:52.938138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.656 [2024-04-25 03:28:52.938160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.656 qpair failed and we were unable to recover it.
00:28:18.656 [2024-04-25 03:28:52.938362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.656 [2024-04-25 03:28:52.938557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.656 [2024-04-25 03:28:52.938580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.656 qpair failed and we were unable to recover it.
00:28:18.656 [2024-04-25 03:28:52.938783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.656 [2024-04-25 03:28:52.938949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.656 [2024-04-25 03:28:52.938975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.656 qpair failed and we were unable to recover it.
00:28:18.656 [2024-04-25 03:28:52.939167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.656 [2024-04-25 03:28:52.939342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.656 [2024-04-25 03:28:52.939365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.656 qpair failed and we were unable to recover it.
00:28:18.656 [2024-04-25 03:28:52.939570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.656 [2024-04-25 03:28:52.939749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.656 [2024-04-25 03:28:52.939774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.656 qpair failed and we were unable to recover it.
00:28:18.656 [2024-04-25 03:28:52.939966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.656 [2024-04-25 03:28:52.940125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.656 [2024-04-25 03:28:52.940149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.656 qpair failed and we were unable to recover it.
00:28:18.656 [2024-04-25 03:28:52.940366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.656 [2024-04-25 03:28:52.940565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.656 [2024-04-25 03:28:52.940590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.656 qpair failed and we were unable to recover it.
00:28:18.656 [2024-04-25 03:28:52.940800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.656 [2024-04-25 03:28:52.940998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.656 [2024-04-25 03:28:52.941022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.656 qpair failed and we were unable to recover it.
00:28:18.656 [2024-04-25 03:28:52.941220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.656 [2024-04-25 03:28:52.941393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.656 [2024-04-25 03:28:52.941418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.656 qpair failed and we were unable to recover it.
00:28:18.656 [2024-04-25 03:28:52.941619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.656 [2024-04-25 03:28:52.941823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.656 [2024-04-25 03:28:52.941847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.656 qpair failed and we were unable to recover it.
00:28:18.656 [2024-04-25 03:28:52.942034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.656 [2024-04-25 03:28:52.942231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.656 [2024-04-25 03:28:52.942254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.656 qpair failed and we were unable to recover it.
00:28:18.656 [2024-04-25 03:28:52.942425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.656 [2024-04-25 03:28:52.942586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.656 [2024-04-25 03:28:52.942610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.656 qpair failed and we were unable to recover it.
00:28:18.656 [2024-04-25 03:28:52.942788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.656 [2024-04-25 03:28:52.942970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.656 [2024-04-25 03:28:52.942994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.656 qpair failed and we were unable to recover it.
00:28:18.656 [2024-04-25 03:28:52.943175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.656 [2024-04-25 03:28:52.943400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.656 [2024-04-25 03:28:52.943424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.656 qpair failed and we were unable to recover it.
00:28:18.656 [2024-04-25 03:28:52.943602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.656 [2024-04-25 03:28:52.943800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.656 [2024-04-25 03:28:52.943824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.656 qpair failed and we were unable to recover it.
00:28:18.656 [2024-04-25 03:28:52.943992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.656 [2024-04-25 03:28:52.944190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.656 [2024-04-25 03:28:52.944214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.656 qpair failed and we were unable to recover it.
00:28:18.656 [2024-04-25 03:28:52.944378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.656 [2024-04-25 03:28:52.944548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.656 [2024-04-25 03:28:52.944572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.656 qpair failed and we were unable to recover it.
00:28:18.656 [2024-04-25 03:28:52.944764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.656 [2024-04-25 03:28:52.944933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.656 [2024-04-25 03:28:52.944956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.656 qpair failed and we were unable to recover it.
00:28:18.656 [2024-04-25 03:28:52.945152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.656 [2024-04-25 03:28:52.945324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.656 [2024-04-25 03:28:52.945350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.656 qpair failed and we were unable to recover it.
00:28:18.656 [2024-04-25 03:28:52.945564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.656 [2024-04-25 03:28:52.945756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.656 [2024-04-25 03:28:52.945781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.656 qpair failed and we were unable to recover it.
00:28:18.656 [2024-04-25 03:28:52.945945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.656 [2024-04-25 03:28:52.946153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.656 [2024-04-25 03:28:52.946178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.656 qpair failed and we were unable to recover it.
00:28:18.656 [2024-04-25 03:28:52.946385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.656 [2024-04-25 03:28:52.946558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.656 [2024-04-25 03:28:52.946582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.656 qpair failed and we were unable to recover it.
00:28:18.656 [2024-04-25 03:28:52.946766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.656 [2024-04-25 03:28:52.946947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.946970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.657 qpair failed and we were unable to recover it.
00:28:18.657 [2024-04-25 03:28:52.947164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.947327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.947351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.657 qpair failed and we were unable to recover it.
00:28:18.657 [2024-04-25 03:28:52.947548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.947721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.947745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.657 qpair failed and we were unable to recover it.
00:28:18.657 [2024-04-25 03:28:52.947915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.948089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.948114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.657 qpair failed and we were unable to recover it.
00:28:18.657 [2024-04-25 03:28:52.948287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.948450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.948474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.657 qpair failed and we were unable to recover it.
00:28:18.657 [2024-04-25 03:28:52.948698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.948894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.948918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.657 qpair failed and we were unable to recover it.
00:28:18.657 [2024-04-25 03:28:52.949083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.949250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.949273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.657 qpair failed and we were unable to recover it.
00:28:18.657 [2024-04-25 03:28:52.949478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.949645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.949674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.657 qpair failed and we were unable to recover it.
00:28:18.657 [2024-04-25 03:28:52.949883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.950084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.950109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.657 qpair failed and we were unable to recover it.
00:28:18.657 [2024-04-25 03:28:52.950284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.950481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.950505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.657 qpair failed and we were unable to recover it.
00:28:18.657 [2024-04-25 03:28:52.950684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.950883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.950907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.657 qpair failed and we were unable to recover it.
00:28:18.657 [2024-04-25 03:28:52.951078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.951272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.951297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.657 qpair failed and we were unable to recover it.
00:28:18.657 [2024-04-25 03:28:52.951478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.951671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.951700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.657 qpair failed and we were unable to recover it.
00:28:18.657 [2024-04-25 03:28:52.951873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.952053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.952077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.657 qpair failed and we were unable to recover it.
00:28:18.657 [2024-04-25 03:28:52.952298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.952485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.952509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.657 qpair failed and we were unable to recover it.
00:28:18.657 [2024-04-25 03:28:52.952702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.952868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.952892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.657 qpair failed and we were unable to recover it.
00:28:18.657 [2024-04-25 03:28:52.953091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.953286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.953311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.657 qpair failed and we were unable to recover it.
00:28:18.657 [2024-04-25 03:28:52.953516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.953689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.953715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.657 qpair failed and we were unable to recover it.
00:28:18.657 [2024-04-25 03:28:52.953918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.954087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.954112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.657 qpair failed and we were unable to recover it.
00:28:18.657 [2024-04-25 03:28:52.954312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.954474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.954499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.657 qpair failed and we were unable to recover it.
00:28:18.657 [2024-04-25 03:28:52.954712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.954886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.954911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.657 qpair failed and we were unable to recover it.
00:28:18.657 [2024-04-25 03:28:52.955106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.955312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.955338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.657 qpair failed and we were unable to recover it.
00:28:18.657 [2024-04-25 03:28:52.955624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.955841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.955866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.657 qpair failed and we were unable to recover it.
00:28:18.657 [2024-04-25 03:28:52.956054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.956246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.956271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.657 qpair failed and we were unable to recover it.
00:28:18.657 [2024-04-25 03:28:52.956558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.956767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.956792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.657 qpair failed and we were unable to recover it.
00:28:18.657 [2024-04-25 03:28:52.956967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.957278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.957303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.657 qpair failed and we were unable to recover it.
00:28:18.657 [2024-04-25 03:28:52.957488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.957709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.957734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.657 qpair failed and we were unable to recover it.
00:28:18.657 [2024-04-25 03:28:52.957904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.958073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.958098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.657 qpair failed and we were unable to recover it.
00:28:18.657 [2024-04-25 03:28:52.958297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.958500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.958525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.657 qpair failed and we were unable to recover it.
00:28:18.657 [2024-04-25 03:28:52.958713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.958894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.958922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.657 qpair failed and we were unable to recover it.
00:28:18.657 [2024-04-25 03:28:52.959103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.959304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.959330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.657 qpair failed and we were unable to recover it.
00:28:18.657 [2024-04-25 03:28:52.959519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.959684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.959709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.657 qpair failed and we were unable to recover it.
00:28:18.657 [2024-04-25 03:28:52.959920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.960116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.960142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.657 qpair failed and we were unable to recover it.
00:28:18.657 [2024-04-25 03:28:52.960322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.960597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.960622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.657 qpair failed and we were unable to recover it.
00:28:18.657 [2024-04-25 03:28:52.960806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.960997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.961022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.657 qpair failed and we were unable to recover it.
00:28:18.657 [2024-04-25 03:28:52.961193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.961406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.961431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.657 qpair failed and we were unable to recover it.
00:28:18.657 [2024-04-25 03:28:52.961605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.961795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.961820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.657 qpair failed and we were unable to recover it.
00:28:18.657 [2024-04-25 03:28:52.962167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.962350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.962376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.657 qpair failed and we were unable to recover it.
00:28:18.657 [2024-04-25 03:28:52.962703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.962873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.962898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.657 qpair failed and we were unable to recover it.
00:28:18.657 [2024-04-25 03:28:52.963063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.963388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.963414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.657 qpair failed and we were unable to recover it.
00:28:18.657 [2024-04-25 03:28:52.963590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.963761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.963786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.657 qpair failed and we were unable to recover it.
00:28:18.657 [2024-04-25 03:28:52.963956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.964120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.964145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.657 qpair failed and we were unable to recover it.
00:28:18.657 [2024-04-25 03:28:52.964319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.964491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.964516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.657 qpair failed and we were unable to recover it.
00:28:18.657 [2024-04-25 03:28:52.964706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.964888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.964913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.657 qpair failed and we were unable to recover it.
00:28:18.657 [2024-04-25 03:28:52.965086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.965292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.965317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.657 qpair failed and we were unable to recover it.
00:28:18.657 [2024-04-25 03:28:52.965495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.965662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.965688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.657 qpair failed and we were unable to recover it.
00:28:18.657 [2024-04-25 03:28:52.965884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.966060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.966086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.657 qpair failed and we were unable to recover it.
00:28:18.657 [2024-04-25 03:28:52.966252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.966460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.966485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.657 qpair failed and we were unable to recover it.
00:28:18.657 [2024-04-25 03:28:52.966652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.966849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.657 [2024-04-25 03:28:52.966878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.657 qpair failed and we were unable to recover it.
00:28:18.658 [2024-04-25 03:28:52.967071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.658 [2024-04-25 03:28:52.967257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.658 [2024-04-25 03:28:52.967282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.658 qpair failed and we were unable to recover it.
00:28:18.658 [2024-04-25 03:28:52.967479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.658 [2024-04-25 03:28:52.967653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.658 [2024-04-25 03:28:52.967682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.658 qpair failed and we were unable to recover it.
00:28:18.658 [2024-04-25 03:28:52.967864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.658 [2024-04-25 03:28:52.968066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.658 [2024-04-25 03:28:52.968091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.658 qpair failed and we were unable to recover it.
00:28:18.658 [2024-04-25 03:28:52.968301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.658 [2024-04-25 03:28:52.968484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.658 [2024-04-25 03:28:52.968510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.658 qpair failed and we were unable to recover it.
00:28:18.658 [2024-04-25 03:28:52.968750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.658 [2024-04-25 03:28:52.968935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.658 [2024-04-25 03:28:52.968961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.658 qpair failed and we were unable to recover it.
00:28:18.658 [2024-04-25 03:28:52.969277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.658 [2024-04-25 03:28:52.969472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.658 [2024-04-25 03:28:52.969497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.658 qpair failed and we were unable to recover it.
00:28:18.658 [2024-04-25 03:28:52.969674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.658 [2024-04-25 03:28:52.969893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.658 [2024-04-25 03:28:52.969930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.658 qpair failed and we were unable to recover it.
00:28:18.658 [2024-04-25 03:28:52.970113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.658 [2024-04-25 03:28:52.970321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.658 [2024-04-25 03:28:52.970346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.658 qpair failed and we were unable to recover it.
00:28:18.658 [2024-04-25 03:28:52.970533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.658 [2024-04-25 03:28:52.970699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.658 [2024-04-25 03:28:52.970725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.658 qpair failed and we were unable to recover it.
00:28:18.658 [2024-04-25 03:28:52.970904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.658 [2024-04-25 03:28:52.971126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.658 [2024-04-25 03:28:52.971155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.658 qpair failed and we were unable to recover it.
00:28:18.658 [2024-04-25 03:28:52.971357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.658 [2024-04-25 03:28:52.971557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.658 [2024-04-25 03:28:52.971582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.658 qpair failed and we were unable to recover it.
00:28:18.658 [2024-04-25 03:28:52.971750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.658 [2024-04-25 03:28:52.971931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.658 [2024-04-25 03:28:52.971957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.658 qpair failed and we were unable to recover it.
00:28:18.658 [2024-04-25 03:28:52.972155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.658 [2024-04-25 03:28:52.972354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.658 [2024-04-25 03:28:52.972379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.658 qpair failed and we were unable to recover it. 00:28:18.658 [2024-04-25 03:28:52.972581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.658 [2024-04-25 03:28:52.972789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.658 [2024-04-25 03:28:52.972815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.658 qpair failed and we were unable to recover it. 00:28:18.658 [2024-04-25 03:28:52.973005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.658 [2024-04-25 03:28:52.973197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.658 [2024-04-25 03:28:52.973222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.658 qpair failed and we were unable to recover it. 00:28:18.658 [2024-04-25 03:28:52.973422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.658 [2024-04-25 03:28:52.973617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.658 [2024-04-25 03:28:52.973648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.658 qpair failed and we were unable to recover it. 
00:28:18.658 [2024-04-25 03:28:52.973970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.658 [2024-04-25 03:28:52.974181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.658 [2024-04-25 03:28:52.974206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.658 qpair failed and we were unable to recover it. 00:28:18.658 [2024-04-25 03:28:52.974391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.658 [2024-04-25 03:28:52.974595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.658 [2024-04-25 03:28:52.974620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.658 qpair failed and we were unable to recover it. 00:28:18.658 [2024-04-25 03:28:52.974796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.658 [2024-04-25 03:28:52.974992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.658 [2024-04-25 03:28:52.975016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.658 qpair failed and we were unable to recover it. 00:28:18.658 [2024-04-25 03:28:52.975233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.658 [2024-04-25 03:28:52.975415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.658 [2024-04-25 03:28:52.975440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.658 qpair failed and we were unable to recover it. 
00:28:18.658 [2024-04-25 03:28:52.975645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.658 [2024-04-25 03:28:52.975805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.658 [2024-04-25 03:28:52.975829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.658 qpair failed and we were unable to recover it. 00:28:18.658 [2024-04-25 03:28:52.975995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.658 [2024-04-25 03:28:52.976162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.658 [2024-04-25 03:28:52.976189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.658 qpair failed and we were unable to recover it. 00:28:18.658 [2024-04-25 03:28:52.976359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.658 [2024-04-25 03:28:52.976524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.658 [2024-04-25 03:28:52.976549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.658 qpair failed and we were unable to recover it. 00:28:18.658 [2024-04-25 03:28:52.976741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.658 [2024-04-25 03:28:52.976937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.658 [2024-04-25 03:28:52.976962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.658 qpair failed and we were unable to recover it. 
00:28:18.658 [2024-04-25 03:28:52.977159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.658 [2024-04-25 03:28:52.977350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.658 [2024-04-25 03:28:52.977375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.658 qpair failed and we were unable to recover it. 00:28:18.658 [2024-04-25 03:28:52.977537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.658 [2024-04-25 03:28:52.977761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.658 [2024-04-25 03:28:52.977787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.658 qpair failed and we were unable to recover it. 00:28:18.658 [2024-04-25 03:28:52.977984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.658 [2024-04-25 03:28:52.978171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.658 [2024-04-25 03:28:52.978196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.658 qpair failed and we were unable to recover it. 00:28:18.658 [2024-04-25 03:28:52.978356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.658 [2024-04-25 03:28:52.978562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.658 [2024-04-25 03:28:52.978588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.658 qpair failed and we were unable to recover it. 
00:28:18.658 [2024-04-25 03:28:52.978797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.658 [2024-04-25 03:28:52.978971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.658 [2024-04-25 03:28:52.978997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.658 qpair failed and we were unable to recover it. 00:28:18.658 [2024-04-25 03:28:52.979187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.658 [2024-04-25 03:28:52.979385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.658 [2024-04-25 03:28:52.979409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.658 qpair failed and we were unable to recover it. 00:28:18.658 [2024-04-25 03:28:52.979611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.658 [2024-04-25 03:28:52.979790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.658 [2024-04-25 03:28:52.979815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.658 qpair failed and we were unable to recover it. 00:28:18.658 [2024-04-25 03:28:52.979990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.658 [2024-04-25 03:28:52.980191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.658 [2024-04-25 03:28:52.980217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.658 qpair failed and we were unable to recover it. 
00:28:18.658 [2024-04-25 03:28:52.980416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.658 [2024-04-25 03:28:52.980607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.658 [2024-04-25 03:28:52.980639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.658 qpair failed and we were unable to recover it. 00:28:18.658 [2024-04-25 03:28:52.980812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.658 [2024-04-25 03:28:52.980978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.658 [2024-04-25 03:28:52.981002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.658 qpair failed and we were unable to recover it. 00:28:18.658 [2024-04-25 03:28:52.981170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.658 [2024-04-25 03:28:52.981341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.658 [2024-04-25 03:28:52.981365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.658 qpair failed and we were unable to recover it. 00:28:18.658 [2024-04-25 03:28:52.981559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.658 [2024-04-25 03:28:52.981775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.658 [2024-04-25 03:28:52.981801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.658 qpair failed and we were unable to recover it. 
00:28:18.658 [2024-04-25 03:28:52.982024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.658 [2024-04-25 03:28:52.982200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.658 [2024-04-25 03:28:52.982224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.658 qpair failed and we were unable to recover it. 00:28:18.658 [2024-04-25 03:28:52.982432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.658 [2024-04-25 03:28:52.982637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.658 [2024-04-25 03:28:52.982663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.658 qpair failed and we were unable to recover it. 00:28:18.658 [2024-04-25 03:28:52.982831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.658 [2024-04-25 03:28:52.983008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.658 [2024-04-25 03:28:52.983036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.658 qpair failed and we were unable to recover it. 00:28:18.658 [2024-04-25 03:28:52.983252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.658 [2024-04-25 03:28:52.983443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.658 [2024-04-25 03:28:52.983471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.658 qpair failed and we were unable to recover it. 
00:28:18.658 [2024-04-25 03:28:52.983666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.658 [2024-04-25 03:28:52.983855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.658 [2024-04-25 03:28:52.983880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.658 qpair failed and we were unable to recover it. 00:28:18.658 [2024-04-25 03:28:52.984077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.658 [2024-04-25 03:28:52.984252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.658 [2024-04-25 03:28:52.984276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.658 qpair failed and we were unable to recover it. 00:28:18.658 [2024-04-25 03:28:52.984564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.658 [2024-04-25 03:28:52.984763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.658 [2024-04-25 03:28:52.984789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.658 qpair failed and we were unable to recover it. 00:28:18.658 [2024-04-25 03:28:52.985007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.658 [2024-04-25 03:28:52.985197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.658 [2024-04-25 03:28:52.985222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.658 qpair failed and we were unable to recover it. 
00:28:18.658 [2024-04-25 03:28:52.985421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.658 [2024-04-25 03:28:52.985586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.658 [2024-04-25 03:28:52.985610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.658 qpair failed and we were unable to recover it. 00:28:18.658 [2024-04-25 03:28:52.985796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.658 [2024-04-25 03:28:52.985998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.658 [2024-04-25 03:28:52.986023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.658 qpair failed and we were unable to recover it. 00:28:18.658 [2024-04-25 03:28:52.986207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.658 [2024-04-25 03:28:52.986435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.658 [2024-04-25 03:28:52.986460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.658 qpair failed and we were unable to recover it. 00:28:18.658 [2024-04-25 03:28:52.986626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.658 [2024-04-25 03:28:52.986827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.658 [2024-04-25 03:28:52.986852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.658 qpair failed and we were unable to recover it. 
00:28:18.658 [2024-04-25 03:28:52.987022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.658 [2024-04-25 03:28:52.987243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.658 [2024-04-25 03:28:52.987267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.658 qpair failed and we were unable to recover it. 00:28:18.658 [2024-04-25 03:28:52.987504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.658 [2024-04-25 03:28:52.987694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.658 [2024-04-25 03:28:52.987720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.658 qpair failed and we were unable to recover it. 00:28:18.658 [2024-04-25 03:28:52.987919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.658 [2024-04-25 03:28:52.988095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.658 [2024-04-25 03:28:52.988129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.658 qpair failed and we were unable to recover it. 00:28:18.658 [2024-04-25 03:28:52.988294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.658 [2024-04-25 03:28:52.988518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.659 [2024-04-25 03:28:52.988543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.659 qpair failed and we were unable to recover it. 
00:28:18.659 [2024-04-25 03:28:52.988739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.659 [2024-04-25 03:28:52.988910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.659 [2024-04-25 03:28:52.988935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.659 qpair failed and we were unable to recover it. 00:28:18.659 [2024-04-25 03:28:52.989122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.659 [2024-04-25 03:28:52.989309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.659 [2024-04-25 03:28:52.989334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.659 qpair failed and we were unable to recover it. 00:28:18.659 [2024-04-25 03:28:52.989520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.659 [2024-04-25 03:28:52.989718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.659 [2024-04-25 03:28:52.989743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.659 qpair failed and we were unable to recover it. 00:28:18.659 [2024-04-25 03:28:52.989939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.659 [2024-04-25 03:28:52.990108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.659 [2024-04-25 03:28:52.990134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.659 qpair failed and we were unable to recover it. 
00:28:18.659 [2024-04-25 03:28:52.990301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.659 [2024-04-25 03:28:52.990498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.659 [2024-04-25 03:28:52.990522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.659 qpair failed and we were unable to recover it. 00:28:18.659 [2024-04-25 03:28:52.990697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.659 [2024-04-25 03:28:52.990872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.659 [2024-04-25 03:28:52.990898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.659 qpair failed and we were unable to recover it. 00:28:18.659 [2024-04-25 03:28:52.991072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.659 [2024-04-25 03:28:52.991267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.659 [2024-04-25 03:28:52.991291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.659 qpair failed and we were unable to recover it. 00:28:18.659 [2024-04-25 03:28:52.991491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.659 [2024-04-25 03:28:52.991700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.659 [2024-04-25 03:28:52.991726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.659 qpair failed and we were unable to recover it. 
00:28:18.659 [2024-04-25 03:28:52.991892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.659 [2024-04-25 03:28:52.992082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.659 [2024-04-25 03:28:52.992107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.659 qpair failed and we were unable to recover it. 00:28:18.659 [2024-04-25 03:28:52.992288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.659 [2024-04-25 03:28:52.992478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.659 [2024-04-25 03:28:52.992503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.659 qpair failed and we were unable to recover it. 00:28:18.659 [2024-04-25 03:28:52.992680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.659 [2024-04-25 03:28:52.992848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.659 [2024-04-25 03:28:52.992873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.659 qpair failed and we were unable to recover it. 00:28:18.659 [2024-04-25 03:28:52.993041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.659 [2024-04-25 03:28:52.993207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.659 [2024-04-25 03:28:52.993233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.659 qpair failed and we were unable to recover it. 
00:28:18.659 [2024-04-25 03:28:52.993408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.659 [2024-04-25 03:28:52.993598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.659 [2024-04-25 03:28:52.993624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.659 qpair failed and we were unable to recover it. 00:28:18.659 [2024-04-25 03:28:52.993817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.659 [2024-04-25 03:28:52.994009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.659 [2024-04-25 03:28:52.994034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.659 qpair failed and we were unable to recover it. 00:28:18.659 [2024-04-25 03:28:52.994227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.659 [2024-04-25 03:28:52.994441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.659 [2024-04-25 03:28:52.994466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.659 qpair failed and we were unable to recover it. 00:28:18.659 [2024-04-25 03:28:52.994688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.659 [2024-04-25 03:28:52.994894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.659 [2024-04-25 03:28:52.994920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.659 qpair failed and we were unable to recover it. 
00:28:18.659 [2024-04-25 03:28:52.995109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.659 [2024-04-25 03:28:52.995320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.659 [2024-04-25 03:28:52.995345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.659 qpair failed and we were unable to recover it. 00:28:18.659 [2024-04-25 03:28:52.995539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.659 [2024-04-25 03:28:52.995743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.659 [2024-04-25 03:28:52.995769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.659 qpair failed and we were unable to recover it. 00:28:18.659 [2024-04-25 03:28:52.995948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.659 [2024-04-25 03:28:52.996112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.659 [2024-04-25 03:28:52.996137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.659 qpair failed and we were unable to recover it. 00:28:18.659 [2024-04-25 03:28:52.996301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.659 [2024-04-25 03:28:52.996468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.659 [2024-04-25 03:28:52.996493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.659 qpair failed and we were unable to recover it. 
00:28:18.661 [2024-04-25 03:28:53.031762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.031947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.031973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.661 qpair failed and we were unable to recover it. 00:28:18.661 [2024-04-25 03:28:53.032153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.032347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.032372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.661 qpair failed and we were unable to recover it. 00:28:18.661 [2024-04-25 03:28:53.032565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.032738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.032763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.661 qpair failed and we were unable to recover it. 00:28:18.661 [2024-04-25 03:28:53.032937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.033107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.033134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.661 qpair failed and we were unable to recover it. 
00:28:18.661 [2024-04-25 03:28:53.033336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.033515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.033540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.661 qpair failed and we were unable to recover it. 00:28:18.661 [2024-04-25 03:28:53.033736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.033931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.033957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.661 qpair failed and we were unable to recover it. 00:28:18.661 [2024-04-25 03:28:53.034156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.034332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.034358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.661 qpair failed and we were unable to recover it. 00:28:18.661 [2024-04-25 03:28:53.034542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.034714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.034739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.661 qpair failed and we were unable to recover it. 
00:28:18.661 [2024-04-25 03:28:53.034923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.035091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.035117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.661 qpair failed and we were unable to recover it. 00:28:18.661 [2024-04-25 03:28:53.035328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.035503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.035529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.661 qpair failed and we were unable to recover it. 00:28:18.661 [2024-04-25 03:28:53.035732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.035906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.035940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.661 qpair failed and we were unable to recover it. 00:28:18.661 [2024-04-25 03:28:53.036137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.036353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.036378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.661 qpair failed and we were unable to recover it. 
00:28:18.661 [2024-04-25 03:28:53.036550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.036783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.036808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.661 qpair failed and we were unable to recover it. 00:28:18.661 [2024-04-25 03:28:53.036974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.037184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.037210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.661 qpair failed and we were unable to recover it. 00:28:18.661 [2024-04-25 03:28:53.037375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.037609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.037640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.661 qpair failed and we were unable to recover it. 00:28:18.661 [2024-04-25 03:28:53.037812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.037985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.038011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.661 qpair failed and we were unable to recover it. 
00:28:18.661 [2024-04-25 03:28:53.038200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.038369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.038394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.661 qpair failed and we were unable to recover it. 00:28:18.661 [2024-04-25 03:28:53.038571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.038768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.038794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.661 qpair failed and we were unable to recover it. 00:28:18.661 [2024-04-25 03:28:53.038999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.039218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.039243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.661 qpair failed and we were unable to recover it. 00:28:18.661 [2024-04-25 03:28:53.039414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.039603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.039634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.661 qpair failed and we were unable to recover it. 
00:28:18.661 [2024-04-25 03:28:53.039804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.039969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.039995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.661 qpair failed and we were unable to recover it. 00:28:18.661 [2024-04-25 03:28:53.040182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.040377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.040403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.661 qpair failed and we were unable to recover it. 00:28:18.661 [2024-04-25 03:28:53.040561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.040749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.040775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.661 qpair failed and we were unable to recover it. 00:28:18.661 [2024-04-25 03:28:53.040952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.041122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.041148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.661 qpair failed and we were unable to recover it. 
00:28:18.661 [2024-04-25 03:28:53.041370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.041528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.041555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.661 qpair failed and we were unable to recover it. 00:28:18.661 [2024-04-25 03:28:53.041726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.041929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.041960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.661 qpair failed and we were unable to recover it. 00:28:18.661 [2024-04-25 03:28:53.042133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.042327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.042353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.661 qpair failed and we were unable to recover it. 00:28:18.661 [2024-04-25 03:28:53.042517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.042731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.042757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.661 qpair failed and we were unable to recover it. 
00:28:18.661 [2024-04-25 03:28:53.042926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.043114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.043140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.661 qpair failed and we were unable to recover it. 00:28:18.661 [2024-04-25 03:28:53.043306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.043476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.043501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.661 qpair failed and we were unable to recover it. 00:28:18.661 [2024-04-25 03:28:53.043716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.043877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.043902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.661 qpair failed and we were unable to recover it. 00:28:18.661 [2024-04-25 03:28:53.044093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.044259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.044284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.661 qpair failed and we were unable to recover it. 
00:28:18.661 [2024-04-25 03:28:53.044478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.044650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.044678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.661 qpair failed and we were unable to recover it. 00:28:18.661 [2024-04-25 03:28:53.044849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.045060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.045085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.661 qpair failed and we were unable to recover it. 00:28:18.661 [2024-04-25 03:28:53.045244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.045453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.045479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.661 qpair failed and we were unable to recover it. 00:28:18.661 [2024-04-25 03:28:53.045650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.045840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.045866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.661 qpair failed and we were unable to recover it. 
00:28:18.661 [2024-04-25 03:28:53.046061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.046254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.046279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.661 qpair failed and we were unable to recover it. 00:28:18.661 [2024-04-25 03:28:53.046502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.046702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.046733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.661 qpair failed and we were unable to recover it. 00:28:18.661 [2024-04-25 03:28:53.046908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.047067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.047092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.661 qpair failed and we were unable to recover it. 00:28:18.661 [2024-04-25 03:28:53.047288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.047456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.047481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.661 qpair failed and we were unable to recover it. 
00:28:18.661 [2024-04-25 03:28:53.047644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.047824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.047849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.661 qpair failed and we were unable to recover it. 00:28:18.661 [2024-04-25 03:28:53.048042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.048234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.048260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.661 qpair failed and we were unable to recover it. 00:28:18.661 [2024-04-25 03:28:53.048453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.048647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.661 [2024-04-25 03:28:53.048685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.661 qpair failed and we were unable to recover it. 00:28:18.661 [2024-04-25 03:28:53.048908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.662 [2024-04-25 03:28:53.049070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.662 [2024-04-25 03:28:53.049096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.662 qpair failed and we were unable to recover it. 
00:28:18.662 [2024-04-25 03:28:53.049256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.662 [2024-04-25 03:28:53.049427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.662 [2024-04-25 03:28:53.049453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.662 qpair failed and we were unable to recover it. 00:28:18.662 [2024-04-25 03:28:53.049649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.662 [2024-04-25 03:28:53.049821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.662 [2024-04-25 03:28:53.049848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.662 qpair failed and we were unable to recover it. 00:28:18.662 [2024-04-25 03:28:53.050018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.662 [2024-04-25 03:28:53.050211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.662 [2024-04-25 03:28:53.050236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.662 qpair failed and we were unable to recover it. 00:28:18.662 [2024-04-25 03:28:53.050465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.662 [2024-04-25 03:28:53.050637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.662 [2024-04-25 03:28:53.050663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.662 qpair failed and we were unable to recover it. 
00:28:18.662 [2024-04-25 03:28:53.050845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.662 [2024-04-25 03:28:53.051037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.662 [2024-04-25 03:28:53.051063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.662 qpair failed and we were unable to recover it. 00:28:18.662 [2024-04-25 03:28:53.051228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.662 [2024-04-25 03:28:53.051422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.662 [2024-04-25 03:28:53.051447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.662 qpair failed and we were unable to recover it. 00:28:18.662 [2024-04-25 03:28:53.051646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.662 [2024-04-25 03:28:53.051821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.662 [2024-04-25 03:28:53.051846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.662 qpair failed and we were unable to recover it. 00:28:18.662 [2024-04-25 03:28:53.052044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.662 [2024-04-25 03:28:53.052237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.662 [2024-04-25 03:28:53.052263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.662 qpair failed and we were unable to recover it. 
00:28:18.662 [2024-04-25 03:28:53.052432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.662 [2024-04-25 03:28:53.052622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.662 [2024-04-25 03:28:53.052660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.662 qpair failed and we were unable to recover it. 00:28:18.662 [2024-04-25 03:28:53.052839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.662 [2024-04-25 03:28:53.053038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.662 [2024-04-25 03:28:53.053064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.662 qpair failed and we were unable to recover it. 00:28:18.662 [2024-04-25 03:28:53.053235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.662 [2024-04-25 03:28:53.053425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.662 [2024-04-25 03:28:53.053451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.662 qpair failed and we were unable to recover it. 00:28:18.662 [2024-04-25 03:28:53.053618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.662 [2024-04-25 03:28:53.053802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.662 [2024-04-25 03:28:53.053828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.662 qpair failed and we were unable to recover it. 
00:28:18.662 [2024-04-25 03:28:53.054017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.662 [2024-04-25 03:28:53.054211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.662 [2024-04-25 03:28:53.054237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.662 qpair failed and we were unable to recover it. 00:28:18.662 [2024-04-25 03:28:53.054429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.662 [2024-04-25 03:28:53.054598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.662 [2024-04-25 03:28:53.054623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.662 qpair failed and we were unable to recover it. 00:28:18.662 [2024-04-25 03:28:53.054795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.662 [2024-04-25 03:28:53.054999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.662 [2024-04-25 03:28:53.055025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.662 qpair failed and we were unable to recover it. 00:28:18.662 [2024-04-25 03:28:53.055217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.662 [2024-04-25 03:28:53.055378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.662 [2024-04-25 03:28:53.055405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.662 qpair failed and we were unable to recover it. 
[... the same connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it" sequence repeats for tqpair=0xd4cf30 (addr=10.0.0.2, port=4420), timestamps 03:28:53.055633 through 03:28:53.089012; repeats omitted ...]
00:28:18.664 [2024-04-25 03:28:53.089175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.664 [2024-04-25 03:28:53.089366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.664 [2024-04-25 03:28:53.089392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.664 qpair failed and we were unable to recover it. 00:28:18.664 [2024-04-25 03:28:53.089583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.664 [2024-04-25 03:28:53.089767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.664 [2024-04-25 03:28:53.089793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.664 qpair failed and we were unable to recover it. 00:28:18.664 [2024-04-25 03:28:53.089955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.664 [2024-04-25 03:28:53.090121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.664 [2024-04-25 03:28:53.090146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.664 qpair failed and we were unable to recover it. 00:28:18.664 [2024-04-25 03:28:53.090306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.664 [2024-04-25 03:28:53.090476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.664 [2024-04-25 03:28:53.090502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.664 qpair failed and we were unable to recover it. 
00:28:18.664 [2024-04-25 03:28:53.090693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.664 [2024-04-25 03:28:53.090892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.664 [2024-04-25 03:28:53.090918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.664 qpair failed and we were unable to recover it. 00:28:18.664 [2024-04-25 03:28:53.091108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.664 [2024-04-25 03:28:53.091303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.664 [2024-04-25 03:28:53.091329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.664 qpair failed and we were unable to recover it. 00:28:18.664 [2024-04-25 03:28:53.091547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.664 [2024-04-25 03:28:53.091749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.664 [2024-04-25 03:28:53.091776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.664 qpair failed and we were unable to recover it. 00:28:18.664 [2024-04-25 03:28:53.091942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.664 [2024-04-25 03:28:53.092107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.664 [2024-04-25 03:28:53.092133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.664 qpair failed and we were unable to recover it. 
00:28:18.664 [2024-04-25 03:28:53.092303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.664 [2024-04-25 03:28:53.092472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.664 [2024-04-25 03:28:53.092498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.664 qpair failed and we were unable to recover it. 00:28:18.664 [2024-04-25 03:28:53.092699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.664 [2024-04-25 03:28:53.092862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.664 [2024-04-25 03:28:53.092888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.664 qpair failed and we were unable to recover it. 00:28:18.664 [2024-04-25 03:28:53.093056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.664 [2024-04-25 03:28:53.093243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.664 [2024-04-25 03:28:53.093269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.664 qpair failed and we were unable to recover it. 00:28:18.664 [2024-04-25 03:28:53.093458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.664 [2024-04-25 03:28:53.093625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.664 [2024-04-25 03:28:53.093660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.664 qpair failed and we were unable to recover it. 
00:28:18.664 [2024-04-25 03:28:53.093864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.664 [2024-04-25 03:28:53.094065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.664 [2024-04-25 03:28:53.094092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.664 qpair failed and we were unable to recover it. 00:28:18.664 [2024-04-25 03:28:53.094285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.664 [2024-04-25 03:28:53.094475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.664 [2024-04-25 03:28:53.094501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.664 qpair failed and we were unable to recover it. 00:28:18.664 [2024-04-25 03:28:53.094660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.664 [2024-04-25 03:28:53.094861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.664 [2024-04-25 03:28:53.094887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.664 qpair failed and we were unable to recover it. 00:28:18.664 [2024-04-25 03:28:53.095078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.664 [2024-04-25 03:28:53.095238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.664 [2024-04-25 03:28:53.095263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.664 qpair failed and we were unable to recover it. 
00:28:18.664 [2024-04-25 03:28:53.095424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.664 [2024-04-25 03:28:53.095619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.664 [2024-04-25 03:28:53.095650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.664 qpair failed and we were unable to recover it. 00:28:18.664 [2024-04-25 03:28:53.095819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.664 [2024-04-25 03:28:53.095991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.664 [2024-04-25 03:28:53.096017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.664 qpair failed and we were unable to recover it. 00:28:18.664 [2024-04-25 03:28:53.096180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.664 [2024-04-25 03:28:53.096372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.664 [2024-04-25 03:28:53.096398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.664 qpair failed and we were unable to recover it. 00:28:18.664 [2024-04-25 03:28:53.096598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.664 [2024-04-25 03:28:53.096777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.664 [2024-04-25 03:28:53.096803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.664 qpair failed and we were unable to recover it. 
00:28:18.664 [2024-04-25 03:28:53.096999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.664 [2024-04-25 03:28:53.097201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.664 [2024-04-25 03:28:53.097227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.664 qpair failed and we were unable to recover it. 00:28:18.664 [2024-04-25 03:28:53.097384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.664 [2024-04-25 03:28:53.097548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.664 [2024-04-25 03:28:53.097573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.664 qpair failed and we were unable to recover it. 00:28:18.664 [2024-04-25 03:28:53.097769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.664 [2024-04-25 03:28:53.097930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.664 [2024-04-25 03:28:53.097955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.664 qpair failed and we were unable to recover it. 00:28:18.664 [2024-04-25 03:28:53.098157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.664 [2024-04-25 03:28:53.098372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.664 [2024-04-25 03:28:53.098398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.664 qpair failed and we were unable to recover it. 
00:28:18.664 [2024-04-25 03:28:53.098583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.664 [2024-04-25 03:28:53.098783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.664 [2024-04-25 03:28:53.098809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.664 qpair failed and we were unable to recover it. 00:28:18.664 [2024-04-25 03:28:53.099006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.664 [2024-04-25 03:28:53.099206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.664 [2024-04-25 03:28:53.099231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.664 qpair failed and we were unable to recover it. 00:28:18.664 [2024-04-25 03:28:53.099430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.664 [2024-04-25 03:28:53.099594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.664 [2024-04-25 03:28:53.099620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.664 qpair failed and we were unable to recover it. 00:28:18.664 [2024-04-25 03:28:53.099802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.664 [2024-04-25 03:28:53.099963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.664 [2024-04-25 03:28:53.099989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.664 qpair failed and we were unable to recover it. 
00:28:18.664 [2024-04-25 03:28:53.100152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.664 [2024-04-25 03:28:53.100354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.664 [2024-04-25 03:28:53.100380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.664 qpair failed and we were unable to recover it. 00:28:18.664 [2024-04-25 03:28:53.100569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.664 [2024-04-25 03:28:53.100774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.664 [2024-04-25 03:28:53.100801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.664 qpair failed and we were unable to recover it. 00:28:18.664 [2024-04-25 03:28:53.100969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.664 [2024-04-25 03:28:53.101159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.664 [2024-04-25 03:28:53.101184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.664 qpair failed and we were unable to recover it. 00:28:18.664 [2024-04-25 03:28:53.101374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.664 [2024-04-25 03:28:53.101542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.664 [2024-04-25 03:28:53.101568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.664 qpair failed and we were unable to recover it. 
00:28:18.664 [2024-04-25 03:28:53.101764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.664 [2024-04-25 03:28:53.101938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.664 [2024-04-25 03:28:53.101963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.664 qpair failed and we were unable to recover it. 00:28:18.664 [2024-04-25 03:28:53.102128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.664 [2024-04-25 03:28:53.102323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.664 [2024-04-25 03:28:53.102348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.664 qpair failed and we were unable to recover it. 00:28:18.664 [2024-04-25 03:28:53.102545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.664 [2024-04-25 03:28:53.102737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.664 [2024-04-25 03:28:53.102763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.664 qpair failed and we were unable to recover it. 00:28:18.664 [2024-04-25 03:28:53.102965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.664 [2024-04-25 03:28:53.103152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.664 [2024-04-25 03:28:53.103177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.664 qpair failed and we were unable to recover it. 
00:28:18.664 [2024-04-25 03:28:53.103353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.664 [2024-04-25 03:28:53.103536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.664 [2024-04-25 03:28:53.103566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.664 qpair failed and we were unable to recover it. 00:28:18.664 [2024-04-25 03:28:53.103745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.664 [2024-04-25 03:28:53.103908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.664 [2024-04-25 03:28:53.103934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.664 qpair failed and we were unable to recover it. 00:28:18.664 [2024-04-25 03:28:53.104124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.664 [2024-04-25 03:28:53.104323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.664 [2024-04-25 03:28:53.104349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.664 qpair failed and we were unable to recover it. 00:28:18.664 [2024-04-25 03:28:53.104522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.664 [2024-04-25 03:28:53.104678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.664 [2024-04-25 03:28:53.104703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.664 qpair failed and we were unable to recover it. 
00:28:18.664 [2024-04-25 03:28:53.104912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.664 [2024-04-25 03:28:53.105108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.105134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.665 qpair failed and we were unable to recover it. 00:28:18.665 [2024-04-25 03:28:53.105326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.105516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.105541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.665 qpair failed and we were unable to recover it. 00:28:18.665 [2024-04-25 03:28:53.105714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.105876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.105902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.665 qpair failed and we were unable to recover it. 00:28:18.665 [2024-04-25 03:28:53.106091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.106256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.106282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.665 qpair failed and we were unable to recover it. 
00:28:18.665 [2024-04-25 03:28:53.106501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.106664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.106690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.665 qpair failed and we were unable to recover it. 00:28:18.665 [2024-04-25 03:28:53.106887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.107084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.107110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.665 qpair failed and we were unable to recover it. 00:28:18.665 [2024-04-25 03:28:53.107306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.107505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.107531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.665 qpair failed and we were unable to recover it. 00:28:18.665 [2024-04-25 03:28:53.107704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.107875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.107901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.665 qpair failed and we were unable to recover it. 
00:28:18.665 [2024-04-25 03:28:53.108073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.108265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.108292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.665 qpair failed and we were unable to recover it. 00:28:18.665 [2024-04-25 03:28:53.108467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.108637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.108665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.665 qpair failed and we were unable to recover it. 00:28:18.665 [2024-04-25 03:28:53.108888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.109087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.109113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.665 qpair failed and we were unable to recover it. 00:28:18.665 [2024-04-25 03:28:53.109306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.109465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.109491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.665 qpair failed and we were unable to recover it. 
00:28:18.665 [2024-04-25 03:28:53.109686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.109859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.109885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.665 qpair failed and we were unable to recover it. 00:28:18.665 [2024-04-25 03:28:53.110109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.110295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.110321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.665 qpair failed and we were unable to recover it. 00:28:18.665 [2024-04-25 03:28:53.110483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.110650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.110677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.665 qpair failed and we were unable to recover it. 00:28:18.665 [2024-04-25 03:28:53.110874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.111038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.111064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.665 qpair failed and we were unable to recover it. 
00:28:18.665 [2024-04-25 03:28:53.111249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.111418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.111445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.665 qpair failed and we were unable to recover it. 00:28:18.665 [2024-04-25 03:28:53.111619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.111854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.111880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.665 qpair failed and we were unable to recover it. 00:28:18.665 [2024-04-25 03:28:53.112045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.112237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.112263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.665 qpair failed and we were unable to recover it. 00:28:18.665 [2024-04-25 03:28:53.112450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.112648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.112675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.665 qpair failed and we were unable to recover it. 
00:28:18.665 [2024-04-25 03:28:53.112842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.113011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.113039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.665 qpair failed and we were unable to recover it. 00:28:18.665 [2024-04-25 03:28:53.113212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.113375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.113401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.665 qpair failed and we were unable to recover it. 00:28:18.665 [2024-04-25 03:28:53.113600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.113843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.113869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.665 qpair failed and we were unable to recover it. 00:28:18.665 [2024-04-25 03:28:53.114087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.114258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.114284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.665 qpair failed and we were unable to recover it. 
00:28:18.665 [2024-04-25 03:28:53.114484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.114647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.114673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.665 qpair failed and we were unable to recover it. 00:28:18.665 [2024-04-25 03:28:53.114872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.115063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.115089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.665 qpair failed and we were unable to recover it. 00:28:18.665 [2024-04-25 03:28:53.115262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.115425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.115451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.665 qpair failed and we were unable to recover it. 00:28:18.665 [2024-04-25 03:28:53.115644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.115843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.115869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.665 qpair failed and we were unable to recover it. 
00:28:18.665 [2024-04-25 03:28:53.116086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.116247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.116272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.665 qpair failed and we were unable to recover it. 00:28:18.665 [2024-04-25 03:28:53.116437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.116652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.116678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.665 qpair failed and we were unable to recover it. 00:28:18.665 [2024-04-25 03:28:53.116877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.117052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.117078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.665 qpair failed and we were unable to recover it. 00:28:18.665 [2024-04-25 03:28:53.117275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.117470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.117497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.665 qpair failed and we were unable to recover it. 
00:28:18.665 [2024-04-25 03:28:53.117680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.117846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.117873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.665 qpair failed and we were unable to recover it. 00:28:18.665 [2024-04-25 03:28:53.118064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.118266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.118292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.665 qpair failed and we were unable to recover it. 00:28:18.665 [2024-04-25 03:28:53.118509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.118688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.118714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.665 qpair failed and we were unable to recover it. 00:28:18.665 [2024-04-25 03:28:53.118934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.119153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.119178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.665 qpair failed and we were unable to recover it. 
00:28:18.665 [2024-04-25 03:28:53.119406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.119606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.119637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.665 qpair failed and we were unable to recover it. 00:28:18.665 [2024-04-25 03:28:53.119811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.120009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.120035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.665 qpair failed and we were unable to recover it. 00:28:18.665 [2024-04-25 03:28:53.120222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.120412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.120439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.665 qpair failed and we were unable to recover it. 00:28:18.665 [2024-04-25 03:28:53.120635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.120823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.120849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.665 qpair failed and we were unable to recover it. 
00:28:18.665 [2024-04-25 03:28:53.121025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.121210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.121236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.665 qpair failed and we were unable to recover it. 00:28:18.665 [2024-04-25 03:28:53.121407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.121582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.121608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.665 qpair failed and we were unable to recover it. 00:28:18.665 [2024-04-25 03:28:53.121777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.121945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.121973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.665 qpair failed and we were unable to recover it. 00:28:18.665 [2024-04-25 03:28:53.122198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.122365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.122391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.665 qpair failed and we were unable to recover it. 
00:28:18.665 [2024-04-25 03:28:53.122591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.122830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.122856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.665 qpair failed and we were unable to recover it. 00:28:18.665 [2024-04-25 03:28:53.123053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.123239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.123265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.665 qpair failed and we were unable to recover it. 00:28:18.665 [2024-04-25 03:28:53.123493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.123673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.123713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.665 qpair failed and we were unable to recover it. 00:28:18.665 [2024-04-25 03:28:53.123876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.665 [2024-04-25 03:28:53.124036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.666 [2024-04-25 03:28:53.124065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.666 qpair failed and we were unable to recover it. 
00:28:18.666 [2024-04-25 03:28:53.124233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.666 [2024-04-25 03:28:53.124398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.666 [2024-04-25 03:28:53.124423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.666 qpair failed and we were unable to recover it. 00:28:18.666 [2024-04-25 03:28:53.124611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.666 [2024-04-25 03:28:53.124816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.666 [2024-04-25 03:28:53.124842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.666 qpair failed and we were unable to recover it. 00:28:18.666 [2024-04-25 03:28:53.125043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.666 [2024-04-25 03:28:53.125235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.666 [2024-04-25 03:28:53.125260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.666 qpair failed and we were unable to recover it. 00:28:18.666 [2024-04-25 03:28:53.125482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.666 [2024-04-25 03:28:53.125648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.666 [2024-04-25 03:28:53.125675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.666 qpair failed and we were unable to recover it. 
00:28:18.666 [2024-04-25 03:28:53.125833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.666 [2024-04-25 03:28:53.126027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.666 [2024-04-25 03:28:53.126052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.666 qpair failed and we were unable to recover it. 00:28:18.666 [2024-04-25 03:28:53.126228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.666 [2024-04-25 03:28:53.126392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.666 [2024-04-25 03:28:53.126419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.666 qpair failed and we were unable to recover it. 00:28:18.666 [2024-04-25 03:28:53.126610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.666 [2024-04-25 03:28:53.126790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.666 [2024-04-25 03:28:53.126816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.666 qpair failed and we were unable to recover it. 00:28:18.666 [2024-04-25 03:28:53.127024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.666 [2024-04-25 03:28:53.127220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.666 [2024-04-25 03:28:53.127248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.666 qpair failed and we were unable to recover it. 
00:28:18.666 [2024-04-25 03:28:53.127444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.666 [2024-04-25 03:28:53.127616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.666 [2024-04-25 03:28:53.127647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.666 qpair failed and we were unable to recover it. 00:28:18.666 [2024-04-25 03:28:53.127815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.666 [2024-04-25 03:28:53.128014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.666 [2024-04-25 03:28:53.128040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.666 qpair failed and we were unable to recover it. 00:28:18.666 [2024-04-25 03:28:53.128206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.666 [2024-04-25 03:28:53.128380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.666 [2024-04-25 03:28:53.128407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.666 qpair failed and we were unable to recover it. 00:28:18.666 [2024-04-25 03:28:53.128571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.666 [2024-04-25 03:28:53.128741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.666 [2024-04-25 03:28:53.128768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.666 qpair failed and we were unable to recover it. 
00:28:18.666 [2024-04-25 03:28:53.128967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.666 [2024-04-25 03:28:53.129167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.666 [2024-04-25 03:28:53.129193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.666 qpair failed and we were unable to recover it. 00:28:18.666 [2024-04-25 03:28:53.129361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.666 [2024-04-25 03:28:53.129524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.666 [2024-04-25 03:28:53.129550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.666 qpair failed and we were unable to recover it. 00:28:18.666 [2024-04-25 03:28:53.129712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.666 [2024-04-25 03:28:53.129877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.666 [2024-04-25 03:28:53.129903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.666 qpair failed and we were unable to recover it. 00:28:18.666 [2024-04-25 03:28:53.130119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.666 [2024-04-25 03:28:53.130297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.666 [2024-04-25 03:28:53.130324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.666 qpair failed and we were unable to recover it. 
00:28:18.666 [2024-04-25 03:28:53.130496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.666 [2024-04-25 03:28:53.130681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.666 [2024-04-25 03:28:53.130707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.666 qpair failed and we were unable to recover it. 00:28:18.666 [2024-04-25 03:28:53.130903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.666 [2024-04-25 03:28:53.131139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.666 [2024-04-25 03:28:53.131165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.666 qpair failed and we were unable to recover it. 00:28:18.666 [2024-04-25 03:28:53.131354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.666 [2024-04-25 03:28:53.131553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.666 [2024-04-25 03:28:53.131579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.666 qpair failed and we were unable to recover it. 00:28:18.666 [2024-04-25 03:28:53.131751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.666 [2024-04-25 03:28:53.131915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.666 [2024-04-25 03:28:53.131940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.666 qpair failed and we were unable to recover it. 
00:28:18.666 [2024-04-25 03:28:53.132109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.666 [2024-04-25 03:28:53.132310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.666 [2024-04-25 03:28:53.132336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.666 qpair failed and we were unable to recover it. 00:28:18.666 [2024-04-25 03:28:53.132502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.666 [2024-04-25 03:28:53.132696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.666 [2024-04-25 03:28:53.132723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.666 qpair failed and we were unable to recover it. 00:28:18.666 [2024-04-25 03:28:53.132890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.666 [2024-04-25 03:28:53.133112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.666 [2024-04-25 03:28:53.133137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.666 qpair failed and we were unable to recover it. 00:28:18.666 [2024-04-25 03:28:53.133325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.666 [2024-04-25 03:28:53.133491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.666 [2024-04-25 03:28:53.133517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.666 qpair failed and we were unable to recover it. 
00:28:18.666 [2024-04-25 03:28:53.133739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.666 [2024-04-25 03:28:53.133911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.666 [2024-04-25 03:28:53.133936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.666 qpair failed and we were unable to recover it. 00:28:18.666 [2024-04-25 03:28:53.134132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.666 [2024-04-25 03:28:53.134301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.666 [2024-04-25 03:28:53.134329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.666 qpair failed and we were unable to recover it. 00:28:18.666 [2024-04-25 03:28:53.134491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.666 [2024-04-25 03:28:53.134720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.666 [2024-04-25 03:28:53.134747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.666 qpair failed and we were unable to recover it. 00:28:18.666 [2024-04-25 03:28:53.134915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.666 [2024-04-25 03:28:53.135110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.666 [2024-04-25 03:28:53.135135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.666 qpair failed and we were unable to recover it. 
00:28:18.666 [2024-04-25 03:28:53.135323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.666 [2024-04-25 03:28:53.135513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.666 [2024-04-25 03:28:53.135539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.666 qpair failed and we were unable to recover it. 00:28:18.666 [2024-04-25 03:28:53.135801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.666 [2024-04-25 03:28:53.136007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.666 [2024-04-25 03:28:53.136033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.666 qpair failed and we were unable to recover it. 00:28:18.666 [2024-04-25 03:28:53.136229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.666 [2024-04-25 03:28:53.136401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.666 [2024-04-25 03:28:53.136428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.666 qpair failed and we were unable to recover it. 00:28:18.666 [2024-04-25 03:28:53.136586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.666 [2024-04-25 03:28:53.136761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.666 [2024-04-25 03:28:53.136789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.666 qpair failed and we were unable to recover it. 
00:28:18.941 [2024-04-25 03:28:53.136964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.941 [2024-04-25 03:28:53.137175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.941 [2024-04-25 03:28:53.137202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.941 qpair failed and we were unable to recover it. 00:28:18.941 [2024-04-25 03:28:53.137366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.942 [2024-04-25 03:28:53.137552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.942 [2024-04-25 03:28:53.137578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.942 qpair failed and we were unable to recover it. 00:28:18.942 [2024-04-25 03:28:53.137746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.942 [2024-04-25 03:28:53.137922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.942 [2024-04-25 03:28:53.137950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.942 qpair failed and we were unable to recover it. 00:28:18.942 [2024-04-25 03:28:53.138119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.942 [2024-04-25 03:28:53.138309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.942 [2024-04-25 03:28:53.138335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.942 qpair failed and we were unable to recover it. 
00:28:18.942 [2024-04-25 03:28:53.138496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.942 [2024-04-25 03:28:53.138660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.942 [2024-04-25 03:28:53.138687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.942 qpair failed and we were unable to recover it. 00:28:18.942 [2024-04-25 03:28:53.138849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.942 [2024-04-25 03:28:53.139061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.942 [2024-04-25 03:28:53.139087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.942 qpair failed and we were unable to recover it. 00:28:18.942 [2024-04-25 03:28:53.139249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.942 [2024-04-25 03:28:53.139455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.942 [2024-04-25 03:28:53.139480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.942 qpair failed and we were unable to recover it. 00:28:18.942 [2024-04-25 03:28:53.139671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.942 [2024-04-25 03:28:53.139841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.942 [2024-04-25 03:28:53.139867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.942 qpair failed and we were unable to recover it. 
00:28:18.942 [2024-04-25 03:28:53.140086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.942 [2024-04-25 03:28:53.140253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.942 [2024-04-25 03:28:53.140286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.942 qpair failed and we were unable to recover it.
00:28:18.942 [2024-04-25 03:28:53.140479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.942 [2024-04-25 03:28:53.140681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.942 [2024-04-25 03:28:53.140707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.942 qpair failed and we were unable to recover it.
00:28:18.942 [2024-04-25 03:28:53.140876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.942 [2024-04-25 03:28:53.141039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.942 [2024-04-25 03:28:53.141065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.942 qpair failed and we were unable to recover it.
00:28:18.942 [2024-04-25 03:28:53.141259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.942 [2024-04-25 03:28:53.141445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.942 [2024-04-25 03:28:53.141470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.942 qpair failed and we were unable to recover it.
00:28:18.942 [2024-04-25 03:28:53.141638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.942 [2024-04-25 03:28:53.141813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.942 [2024-04-25 03:28:53.141839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.942 qpair failed and we were unable to recover it.
00:28:18.942 [2024-04-25 03:28:53.142026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.942 [2024-04-25 03:28:53.142188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.942 [2024-04-25 03:28:53.142213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.942 qpair failed and we were unable to recover it.
00:28:18.942 [2024-04-25 03:28:53.142379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.942 [2024-04-25 03:28:53.142601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.942 [2024-04-25 03:28:53.142626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.942 qpair failed and we were unable to recover it.
00:28:18.942 [2024-04-25 03:28:53.142811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.942 [2024-04-25 03:28:53.142996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.942 [2024-04-25 03:28:53.143021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.942 qpair failed and we were unable to recover it.
00:28:18.942 [2024-04-25 03:28:53.143191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.942 [2024-04-25 03:28:53.143388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.942 [2024-04-25 03:28:53.143414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.942 qpair failed and we were unable to recover it.
00:28:18.942 [2024-04-25 03:28:53.143616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.942 [2024-04-25 03:28:53.143813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.942 [2024-04-25 03:28:53.143838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.942 qpair failed and we were unable to recover it.
00:28:18.942 [2024-04-25 03:28:53.144010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.942 [2024-04-25 03:28:53.144196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.942 [2024-04-25 03:28:53.144225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.942 qpair failed and we were unable to recover it.
00:28:18.942 [2024-04-25 03:28:53.144424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.942 [2024-04-25 03:28:53.144624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.942 [2024-04-25 03:28:53.144655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.942 qpair failed and we were unable to recover it.
00:28:18.942 [2024-04-25 03:28:53.144824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.942 [2024-04-25 03:28:53.145018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.942 [2024-04-25 03:28:53.145044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.942 qpair failed and we were unable to recover it.
00:28:18.942 [2024-04-25 03:28:53.145218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.942 [2024-04-25 03:28:53.145386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.942 [2024-04-25 03:28:53.145412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.942 qpair failed and we were unable to recover it.
00:28:18.942 [2024-04-25 03:28:53.145573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.942 [2024-04-25 03:28:53.145750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.942 [2024-04-25 03:28:53.145777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.942 qpair failed and we were unable to recover it.
00:28:18.942 [2024-04-25 03:28:53.145947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.942 [2024-04-25 03:28:53.146151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.942 [2024-04-25 03:28:53.146178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.942 qpair failed and we were unable to recover it.
00:28:18.942 [2024-04-25 03:28:53.146394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.942 [2024-04-25 03:28:53.146557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.942 [2024-04-25 03:28:53.146583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.942 qpair failed and we were unable to recover it.
00:28:18.942 [2024-04-25 03:28:53.146792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.942 [2024-04-25 03:28:53.146959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.942 [2024-04-25 03:28:53.146985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.942 qpair failed and we were unable to recover it.
00:28:18.942 [2024-04-25 03:28:53.147154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.942 [2024-04-25 03:28:53.147348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.942 [2024-04-25 03:28:53.147374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.942 qpair failed and we were unable to recover it.
00:28:18.942 [2024-04-25 03:28:53.147539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.942 [2024-04-25 03:28:53.147751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.942 [2024-04-25 03:28:53.147778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.942 qpair failed and we were unable to recover it.
00:28:18.942 [2024-04-25 03:28:53.147980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.942 [2024-04-25 03:28:53.148205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.942 [2024-04-25 03:28:53.148230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.942 qpair failed and we were unable to recover it.
00:28:18.942 [2024-04-25 03:28:53.148400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.942 [2024-04-25 03:28:53.148560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.942 [2024-04-25 03:28:53.148586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.942 qpair failed and we were unable to recover it.
00:28:18.943 [2024-04-25 03:28:53.148764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.943 [2024-04-25 03:28:53.148939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.943 [2024-04-25 03:28:53.148964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.943 qpair failed and we were unable to recover it.
00:28:18.943 [2024-04-25 03:28:53.149142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.943 [2024-04-25 03:28:53.149335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.943 [2024-04-25 03:28:53.149361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.943 qpair failed and we were unable to recover it.
00:28:18.943 [2024-04-25 03:28:53.149559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.943 [2024-04-25 03:28:53.149754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.943 [2024-04-25 03:28:53.149780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.943 qpair failed and we were unable to recover it.
00:28:18.943 [2024-04-25 03:28:53.149978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.943 [2024-04-25 03:28:53.150171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.943 [2024-04-25 03:28:53.150197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.943 qpair failed and we were unable to recover it.
00:28:18.943 [2024-04-25 03:28:53.150418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.943 [2024-04-25 03:28:53.150611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.943 [2024-04-25 03:28:53.150642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.943 qpair failed and we were unable to recover it.
00:28:18.943 [2024-04-25 03:28:53.150821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.943 [2024-04-25 03:28:53.150981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.943 [2024-04-25 03:28:53.151007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.943 qpair failed and we were unable to recover it.
00:28:18.943 [2024-04-25 03:28:53.151190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.943 [2024-04-25 03:28:53.151409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.943 [2024-04-25 03:28:53.151435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.943 qpair failed and we were unable to recover it.
00:28:18.943 [2024-04-25 03:28:53.151596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.943 [2024-04-25 03:28:53.151792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.943 [2024-04-25 03:28:53.151819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.943 qpair failed and we were unable to recover it.
00:28:18.943 [2024-04-25 03:28:53.151986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.943 [2024-04-25 03:28:53.152171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.943 [2024-04-25 03:28:53.152197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.943 qpair failed and we were unable to recover it.
00:28:18.943 [2024-04-25 03:28:53.152400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.943 [2024-04-25 03:28:53.152561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.943 [2024-04-25 03:28:53.152587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.943 qpair failed and we were unable to recover it.
00:28:18.943 [2024-04-25 03:28:53.152783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.943 [2024-04-25 03:28:53.152951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.943 [2024-04-25 03:28:53.152977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.943 qpair failed and we were unable to recover it.
00:28:18.943 [2024-04-25 03:28:53.153146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.943 [2024-04-25 03:28:53.153314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.943 [2024-04-25 03:28:53.153339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.943 qpair failed and we were unable to recover it.
00:28:18.943 [2024-04-25 03:28:53.153509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.943 [2024-04-25 03:28:53.153709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.943 [2024-04-25 03:28:53.153736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.943 qpair failed and we were unable to recover it.
00:28:18.943 [2024-04-25 03:28:53.153898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.943 [2024-04-25 03:28:53.154066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.943 [2024-04-25 03:28:53.154092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.943 qpair failed and we were unable to recover it.
00:28:18.943 [2024-04-25 03:28:53.154283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.943 [2024-04-25 03:28:53.154474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.943 [2024-04-25 03:28:53.154500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.943 qpair failed and we were unable to recover it.
00:28:18.943 [2024-04-25 03:28:53.154665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.943 [2024-04-25 03:28:53.154857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.943 [2024-04-25 03:28:53.154883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.943 qpair failed and we were unable to recover it.
00:28:18.943 [2024-04-25 03:28:53.155056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.943 [2024-04-25 03:28:53.155217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.943 [2024-04-25 03:28:53.155243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.943 qpair failed and we were unable to recover it.
00:28:18.943 [2024-04-25 03:28:53.155455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.943 [2024-04-25 03:28:53.155620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.943 [2024-04-25 03:28:53.155651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.943 qpair failed and we were unable to recover it.
00:28:18.943 [2024-04-25 03:28:53.155814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.943 [2024-04-25 03:28:53.156011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.943 [2024-04-25 03:28:53.156037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.943 qpair failed and we were unable to recover it.
00:28:18.943 [2024-04-25 03:28:53.156201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.943 [2024-04-25 03:28:53.156406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.943 [2024-04-25 03:28:53.156433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.943 qpair failed and we were unable to recover it.
00:28:18.943 [2024-04-25 03:28:53.156594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.943 [2024-04-25 03:28:53.156758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.943 [2024-04-25 03:28:53.156784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.943 qpair failed and we were unable to recover it.
00:28:18.943 [2024-04-25 03:28:53.156966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.943 [2024-04-25 03:28:53.157139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.943 [2024-04-25 03:28:53.157165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.943 qpair failed and we were unable to recover it.
00:28:18.943 [2024-04-25 03:28:53.157350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.943 [2024-04-25 03:28:53.157571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.943 [2024-04-25 03:28:53.157597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.943 qpair failed and we were unable to recover it.
00:28:18.943 [2024-04-25 03:28:53.157790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.943 [2024-04-25 03:28:53.157968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.943 [2024-04-25 03:28:53.157994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.943 qpair failed and we were unable to recover it.
00:28:18.943 [2024-04-25 03:28:53.158162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.943 [2024-04-25 03:28:53.158325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.943 [2024-04-25 03:28:53.158351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.943 qpair failed and we were unable to recover it.
00:28:18.943 [2024-04-25 03:28:53.158519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.943 [2024-04-25 03:28:53.158709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.943 [2024-04-25 03:28:53.158736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.943 qpair failed and we were unable to recover it.
00:28:18.943 [2024-04-25 03:28:53.158933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.943 [2024-04-25 03:28:53.159131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.943 [2024-04-25 03:28:53.159157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.943 qpair failed and we were unable to recover it.
00:28:18.943 [2024-04-25 03:28:53.159333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.943 [2024-04-25 03:28:53.159518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.943 [2024-04-25 03:28:53.159544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.943 qpair failed and we were unable to recover it.
00:28:18.943 [2024-04-25 03:28:53.159734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.943 [2024-04-25 03:28:53.159934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.944 [2024-04-25 03:28:53.159960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.944 qpair failed and we were unable to recover it.
00:28:18.944 [2024-04-25 03:28:53.160125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.944 [2024-04-25 03:28:53.160298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.944 [2024-04-25 03:28:53.160330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.944 qpair failed and we were unable to recover it.
00:28:18.944 [2024-04-25 03:28:53.160521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.944 [2024-04-25 03:28:53.160693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.944 [2024-04-25 03:28:53.160719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.944 qpair failed and we were unable to recover it.
00:28:18.944 [2024-04-25 03:28:53.160905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.944 [2024-04-25 03:28:53.161079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.944 [2024-04-25 03:28:53.161106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.944 qpair failed and we were unable to recover it.
00:28:18.944 [2024-04-25 03:28:53.161340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.944 [2024-04-25 03:28:53.161561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.944 [2024-04-25 03:28:53.161587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.944 qpair failed and we were unable to recover it.
00:28:18.944 [2024-04-25 03:28:53.161797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.944 [2024-04-25 03:28:53.161992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.944 [2024-04-25 03:28:53.162017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.944 qpair failed and we were unable to recover it.
00:28:18.944 [2024-04-25 03:28:53.162208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.944 [2024-04-25 03:28:53.162406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.944 [2024-04-25 03:28:53.162432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.944 qpair failed and we were unable to recover it.
00:28:18.944 [2024-04-25 03:28:53.162626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.944 [2024-04-25 03:28:53.162823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.944 [2024-04-25 03:28:53.162850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.944 qpair failed and we were unable to recover it.
00:28:18.944 [2024-04-25 03:28:53.163017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.944 [2024-04-25 03:28:53.163230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.944 [2024-04-25 03:28:53.163256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.944 qpair failed and we were unable to recover it.
00:28:18.944 [2024-04-25 03:28:53.163424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.944 [2024-04-25 03:28:53.163589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.944 [2024-04-25 03:28:53.163615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.944 qpair failed and we were unable to recover it.
00:28:18.944 [2024-04-25 03:28:53.163814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.944 [2024-04-25 03:28:53.163984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.944 [2024-04-25 03:28:53.164010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.944 qpair failed and we were unable to recover it.
00:28:18.944 [2024-04-25 03:28:53.164207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.944 [2024-04-25 03:28:53.164400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.944 [2024-04-25 03:28:53.164426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.944 qpair failed and we were unable to recover it.
00:28:18.944 [2024-04-25 03:28:53.164638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.944 [2024-04-25 03:28:53.164833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.944 [2024-04-25 03:28:53.164859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.944 qpair failed and we were unable to recover it.
00:28:18.944 [2024-04-25 03:28:53.165075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.944 [2024-04-25 03:28:53.165243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.944 [2024-04-25 03:28:53.165269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.944 qpair failed and we were unable to recover it.
00:28:18.944 [2024-04-25 03:28:53.165459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.944 [2024-04-25 03:28:53.165655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.944 [2024-04-25 03:28:53.165682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.944 qpair failed and we were unable to recover it.
00:28:18.944 [2024-04-25 03:28:53.165852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.944 [2024-04-25 03:28:53.166038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.944 [2024-04-25 03:28:53.166064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.944 qpair failed and we were unable to recover it.
00:28:18.944 [2024-04-25 03:28:53.166256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.944 [2024-04-25 03:28:53.166413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.944 [2024-04-25 03:28:53.166439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.944 qpair failed and we were unable to recover it.
00:28:18.944 [2024-04-25 03:28:53.166658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.944 [2024-04-25 03:28:53.166859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.944 [2024-04-25 03:28:53.166886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.944 qpair failed and we were unable to recover it.
00:28:18.944 [2024-04-25 03:28:53.167088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.944 [2024-04-25 03:28:53.167256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.944 [2024-04-25 03:28:53.167282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.944 qpair failed and we were unable to recover it.
00:28:18.944 [2024-04-25 03:28:53.167461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.944 [2024-04-25 03:28:53.167638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.944 [2024-04-25 03:28:53.167664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.944 qpair failed and we were unable to recover it.
00:28:18.944 [2024-04-25 03:28:53.167860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.944 [2024-04-25 03:28:53.168063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.944 [2024-04-25 03:28:53.168090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.944 qpair failed and we were unable to recover it.
00:28:18.944 [2024-04-25 03:28:53.168322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.944 [2024-04-25 03:28:53.168491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.944 [2024-04-25 03:28:53.168517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.944 qpair failed and we were unable to recover it.
00:28:18.944 [2024-04-25 03:28:53.168715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.944 [2024-04-25 03:28:53.168904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.944 [2024-04-25 03:28:53.168931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.944 qpair failed and we were unable to recover it.
00:28:18.944 [2024-04-25 03:28:53.169125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.944 [2024-04-25 03:28:53.169283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.944 [2024-04-25 03:28:53.169309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.944 qpair failed and we were unable to recover it.
00:28:18.944 [2024-04-25 03:28:53.169504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.944 [2024-04-25 03:28:53.169725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.944 [2024-04-25 03:28:53.169751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.944 qpair failed and we were unable to recover it.
00:28:18.944 [2024-04-25 03:28:53.169944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.944 [2024-04-25 03:28:53.170139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.944 [2024-04-25 03:28:53.170165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.944 qpair failed and we were unable to recover it.
00:28:18.944 [2024-04-25 03:28:53.170338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.944 [2024-04-25 03:28:53.170564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.944 [2024-04-25 03:28:53.170590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.944 qpair failed and we were unable to recover it.
00:28:18.944 [2024-04-25 03:28:53.170795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.944 [2024-04-25 03:28:53.170972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.944 [2024-04-25 03:28:53.170999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.944 qpair failed and we were unable to recover it.
00:28:18.944 [2024-04-25 03:28:53.171195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.944 [2024-04-25 03:28:53.171370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.944 [2024-04-25 03:28:53.171397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.944 qpair failed and we were unable to recover it.
00:28:18.944 [2024-04-25 03:28:53.171558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.944 [2024-04-25 03:28:53.171738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.945 [2024-04-25 03:28:53.171765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.945 qpair failed and we were unable to recover it.
00:28:18.945 [2024-04-25 03:28:53.171941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.945 [2024-04-25 03:28:53.172106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.945 [2024-04-25 03:28:53.172132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.945 qpair failed and we were unable to recover it.
00:28:18.945 [2024-04-25 03:28:53.172300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.945 [2024-04-25 03:28:53.172518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.945 [2024-04-25 03:28:53.172544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.945 qpair failed and we were unable to recover it.
00:28:18.945 [2024-04-25 03:28:53.172753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.945 [2024-04-25 03:28:53.172918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.945 [2024-04-25 03:28:53.172944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.945 qpair failed and we were unable to recover it.
00:28:18.945 [2024-04-25 03:28:53.173134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.945 [2024-04-25 03:28:53.173303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.945 [2024-04-25 03:28:53.173329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.945 qpair failed and we were unable to recover it.
00:28:18.945 [2024-04-25 03:28:53.173527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.945 [2024-04-25 03:28:53.173730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.945 [2024-04-25 03:28:53.173757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.945 qpair failed and we were unable to recover it.
00:28:18.945 [2024-04-25 03:28:53.173954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.945 [2024-04-25 03:28:53.174152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.945 [2024-04-25 03:28:53.174179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.945 qpair failed and we were unable to recover it.
00:28:18.945 [2024-04-25 03:28:53.174380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.945 [2024-04-25 03:28:53.174605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.945 [2024-04-25 03:28:53.174636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.945 qpair failed and we were unable to recover it.
00:28:18.945 [2024-04-25 03:28:53.174809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.945 [2024-04-25 03:28:53.175004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.945 [2024-04-25 03:28:53.175031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.945 qpair failed and we were unable to recover it.
00:28:18.945 [2024-04-25 03:28:53.175229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.945 [2024-04-25 03:28:53.175427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.945 [2024-04-25 03:28:53.175453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.945 qpair failed and we were unable to recover it. 00:28:18.945 [2024-04-25 03:28:53.175646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.945 [2024-04-25 03:28:53.175837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.945 [2024-04-25 03:28:53.175863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.945 qpair failed and we were unable to recover it. 00:28:18.945 [2024-04-25 03:28:53.176028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.945 [2024-04-25 03:28:53.176195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.945 [2024-04-25 03:28:53.176221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.945 qpair failed and we were unable to recover it. 00:28:18.945 [2024-04-25 03:28:53.176439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.945 [2024-04-25 03:28:53.176644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.945 [2024-04-25 03:28:53.176671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.945 qpair failed and we were unable to recover it. 
00:28:18.945 [2024-04-25 03:28:53.176836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.945 [2024-04-25 03:28:53.177014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.945 [2024-04-25 03:28:53.177040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.945 qpair failed and we were unable to recover it. 00:28:18.945 [2024-04-25 03:28:53.177240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.945 [2024-04-25 03:28:53.177409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.945 [2024-04-25 03:28:53.177435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.945 qpair failed and we were unable to recover it. 00:28:18.945 [2024-04-25 03:28:53.177632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.945 [2024-04-25 03:28:53.177805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.945 [2024-04-25 03:28:53.177830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.945 qpair failed and we were unable to recover it. 00:28:18.945 [2024-04-25 03:28:53.178004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.945 [2024-04-25 03:28:53.178188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.945 [2024-04-25 03:28:53.178214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.945 qpair failed and we were unable to recover it. 
00:28:18.945 [2024-04-25 03:28:53.178410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.945 [2024-04-25 03:28:53.178568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.945 [2024-04-25 03:28:53.178593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.945 qpair failed and we were unable to recover it. 00:28:18.945 [2024-04-25 03:28:53.178790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.945 [2024-04-25 03:28:53.179005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.945 [2024-04-25 03:28:53.179031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.945 qpair failed and we were unable to recover it. 00:28:18.945 [2024-04-25 03:28:53.179220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.945 [2024-04-25 03:28:53.179409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.945 [2024-04-25 03:28:53.179437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.945 qpair failed and we were unable to recover it. 00:28:18.945 [2024-04-25 03:28:53.179653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.945 [2024-04-25 03:28:53.179822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.945 [2024-04-25 03:28:53.179848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.945 qpair failed and we were unable to recover it. 
00:28:18.945 [2024-04-25 03:28:53.180028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.945 [2024-04-25 03:28:53.180215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.945 [2024-04-25 03:28:53.180241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.945 qpair failed and we were unable to recover it. 00:28:18.945 [2024-04-25 03:28:53.180420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.945 [2024-04-25 03:28:53.180585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.945 [2024-04-25 03:28:53.180612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.945 qpair failed and we were unable to recover it. 00:28:18.945 [2024-04-25 03:28:53.180777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.945 [2024-04-25 03:28:53.180940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.945 [2024-04-25 03:28:53.180971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.945 qpair failed and we were unable to recover it. 00:28:18.945 [2024-04-25 03:28:53.181139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.945 [2024-04-25 03:28:53.181334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.945 [2024-04-25 03:28:53.181360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.945 qpair failed and we were unable to recover it. 
00:28:18.945 [2024-04-25 03:28:53.181551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.945 [2024-04-25 03:28:53.181750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.945 [2024-04-25 03:28:53.181777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.945 qpair failed and we were unable to recover it. 00:28:18.945 [2024-04-25 03:28:53.181944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.945 [2024-04-25 03:28:53.182106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.945 [2024-04-25 03:28:53.182132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.945 qpair failed and we were unable to recover it. 00:28:18.945 [2024-04-25 03:28:53.182330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.945 [2024-04-25 03:28:53.182529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.945 [2024-04-25 03:28:53.182555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.945 qpair failed and we were unable to recover it. 00:28:18.945 [2024-04-25 03:28:53.182722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.945 [2024-04-25 03:28:53.182878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.945 [2024-04-25 03:28:53.182904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.945 qpair failed and we were unable to recover it. 
00:28:18.945 [2024-04-25 03:28:53.183091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.945 [2024-04-25 03:28:53.183256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.945 [2024-04-25 03:28:53.183282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.946 qpair failed and we were unable to recover it. 00:28:18.946 [2024-04-25 03:28:53.183477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.946 [2024-04-25 03:28:53.183671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.946 [2024-04-25 03:28:53.183698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.946 qpair failed and we were unable to recover it. 00:28:18.946 [2024-04-25 03:28:53.183893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.946 [2024-04-25 03:28:53.184058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.946 [2024-04-25 03:28:53.184084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.946 qpair failed and we were unable to recover it. 00:28:18.946 [2024-04-25 03:28:53.184304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.946 [2024-04-25 03:28:53.184461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.946 [2024-04-25 03:28:53.184487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.946 qpair failed and we were unable to recover it. 
00:28:18.946 [2024-04-25 03:28:53.184659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.946 [2024-04-25 03:28:53.184858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.946 [2024-04-25 03:28:53.184885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.946 qpair failed and we were unable to recover it. 00:28:18.946 [2024-04-25 03:28:53.185110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.946 [2024-04-25 03:28:53.185271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.946 [2024-04-25 03:28:53.185297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.946 qpair failed and we were unable to recover it. 00:28:18.946 [2024-04-25 03:28:53.185489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.946 [2024-04-25 03:28:53.185649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.946 [2024-04-25 03:28:53.185676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.946 qpair failed and we were unable to recover it. 00:28:18.946 [2024-04-25 03:28:53.185861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.946 [2024-04-25 03:28:53.186036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.946 [2024-04-25 03:28:53.186062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.946 qpair failed and we were unable to recover it. 
00:28:18.946 [2024-04-25 03:28:53.186263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.946 [2024-04-25 03:28:53.186453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.946 [2024-04-25 03:28:53.186480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.946 qpair failed and we were unable to recover it. 00:28:18.946 [2024-04-25 03:28:53.186705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.946 [2024-04-25 03:28:53.186866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.946 [2024-04-25 03:28:53.186892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.946 qpair failed and we were unable to recover it. 00:28:18.946 [2024-04-25 03:28:53.187087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.946 [2024-04-25 03:28:53.187303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.946 [2024-04-25 03:28:53.187328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.946 qpair failed and we were unable to recover it. 00:28:18.946 [2024-04-25 03:28:53.187516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.946 [2024-04-25 03:28:53.187745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.946 [2024-04-25 03:28:53.187770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.946 qpair failed and we were unable to recover it. 
00:28:18.946 [2024-04-25 03:28:53.187941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.946 [2024-04-25 03:28:53.188116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.946 [2024-04-25 03:28:53.188142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.946 qpair failed and we were unable to recover it. 00:28:18.946 [2024-04-25 03:28:53.188308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.946 [2024-04-25 03:28:53.188472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.946 [2024-04-25 03:28:53.188497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.946 qpair failed and we were unable to recover it. 00:28:18.946 [2024-04-25 03:28:53.188697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.946 [2024-04-25 03:28:53.188854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.946 [2024-04-25 03:28:53.188881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.946 qpair failed and we were unable to recover it. 00:28:18.946 [2024-04-25 03:28:53.189052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.946 [2024-04-25 03:28:53.189214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.946 [2024-04-25 03:28:53.189240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.946 qpair failed and we were unable to recover it. 
00:28:18.946 [2024-04-25 03:28:53.189435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.946 [2024-04-25 03:28:53.189604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.946 [2024-04-25 03:28:53.189636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.946 qpair failed and we were unable to recover it. 00:28:18.946 [2024-04-25 03:28:53.189832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.946 [2024-04-25 03:28:53.189997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.946 [2024-04-25 03:28:53.190023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.946 qpair failed and we were unable to recover it. 00:28:18.946 [2024-04-25 03:28:53.190210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.946 [2024-04-25 03:28:53.190399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.946 [2024-04-25 03:28:53.190425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.946 qpair failed and we were unable to recover it. 00:28:18.946 [2024-04-25 03:28:53.190585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.946 [2024-04-25 03:28:53.190768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.946 [2024-04-25 03:28:53.190794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.946 qpair failed and we were unable to recover it. 
00:28:18.946 [2024-04-25 03:28:53.190986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.946 [2024-04-25 03:28:53.191199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.946 [2024-04-25 03:28:53.191225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.946 qpair failed and we were unable to recover it. 00:28:18.946 [2024-04-25 03:28:53.191416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.946 [2024-04-25 03:28:53.191609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.946 [2024-04-25 03:28:53.191653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.946 qpair failed and we were unable to recover it. 00:28:18.946 [2024-04-25 03:28:53.191848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.946 [2024-04-25 03:28:53.192039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.946 [2024-04-25 03:28:53.192065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.946 qpair failed and we were unable to recover it. 00:28:18.946 [2024-04-25 03:28:53.192282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.946 [2024-04-25 03:28:53.192456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.946 [2024-04-25 03:28:53.192481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.946 qpair failed and we were unable to recover it. 
00:28:18.946 [2024-04-25 03:28:53.192652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.946 [2024-04-25 03:28:53.192813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.947 [2024-04-25 03:28:53.192840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.947 qpair failed and we were unable to recover it. 00:28:18.947 [2024-04-25 03:28:53.193010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.947 [2024-04-25 03:28:53.193208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.947 [2024-04-25 03:28:53.193235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.947 qpair failed and we were unable to recover it. 00:28:18.947 [2024-04-25 03:28:53.193403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.947 [2024-04-25 03:28:53.193624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.947 [2024-04-25 03:28:53.193655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.947 qpair failed and we were unable to recover it. 00:28:18.947 [2024-04-25 03:28:53.193843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.947 [2024-04-25 03:28:53.194015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.947 [2024-04-25 03:28:53.194041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.947 qpair failed and we were unable to recover it. 
00:28:18.947 [2024-04-25 03:28:53.194235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.947 [2024-04-25 03:28:53.194429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.947 [2024-04-25 03:28:53.194455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.947 qpair failed and we were unable to recover it. 00:28:18.947 [2024-04-25 03:28:53.194652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.947 [2024-04-25 03:28:53.194821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.947 [2024-04-25 03:28:53.194846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.947 qpair failed and we were unable to recover it. 00:28:18.947 [2024-04-25 03:28:53.195037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.947 [2024-04-25 03:28:53.195223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.947 [2024-04-25 03:28:53.195249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.947 qpair failed and we were unable to recover it. 00:28:18.947 [2024-04-25 03:28:53.195415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.947 [2024-04-25 03:28:53.195614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.947 [2024-04-25 03:28:53.195644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.947 qpair failed and we were unable to recover it. 
00:28:18.947 [2024-04-25 03:28:53.195837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.947 [2024-04-25 03:28:53.196030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.947 [2024-04-25 03:28:53.196056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.947 qpair failed and we were unable to recover it. 00:28:18.947 [2024-04-25 03:28:53.196219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.947 [2024-04-25 03:28:53.196415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.947 [2024-04-25 03:28:53.196441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.947 qpair failed and we were unable to recover it. 00:28:18.947 [2024-04-25 03:28:53.196622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.947 [2024-04-25 03:28:53.196822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.947 [2024-04-25 03:28:53.196848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.947 qpair failed and we were unable to recover it. 00:28:18.947 [2024-04-25 03:28:53.197021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.947 [2024-04-25 03:28:53.197184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.947 [2024-04-25 03:28:53.197215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.947 qpair failed and we were unable to recover it. 
00:28:18.947 [2024-04-25 03:28:53.197409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.947 [2024-04-25 03:28:53.197576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.947 [2024-04-25 03:28:53.197604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.947 qpair failed and we were unable to recover it.
00:28:18.947 [2024-04-25 03:28:53.197797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.947 [2024-04-25 03:28:53.197998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.947 [2024-04-25 03:28:53.198024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.947 qpair failed and we were unable to recover it.
00:28:18.947 [2024-04-25 03:28:53.198218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.947 [2024-04-25 03:28:53.198382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.947 [2024-04-25 03:28:53.198408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.947 qpair failed and we were unable to recover it.
00:28:18.947 [2024-04-25 03:28:53.198598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.947 [2024-04-25 03:28:53.198786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.947 [2024-04-25 03:28:53.198813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.947 qpair failed and we were unable to recover it.
00:28:18.947 [2024-04-25 03:28:53.198984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.947 [2024-04-25 03:28:53.199184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.947 [2024-04-25 03:28:53.199210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.947 qpair failed and we were unable to recover it.
00:28:18.947 [2024-04-25 03:28:53.199407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.947 [2024-04-25 03:28:53.199604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.947 [2024-04-25 03:28:53.199647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.947 qpair failed and we were unable to recover it.
00:28:18.947 [2024-04-25 03:28:53.199816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.947 [2024-04-25 03:28:53.199988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.947 [2024-04-25 03:28:53.200014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.947 qpair failed and we were unable to recover it.
00:28:18.947 [2024-04-25 03:28:53.200205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.947 [2024-04-25 03:28:53.200398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.947 [2024-04-25 03:28:53.200424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.947 qpair failed and we were unable to recover it.
00:28:18.947 [2024-04-25 03:28:53.200586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.947 [2024-04-25 03:28:53.200833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.947 [2024-04-25 03:28:53.200859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.947 qpair failed and we were unable to recover it.
00:28:18.947 [2024-04-25 03:28:53.201033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.947 [2024-04-25 03:28:53.201208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.947 [2024-04-25 03:28:53.201238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.947 qpair failed and we were unable to recover it.
00:28:18.947 [2024-04-25 03:28:53.201400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.947 [2024-04-25 03:28:53.201599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.947 [2024-04-25 03:28:53.201625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.947 qpair failed and we were unable to recover it.
00:28:18.947 [2024-04-25 03:28:53.201834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.947 [2024-04-25 03:28:53.202006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.947 [2024-04-25 03:28:53.202032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.947 qpair failed and we were unable to recover it.
00:28:18.947 [2024-04-25 03:28:53.202228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.947 [2024-04-25 03:28:53.202423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.947 [2024-04-25 03:28:53.202449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.947 qpair failed and we were unable to recover it.
00:28:18.947 [2024-04-25 03:28:53.202644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.947 [2024-04-25 03:28:53.202829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.947 [2024-04-25 03:28:53.202854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.947 qpair failed and we were unable to recover it.
00:28:18.947 [2024-04-25 03:28:53.203048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.947 [2024-04-25 03:28:53.203230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.947 [2024-04-25 03:28:53.203255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.947 qpair failed and we were unable to recover it.
00:28:18.947 [2024-04-25 03:28:53.203425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.947 [2024-04-25 03:28:53.203647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.947 [2024-04-25 03:28:53.203673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.947 qpair failed and we were unable to recover it.
00:28:18.947 [2024-04-25 03:28:53.203844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.947 [2024-04-25 03:28:53.204012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.947 [2024-04-25 03:28:53.204037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.947 qpair failed and we were unable to recover it.
00:28:18.947 [2024-04-25 03:28:53.204226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.947 [2024-04-25 03:28:53.204386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.948 [2024-04-25 03:28:53.204411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.948 qpair failed and we were unable to recover it.
00:28:18.948 [2024-04-25 03:28:53.204583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.948 [2024-04-25 03:28:53.204751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.948 [2024-04-25 03:28:53.204777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.948 qpair failed and we were unable to recover it.
00:28:18.948 [2024-04-25 03:28:53.204948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.948 [2024-04-25 03:28:53.205143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.948 [2024-04-25 03:28:53.205168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.948 qpair failed and we were unable to recover it.
00:28:18.948 [2024-04-25 03:28:53.205356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.948 [2024-04-25 03:28:53.205514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.948 [2024-04-25 03:28:53.205539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.948 qpair failed and we were unable to recover it.
00:28:18.948 [2024-04-25 03:28:53.205727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.948 [2024-04-25 03:28:53.205893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.948 [2024-04-25 03:28:53.205919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.948 qpair failed and we were unable to recover it.
00:28:18.948 [2024-04-25 03:28:53.206118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.948 [2024-04-25 03:28:53.206283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.948 [2024-04-25 03:28:53.206310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.948 qpair failed and we were unable to recover it.
00:28:18.948 [2024-04-25 03:28:53.206471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.948 [2024-04-25 03:28:53.206664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.948 [2024-04-25 03:28:53.206691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.948 qpair failed and we were unable to recover it.
00:28:18.948 [2024-04-25 03:28:53.206873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.948 [2024-04-25 03:28:53.207069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.948 [2024-04-25 03:28:53.207095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.948 qpair failed and we were unable to recover it.
00:28:18.948 [2024-04-25 03:28:53.207288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.948 [2024-04-25 03:28:53.207454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.948 [2024-04-25 03:28:53.207480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.948 qpair failed and we were unable to recover it.
00:28:18.948 [2024-04-25 03:28:53.207695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.948 [2024-04-25 03:28:53.207895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.948 [2024-04-25 03:28:53.207921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.948 qpair failed and we were unable to recover it.
00:28:18.948 [2024-04-25 03:28:53.208122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.948 [2024-04-25 03:28:53.208320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.948 [2024-04-25 03:28:53.208346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.948 qpair failed and we were unable to recover it.
00:28:18.948 [2024-04-25 03:28:53.208514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.948 [2024-04-25 03:28:53.208682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.948 [2024-04-25 03:28:53.208709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.948 qpair failed and we were unable to recover it.
00:28:18.948 [2024-04-25 03:28:53.208869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.948 [2024-04-25 03:28:53.209037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.948 [2024-04-25 03:28:53.209063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.948 qpair failed and we were unable to recover it.
00:28:18.948 [2024-04-25 03:28:53.209232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.948 [2024-04-25 03:28:53.209404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.948 [2024-04-25 03:28:53.209430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.948 qpair failed and we were unable to recover it.
00:28:18.948 [2024-04-25 03:28:53.209592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.948 [2024-04-25 03:28:53.209792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.948 [2024-04-25 03:28:53.209819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.948 qpair failed and we were unable to recover it.
00:28:18.948 [2024-04-25 03:28:53.209985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.948 [2024-04-25 03:28:53.210159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.948 [2024-04-25 03:28:53.210185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.948 qpair failed and we were unable to recover it.
00:28:18.948 [2024-04-25 03:28:53.210377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.948 [2024-04-25 03:28:53.210575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.948 [2024-04-25 03:28:53.210601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.948 qpair failed and we were unable to recover it.
00:28:18.948 [2024-04-25 03:28:53.210797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.948 [2024-04-25 03:28:53.210969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.948 [2024-04-25 03:28:53.210995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.948 qpair failed and we were unable to recover it.
00:28:18.948 [2024-04-25 03:28:53.211181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.948 [2024-04-25 03:28:53.211363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.948 [2024-04-25 03:28:53.211388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.948 qpair failed and we were unable to recover it.
00:28:18.948 [2024-04-25 03:28:53.211583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.948 [2024-04-25 03:28:53.211789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.948 [2024-04-25 03:28:53.211816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.948 qpair failed and we were unable to recover it.
00:28:18.948 [2024-04-25 03:28:53.211978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.948 [2024-04-25 03:28:53.212151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.948 [2024-04-25 03:28:53.212176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.948 qpair failed and we were unable to recover it.
00:28:18.948 [2024-04-25 03:28:53.212344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.948 [2024-04-25 03:28:53.212560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.948 [2024-04-25 03:28:53.212586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.948 qpair failed and we were unable to recover it.
00:28:18.948 [2024-04-25 03:28:53.212769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.948 [2024-04-25 03:28:53.212984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.948 [2024-04-25 03:28:53.213010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.948 qpair failed and we were unable to recover it.
00:28:18.948 [2024-04-25 03:28:53.213173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.948 [2024-04-25 03:28:53.213344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.948 [2024-04-25 03:28:53.213371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.948 qpair failed and we were unable to recover it.
00:28:18.948 [2024-04-25 03:28:53.213562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.948 [2024-04-25 03:28:53.213782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.948 [2024-04-25 03:28:53.213809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.948 qpair failed and we were unable to recover it.
00:28:18.948 [2024-04-25 03:28:53.214018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.948 [2024-04-25 03:28:53.214213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.948 [2024-04-25 03:28:53.214239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.948 qpair failed and we were unable to recover it.
00:28:18.948 [2024-04-25 03:28:53.214417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.948 [2024-04-25 03:28:53.214634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.948 [2024-04-25 03:28:53.214660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.948 qpair failed and we were unable to recover it.
00:28:18.948 [2024-04-25 03:28:53.214823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.948 [2024-04-25 03:28:53.214987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.948 [2024-04-25 03:28:53.215012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.948 qpair failed and we were unable to recover it.
00:28:18.948 [2024-04-25 03:28:53.215175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.948 [2024-04-25 03:28:53.215340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.948 [2024-04-25 03:28:53.215365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.948 qpair failed and we were unable to recover it.
00:28:18.948 [2024-04-25 03:28:53.215539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.948 [2024-04-25 03:28:53.215717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.949 [2024-04-25 03:28:53.215743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.949 qpair failed and we were unable to recover it.
00:28:18.949 [2024-04-25 03:28:53.215932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.949 [2024-04-25 03:28:53.216094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.949 [2024-04-25 03:28:53.216120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.949 qpair failed and we were unable to recover it.
00:28:18.949 [2024-04-25 03:28:53.216359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.949 [2024-04-25 03:28:53.216521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.949 [2024-04-25 03:28:53.216546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.949 qpair failed and we were unable to recover it.
00:28:18.949 [2024-04-25 03:28:53.216739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.949 [2024-04-25 03:28:53.216905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.949 [2024-04-25 03:28:53.216931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.949 qpair failed and we were unable to recover it.
00:28:18.949 [2024-04-25 03:28:53.217121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.949 [2024-04-25 03:28:53.217312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.949 [2024-04-25 03:28:53.217342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.949 qpair failed and we were unable to recover it.
00:28:18.949 [2024-04-25 03:28:53.217530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.949 [2024-04-25 03:28:53.217691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.949 [2024-04-25 03:28:53.217717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.949 qpair failed and we were unable to recover it.
00:28:18.949 [2024-04-25 03:28:53.217886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.949 [2024-04-25 03:28:53.218050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.949 [2024-04-25 03:28:53.218076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.949 qpair failed and we were unable to recover it.
00:28:18.949 [2024-04-25 03:28:53.218244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.949 [2024-04-25 03:28:53.218412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.949 [2024-04-25 03:28:53.218437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.949 qpair failed and we were unable to recover it.
00:28:18.949 [2024-04-25 03:28:53.218623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.949 [2024-04-25 03:28:53.218796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.949 [2024-04-25 03:28:53.218822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.949 qpair failed and we were unable to recover it.
00:28:18.949 [2024-04-25 03:28:53.218986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.949 [2024-04-25 03:28:53.219144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.949 [2024-04-25 03:28:53.219170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.949 qpair failed and we were unable to recover it.
00:28:18.949 [2024-04-25 03:28:53.219336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.949 [2024-04-25 03:28:53.219553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.949 [2024-04-25 03:28:53.219578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.949 qpair failed and we were unable to recover it.
00:28:18.949 [2024-04-25 03:28:53.219773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.949 [2024-04-25 03:28:53.219970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.949 [2024-04-25 03:28:53.219996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.949 qpair failed and we were unable to recover it.
00:28:18.949 [2024-04-25 03:28:53.220162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.949 [2024-04-25 03:28:53.220325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.949 [2024-04-25 03:28:53.220350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.949 qpair failed and we were unable to recover it.
00:28:18.949 [2024-04-25 03:28:53.220550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.949 [2024-04-25 03:28:53.220742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.949 [2024-04-25 03:28:53.220768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.949 qpair failed and we were unable to recover it.
00:28:18.949 [2024-04-25 03:28:53.220959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.949 [2024-04-25 03:28:53.221115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.949 [2024-04-25 03:28:53.221140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.949 qpair failed and we were unable to recover it.
00:28:18.949 [2024-04-25 03:28:53.221316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.949 [2024-04-25 03:28:53.221540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.949 [2024-04-25 03:28:53.221566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.949 qpair failed and we were unable to recover it.
00:28:18.949 [2024-04-25 03:28:53.221758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.949 [2024-04-25 03:28:53.221932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.949 [2024-04-25 03:28:53.221958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.949 qpair failed and we were unable to recover it.
00:28:18.949 [2024-04-25 03:28:53.222148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.949 [2024-04-25 03:28:53.222318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.949 [2024-04-25 03:28:53.222344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.949 qpair failed and we were unable to recover it.
00:28:18.949 [2024-04-25 03:28:53.222510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.949 [2024-04-25 03:28:53.222731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.949 [2024-04-25 03:28:53.222757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.949 qpair failed and we were unable to recover it.
00:28:18.949 [2024-04-25 03:28:53.222949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.949 [2024-04-25 03:28:53.223135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.949 [2024-04-25 03:28:53.223160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.949 qpair failed and we were unable to recover it.
00:28:18.949 [2024-04-25 03:28:53.223356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.949 [2024-04-25 03:28:53.223578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.949 [2024-04-25 03:28:53.223605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.949 qpair failed and we were unable to recover it.
00:28:18.949 [2024-04-25 03:28:53.223789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.949 [2024-04-25 03:28:53.223951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.949 [2024-04-25 03:28:53.223976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.949 qpair failed and we were unable to recover it.
00:28:18.949 [2024-04-25 03:28:53.224136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.949 [2024-04-25 03:28:53.224312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.949 [2024-04-25 03:28:53.224339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.949 qpair failed and we were unable to recover it.
00:28:18.949 [2024-04-25 03:28:53.224531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.949 [2024-04-25 03:28:53.224703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.949 [2024-04-25 03:28:53.224731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.949 qpair failed and we were unable to recover it.
00:28:18.949 [2024-04-25 03:28:53.224898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.949 [2024-04-25 03:28:53.225059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.949 [2024-04-25 03:28:53.225085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.949 qpair failed and we were unable to recover it.
00:28:18.949 [2024-04-25 03:28:53.225278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.949 [2024-04-25 03:28:53.225450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.949 [2024-04-25 03:28:53.225478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.949 qpair failed and we were unable to recover it.
00:28:18.949 [2024-04-25 03:28:53.225672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.949 [2024-04-25 03:28:53.225843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.949 [2024-04-25 03:28:53.225869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.949 qpair failed and we were unable to recover it.
00:28:18.949 [2024-04-25 03:28:53.226035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.949 [2024-04-25 03:28:53.226233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.949 [2024-04-25 03:28:53.226259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.949 qpair failed and we were unable to recover it.
00:28:18.949 [2024-04-25 03:28:53.226450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.949 [2024-04-25 03:28:53.226607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.949 [2024-04-25 03:28:53.226639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.949 qpair failed and we were unable to recover it.
00:28:18.949 [2024-04-25 03:28:53.226843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.949 [2024-04-25 03:28:53.227017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.949 [2024-04-25 03:28:53.227042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.950 qpair failed and we were unable to recover it.
00:28:18.950 [2024-04-25 03:28:53.227203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.950 [2024-04-25 03:28:53.227393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.950 [2024-04-25 03:28:53.227418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.950 qpair failed and we were unable to recover it.
00:28:18.950 [2024-04-25 03:28:53.227617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.950 [2024-04-25 03:28:53.227804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.950 [2024-04-25 03:28:53.227831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.950 qpair failed and we were unable to recover it.
00:28:18.950 [2024-04-25 03:28:53.227995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.950 [2024-04-25 03:28:53.228156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.950 [2024-04-25 03:28:53.228182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.950 qpair failed and we were unable to recover it.
00:28:18.950 [2024-04-25 03:28:53.228386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.950 [2024-04-25 03:28:53.228577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.950 [2024-04-25 03:28:53.228603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.950 qpair failed and we were unable to recover it.
00:28:18.950 [2024-04-25 03:28:53.228800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.950 [2024-04-25 03:28:53.228995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.950 [2024-04-25 03:28:53.229022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.950 qpair failed and we were unable to recover it.
00:28:18.950 [2024-04-25 03:28:53.229217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.950 [2024-04-25 03:28:53.229406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.950 [2024-04-25 03:28:53.229432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.950 qpair failed and we were unable to recover it.
00:28:18.950 [2024-04-25 03:28:53.229623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.950 [2024-04-25 03:28:53.229820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.950 [2024-04-25 03:28:53.229845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.950 qpair failed and we were unable to recover it.
00:28:18.950 [2024-04-25 03:28:53.230009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.950 [2024-04-25 03:28:53.230165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.950 [2024-04-25 03:28:53.230191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.950 qpair failed and we were unable to recover it.
00:28:18.950 [2024-04-25 03:28:53.230406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.950 [2024-04-25 03:28:53.230611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.950 [2024-04-25 03:28:53.230642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.950 qpair failed and we were unable to recover it.
00:28:18.950 [2024-04-25 03:28:53.230837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.950 [2024-04-25 03:28:53.231041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.950 [2024-04-25 03:28:53.231067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.950 qpair failed and we were unable to recover it.
00:28:18.950 [2024-04-25 03:28:53.231263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.950 [2024-04-25 03:28:53.231437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.950 [2024-04-25 03:28:53.231463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.950 qpair failed and we were unable to recover it.
00:28:18.950 [2024-04-25 03:28:53.231652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.950 [2024-04-25 03:28:53.231824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.950 [2024-04-25 03:28:53.231850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.950 qpair failed and we were unable to recover it.
00:28:18.950 [2024-04-25 03:28:53.232037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.950 [2024-04-25 03:28:53.232206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.950 [2024-04-25 03:28:53.232232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.950 qpair failed and we were unable to recover it. 00:28:18.950 [2024-04-25 03:28:53.232431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.950 [2024-04-25 03:28:53.232596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.950 [2024-04-25 03:28:53.232622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.950 qpair failed and we were unable to recover it. 00:28:18.950 [2024-04-25 03:28:53.232788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.950 [2024-04-25 03:28:53.232955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.950 [2024-04-25 03:28:53.232980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.950 qpair failed and we were unable to recover it. 00:28:18.950 [2024-04-25 03:28:53.233152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.950 [2024-04-25 03:28:53.233322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.950 [2024-04-25 03:28:53.233348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.950 qpair failed and we were unable to recover it. 
00:28:18.950 [2024-04-25 03:28:53.233541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.950 [2024-04-25 03:28:53.233733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.950 [2024-04-25 03:28:53.233760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.950 qpair failed and we were unable to recover it. 00:28:18.950 [2024-04-25 03:28:53.233952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.950 [2024-04-25 03:28:53.234142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.950 [2024-04-25 03:28:53.234168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.950 qpair failed and we were unable to recover it. 00:28:18.950 [2024-04-25 03:28:53.234352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.950 [2024-04-25 03:28:53.234516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.950 [2024-04-25 03:28:53.234542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.950 qpair failed and we were unable to recover it. 00:28:18.950 [2024-04-25 03:28:53.234715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.950 [2024-04-25 03:28:53.234878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.950 [2024-04-25 03:28:53.234904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.950 qpair failed and we were unable to recover it. 
00:28:18.950 [2024-04-25 03:28:53.235073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.950 [2024-04-25 03:28:53.235268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.950 [2024-04-25 03:28:53.235294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.950 qpair failed and we were unable to recover it. 00:28:18.950 [2024-04-25 03:28:53.235475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.950 [2024-04-25 03:28:53.235644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.950 [2024-04-25 03:28:53.235670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.950 qpair failed and we were unable to recover it. 00:28:18.950 [2024-04-25 03:28:53.235844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.950 [2024-04-25 03:28:53.236016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.950 [2024-04-25 03:28:53.236041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.950 qpair failed and we were unable to recover it. 00:28:18.950 [2024-04-25 03:28:53.236233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.950 [2024-04-25 03:28:53.236407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.950 [2024-04-25 03:28:53.236434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.950 qpair failed and we were unable to recover it. 
00:28:18.950 [2024-04-25 03:28:53.236634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.950 [2024-04-25 03:28:53.236834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.950 [2024-04-25 03:28:53.236860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.950 qpair failed and we were unable to recover it. 00:28:18.950 [2024-04-25 03:28:53.237023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.950 [2024-04-25 03:28:53.237185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.950 [2024-04-25 03:28:53.237215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.950 qpair failed and we were unable to recover it. 00:28:18.950 [2024-04-25 03:28:53.237388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.950 [2024-04-25 03:28:53.237556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.950 [2024-04-25 03:28:53.237582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.950 qpair failed and we were unable to recover it. 00:28:18.950 [2024-04-25 03:28:53.237774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.950 [2024-04-25 03:28:53.237944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.950 [2024-04-25 03:28:53.237971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.950 qpair failed and we were unable to recover it. 
00:28:18.950 [2024-04-25 03:28:53.238144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.950 [2024-04-25 03:28:53.238309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.950 [2024-04-25 03:28:53.238335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.950 qpair failed and we were unable to recover it. 00:28:18.951 [2024-04-25 03:28:53.238494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.951 [2024-04-25 03:28:53.238688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.951 [2024-04-25 03:28:53.238715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.951 qpair failed and we were unable to recover it. 00:28:18.951 [2024-04-25 03:28:53.238877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.951 [2024-04-25 03:28:53.239082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.951 [2024-04-25 03:28:53.239107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.951 qpair failed and we were unable to recover it. 00:28:18.951 [2024-04-25 03:28:53.239297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.951 [2024-04-25 03:28:53.239494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.951 [2024-04-25 03:28:53.239521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.951 qpair failed and we were unable to recover it. 
00:28:18.951 [2024-04-25 03:28:53.239696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.951 [2024-04-25 03:28:53.239891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.951 [2024-04-25 03:28:53.239916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.951 qpair failed and we were unable to recover it. 00:28:18.951 [2024-04-25 03:28:53.240114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.951 [2024-04-25 03:28:53.240288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.951 [2024-04-25 03:28:53.240314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.951 qpair failed and we were unable to recover it. 00:28:18.951 [2024-04-25 03:28:53.240477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.951 [2024-04-25 03:28:53.240647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.951 [2024-04-25 03:28:53.240673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.951 qpair failed and we were unable to recover it. 00:28:18.951 [2024-04-25 03:28:53.240833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.951 [2024-04-25 03:28:53.241022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.951 [2024-04-25 03:28:53.241047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.951 qpair failed and we were unable to recover it. 
00:28:18.951 [2024-04-25 03:28:53.241229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.951 [2024-04-25 03:28:53.241420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.951 [2024-04-25 03:28:53.241446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.951 qpair failed and we were unable to recover it. 00:28:18.951 [2024-04-25 03:28:53.241615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.951 [2024-04-25 03:28:53.241814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.951 [2024-04-25 03:28:53.241840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.951 qpair failed and we were unable to recover it. 00:28:18.951 [2024-04-25 03:28:53.242030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.951 [2024-04-25 03:28:53.242251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.951 [2024-04-25 03:28:53.242277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.951 qpair failed and we were unable to recover it. 00:28:18.951 [2024-04-25 03:28:53.242472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.951 [2024-04-25 03:28:53.242636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.951 [2024-04-25 03:28:53.242663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.951 qpair failed and we were unable to recover it. 
00:28:18.951 [2024-04-25 03:28:53.242860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.951 [2024-04-25 03:28:53.243033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.951 [2024-04-25 03:28:53.243058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.951 qpair failed and we were unable to recover it. 00:28:18.951 [2024-04-25 03:28:53.243217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.951 [2024-04-25 03:28:53.243404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.951 [2024-04-25 03:28:53.243430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.951 qpair failed and we were unable to recover it. 00:28:18.951 [2024-04-25 03:28:53.243592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.951 [2024-04-25 03:28:53.243786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.951 [2024-04-25 03:28:53.243813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.951 qpair failed and we were unable to recover it. 00:28:18.951 [2024-04-25 03:28:53.243992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.951 [2024-04-25 03:28:53.244216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.951 [2024-04-25 03:28:53.244242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.951 qpair failed and we were unable to recover it. 
00:28:18.951 [2024-04-25 03:28:53.244412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.951 [2024-04-25 03:28:53.244607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.951 [2024-04-25 03:28:53.244638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.951 qpair failed and we were unable to recover it. 00:28:18.951 [2024-04-25 03:28:53.244830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.951 [2024-04-25 03:28:53.245020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.951 [2024-04-25 03:28:53.245046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.951 qpair failed and we were unable to recover it. 00:28:18.951 [2024-04-25 03:28:53.245265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.951 [2024-04-25 03:28:53.245434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.951 [2024-04-25 03:28:53.245460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.951 qpair failed and we were unable to recover it. 00:28:18.951 [2024-04-25 03:28:53.245617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.951 [2024-04-25 03:28:53.245801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.951 [2024-04-25 03:28:53.245828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.951 qpair failed and we were unable to recover it. 
00:28:18.951 [2024-04-25 03:28:53.246020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.951 [2024-04-25 03:28:53.246213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.951 [2024-04-25 03:28:53.246240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.951 qpair failed and we were unable to recover it. 00:28:18.951 [2024-04-25 03:28:53.246407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.951 [2024-04-25 03:28:53.246601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.951 [2024-04-25 03:28:53.246633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.951 qpair failed and we were unable to recover it. 00:28:18.951 [2024-04-25 03:28:53.246835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.951 [2024-04-25 03:28:53.247008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.951 [2024-04-25 03:28:53.247034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.951 qpair failed and we were unable to recover it. 00:28:18.951 [2024-04-25 03:28:53.247204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.951 [2024-04-25 03:28:53.247366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.951 [2024-04-25 03:28:53.247392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.951 qpair failed and we were unable to recover it. 
00:28:18.951 [2024-04-25 03:28:53.247563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.951 [2024-04-25 03:28:53.247733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.951 [2024-04-25 03:28:53.247760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.951 qpair failed and we were unable to recover it. 00:28:18.951 [2024-04-25 03:28:53.247932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.951 [2024-04-25 03:28:53.248128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.952 [2024-04-25 03:28:53.248154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.952 qpair failed and we were unable to recover it. 00:28:18.952 [2024-04-25 03:28:53.248314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.952 [2024-04-25 03:28:53.248520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.952 [2024-04-25 03:28:53.248545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.952 qpair failed and we were unable to recover it. 00:28:18.952 [2024-04-25 03:28:53.248716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.952 [2024-04-25 03:28:53.248887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.952 [2024-04-25 03:28:53.248913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.952 qpair failed and we were unable to recover it. 
00:28:18.952 [2024-04-25 03:28:53.249082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.952 [2024-04-25 03:28:53.249256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.952 [2024-04-25 03:28:53.249282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.952 qpair failed and we were unable to recover it. 00:28:18.952 [2024-04-25 03:28:53.249479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.952 [2024-04-25 03:28:53.249681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.952 [2024-04-25 03:28:53.249708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.952 qpair failed and we were unable to recover it. 00:28:18.952 [2024-04-25 03:28:53.249903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.952 [2024-04-25 03:28:53.250092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.952 [2024-04-25 03:28:53.250118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.952 qpair failed and we were unable to recover it. 00:28:18.952 [2024-04-25 03:28:53.250281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.952 [2024-04-25 03:28:53.250438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.952 [2024-04-25 03:28:53.250464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.952 qpair failed and we were unable to recover it. 
00:28:18.952 [2024-04-25 03:28:53.250655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.952 [2024-04-25 03:28:53.250831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.952 [2024-04-25 03:28:53.250857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.952 qpair failed and we were unable to recover it. 00:28:18.952 [2024-04-25 03:28:53.251017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.952 [2024-04-25 03:28:53.251243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.952 [2024-04-25 03:28:53.251269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.952 qpair failed and we were unable to recover it. 00:28:18.952 [2024-04-25 03:28:53.251431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.952 [2024-04-25 03:28:53.251622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.952 [2024-04-25 03:28:53.251653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.952 qpair failed and we were unable to recover it. 00:28:18.952 [2024-04-25 03:28:53.251823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.952 [2024-04-25 03:28:53.252016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.952 [2024-04-25 03:28:53.252042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.952 qpair failed and we were unable to recover it. 
00:28:18.952 [2024-04-25 03:28:53.252240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.952 [2024-04-25 03:28:53.252441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.952 [2024-04-25 03:28:53.252467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.952 qpair failed and we were unable to recover it. 00:28:18.952 [2024-04-25 03:28:53.252688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.952 [2024-04-25 03:28:53.252853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.952 [2024-04-25 03:28:53.252879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.952 qpair failed and we were unable to recover it. 00:28:18.952 [2024-04-25 03:28:53.253043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.952 [2024-04-25 03:28:53.253285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.952 [2024-04-25 03:28:53.253314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.952 qpair failed and we were unable to recover it. 00:28:18.952 [2024-04-25 03:28:53.253521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.952 [2024-04-25 03:28:53.253694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.952 [2024-04-25 03:28:53.253720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.952 qpair failed and we were unable to recover it. 
00:28:18.952 [2024-04-25 03:28:53.253890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.952 [2024-04-25 03:28:53.254089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.952 [2024-04-25 03:28:53.254114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.952 qpair failed and we were unable to recover it. 00:28:18.952 [2024-04-25 03:28:53.254309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.952 [2024-04-25 03:28:53.254543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.952 [2024-04-25 03:28:53.254569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.952 qpair failed and we were unable to recover it. 00:28:18.952 [2024-04-25 03:28:53.254752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.952 [2024-04-25 03:28:53.254944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.952 [2024-04-25 03:28:53.254970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.952 qpair failed and we were unable to recover it. 00:28:18.952 [2024-04-25 03:28:53.255136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.952 [2024-04-25 03:28:53.255306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.952 [2024-04-25 03:28:53.255331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.952 qpair failed and we were unable to recover it. 
00:28:18.952 [2024-04-25 03:28:53.255527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.952 [2024-04-25 03:28:53.255701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.952 [2024-04-25 03:28:53.255727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.952 qpair failed and we were unable to recover it. 00:28:18.952 [2024-04-25 03:28:53.255930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.952 [2024-04-25 03:28:53.256119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.952 [2024-04-25 03:28:53.256145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.952 qpair failed and we were unable to recover it. 00:28:18.952 [2024-04-25 03:28:53.256339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.952 [2024-04-25 03:28:53.256558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.952 [2024-04-25 03:28:53.256584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.952 qpair failed and we were unable to recover it. 00:28:18.952 [2024-04-25 03:28:53.256795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.952 [2024-04-25 03:28:53.256987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.952 [2024-04-25 03:28:53.257013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.952 qpair failed and we were unable to recover it. 
00:28:18.952 [2024-04-25 03:28:53.257227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.952 [2024-04-25 03:28:53.257453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.952 [2024-04-25 03:28:53.257483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.952 qpair failed and we were unable to recover it.
00:28:18.952 [2024-04-25 03:28:53.257677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.952 [2024-04-25 03:28:53.257875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.952 [2024-04-25 03:28:53.257901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.952 qpair failed and we were unable to recover it.
00:28:18.952 [2024-04-25 03:28:53.258071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.952 [2024-04-25 03:28:53.258245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.952 [2024-04-25 03:28:53.258273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.952 qpair failed and we were unable to recover it.
00:28:18.952 [2024-04-25 03:28:53.258471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.952 [2024-04-25 03:28:53.258657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.952 [2024-04-25 03:28:53.258683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.952 qpair failed and we were unable to recover it.
00:28:18.952 [2024-04-25 03:28:53.258852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.952 [2024-04-25 03:28:53.259042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.952 [2024-04-25 03:28:53.259068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.952 qpair failed and we were unable to recover it.
00:28:18.952 [2024-04-25 03:28:53.259253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.952 [2024-04-25 03:28:53.259444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.952 [2024-04-25 03:28:53.259470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.952 qpair failed and we were unable to recover it.
00:28:18.952 [2024-04-25 03:28:53.259667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.952 [2024-04-25 03:28:53.259870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.953 [2024-04-25 03:28:53.259895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.953 qpair failed and we were unable to recover it.
00:28:18.953 [2024-04-25 03:28:53.260092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.953 [2024-04-25 03:28:53.260278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.953 [2024-04-25 03:28:53.260304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.953 qpair failed and we were unable to recover it.
00:28:18.953 [2024-04-25 03:28:53.260466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.953 [2024-04-25 03:28:53.260692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.953 [2024-04-25 03:28:53.260718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.953 qpair failed and we were unable to recover it.
00:28:18.953 [2024-04-25 03:28:53.260916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.953 [2024-04-25 03:28:53.261085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.953 [2024-04-25 03:28:53.261111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.953 qpair failed and we were unable to recover it.
00:28:18.953 [2024-04-25 03:28:53.261307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.953 [2024-04-25 03:28:53.261502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.953 [2024-04-25 03:28:53.261529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.953 qpair failed and we were unable to recover it.
00:28:18.953 [2024-04-25 03:28:53.261702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.953 [2024-04-25 03:28:53.261869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.953 [2024-04-25 03:28:53.261894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.953 qpair failed and we were unable to recover it.
00:28:18.953 [2024-04-25 03:28:53.262064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.953 [2024-04-25 03:28:53.262254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.953 [2024-04-25 03:28:53.262280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.953 qpair failed and we were unable to recover it.
00:28:18.953 [2024-04-25 03:28:53.262469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.953 [2024-04-25 03:28:53.262651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.953 [2024-04-25 03:28:53.262677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.953 qpair failed and we were unable to recover it.
00:28:18.953 [2024-04-25 03:28:53.262852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.953 [2024-04-25 03:28:53.263041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.953 [2024-04-25 03:28:53.263067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.953 qpair failed and we were unable to recover it.
00:28:18.953 [2024-04-25 03:28:53.263241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.953 [2024-04-25 03:28:53.263437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.953 [2024-04-25 03:28:53.263463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.953 qpair failed and we were unable to recover it.
00:28:18.953 [2024-04-25 03:28:53.263641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.953 [2024-04-25 03:28:53.263811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.953 [2024-04-25 03:28:53.263837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.953 qpair failed and we were unable to recover it.
00:28:18.953 [2024-04-25 03:28:53.264028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.953 [2024-04-25 03:28:53.264218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.953 [2024-04-25 03:28:53.264243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.953 qpair failed and we were unable to recover it.
00:28:18.953 [2024-04-25 03:28:53.264413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.953 [2024-04-25 03:28:53.264650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.953 [2024-04-25 03:28:53.264677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.953 qpair failed and we were unable to recover it.
00:28:18.953 [2024-04-25 03:28:53.264874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.953 [2024-04-25 03:28:53.265103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.953 [2024-04-25 03:28:53.265128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.953 qpair failed and we were unable to recover it.
00:28:18.953 [2024-04-25 03:28:53.265326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.953 [2024-04-25 03:28:53.265543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.953 [2024-04-25 03:28:53.265570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.953 qpair failed and we were unable to recover it.
00:28:18.953 [2024-04-25 03:28:53.265746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.953 [2024-04-25 03:28:53.265966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.953 [2024-04-25 03:28:53.265992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.953 qpair failed and we were unable to recover it.
00:28:18.953 [2024-04-25 03:28:53.266164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.953 [2024-04-25 03:28:53.266336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.953 [2024-04-25 03:28:53.266362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.953 qpair failed and we were unable to recover it.
00:28:18.953 [2024-04-25 03:28:53.266566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.953 [2024-04-25 03:28:53.266733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.953 [2024-04-25 03:28:53.266760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.953 qpair failed and we were unable to recover it.
00:28:18.953 [2024-04-25 03:28:53.266962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.953 [2024-04-25 03:28:53.267157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.953 [2024-04-25 03:28:53.267182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.953 qpair failed and we were unable to recover it.
00:28:18.953 [2024-04-25 03:28:53.267344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.953 [2024-04-25 03:28:53.267553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.953 [2024-04-25 03:28:53.267579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.953 qpair failed and we were unable to recover it.
00:28:18.953 [2024-04-25 03:28:53.267783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.953 [2024-04-25 03:28:53.267950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.953 [2024-04-25 03:28:53.267976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.953 qpair failed and we were unable to recover it.
00:28:18.953 [2024-04-25 03:28:53.268197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.953 [2024-04-25 03:28:53.268391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.953 [2024-04-25 03:28:53.268417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.953 qpair failed and we were unable to recover it.
00:28:18.953 [2024-04-25 03:28:53.268588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.953 [2024-04-25 03:28:53.268820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.953 [2024-04-25 03:28:53.268847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.953 qpair failed and we were unable to recover it.
00:28:18.953 [2024-04-25 03:28:53.269034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.953 [2024-04-25 03:28:53.269226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.953 [2024-04-25 03:28:53.269252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.953 qpair failed and we were unable to recover it.
00:28:18.953 [2024-04-25 03:28:53.269416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.953 [2024-04-25 03:28:53.269575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.953 [2024-04-25 03:28:53.269602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.953 qpair failed and we were unable to recover it.
00:28:18.953 [2024-04-25 03:28:53.269782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.953 [2024-04-25 03:28:53.269985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.953 [2024-04-25 03:28:53.270012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.953 qpair failed and we were unable to recover it.
00:28:18.953 [2024-04-25 03:28:53.270180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.953 [2024-04-25 03:28:53.270354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.953 [2024-04-25 03:28:53.270381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.953 qpair failed and we were unable to recover it.
00:28:18.953 [2024-04-25 03:28:53.270547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.953 [2024-04-25 03:28:53.270745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.953 [2024-04-25 03:28:53.270771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.953 qpair failed and we were unable to recover it.
00:28:18.953 [2024-04-25 03:28:53.270964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.953 [2024-04-25 03:28:53.271129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.953 [2024-04-25 03:28:53.271155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.953 qpair failed and we were unable to recover it.
00:28:18.954 [2024-04-25 03:28:53.271313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.954 [2024-04-25 03:28:53.271513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.954 [2024-04-25 03:28:53.271539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.954 qpair failed and we were unable to recover it.
00:28:18.954 [2024-04-25 03:28:53.271740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.954 [2024-04-25 03:28:53.271906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.954 [2024-04-25 03:28:53.271932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.954 qpair failed and we were unable to recover it.
00:28:18.954 [2024-04-25 03:28:53.272156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.954 [2024-04-25 03:28:53.272352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.954 [2024-04-25 03:28:53.272377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.954 qpair failed and we were unable to recover it.
00:28:18.954 [2024-04-25 03:28:53.272536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.954 [2024-04-25 03:28:53.272737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.954 [2024-04-25 03:28:53.272765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.954 qpair failed and we were unable to recover it.
00:28:18.954 [2024-04-25 03:28:53.272982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.954 [2024-04-25 03:28:53.273145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.954 [2024-04-25 03:28:53.273171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.954 qpair failed and we were unable to recover it.
00:28:18.954 [2024-04-25 03:28:53.273367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.954 [2024-04-25 03:28:53.273563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.954 [2024-04-25 03:28:53.273589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.954 qpair failed and we were unable to recover it.
00:28:18.954 [2024-04-25 03:28:53.273794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.954 [2024-04-25 03:28:53.273987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.954 [2024-04-25 03:28:53.274017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.954 qpair failed and we were unable to recover it.
00:28:18.954 [2024-04-25 03:28:53.274189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.954 [2024-04-25 03:28:53.274377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.954 [2024-04-25 03:28:53.274403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.954 qpair failed and we were unable to recover it.
00:28:18.954 [2024-04-25 03:28:53.274558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.954 [2024-04-25 03:28:53.274773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.954 [2024-04-25 03:28:53.274800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.954 qpair failed and we were unable to recover it.
00:28:18.954 [2024-04-25 03:28:53.274970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.954 [2024-04-25 03:28:53.275160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.954 [2024-04-25 03:28:53.275186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.954 qpair failed and we were unable to recover it.
00:28:18.954 [2024-04-25 03:28:53.275372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.954 [2024-04-25 03:28:53.275562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.954 [2024-04-25 03:28:53.275588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.954 qpair failed and we were unable to recover it.
00:28:18.954 [2024-04-25 03:28:53.275791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.954 [2024-04-25 03:28:53.275958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.954 [2024-04-25 03:28:53.275984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.954 qpair failed and we were unable to recover it.
00:28:18.954 [2024-04-25 03:28:53.276173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.954 [2024-04-25 03:28:53.276364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.954 [2024-04-25 03:28:53.276390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.954 qpair failed and we were unable to recover it.
00:28:18.954 [2024-04-25 03:28:53.276584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.954 [2024-04-25 03:28:53.276760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.954 [2024-04-25 03:28:53.276786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.954 qpair failed and we were unable to recover it.
00:28:18.954 [2024-04-25 03:28:53.276948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.954 [2024-04-25 03:28:53.277148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.954 [2024-04-25 03:28:53.277174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.954 qpair failed and we were unable to recover it.
00:28:18.954 [2024-04-25 03:28:53.277388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.954 [2024-04-25 03:28:53.277612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.954 [2024-04-25 03:28:53.277643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.954 qpair failed and we were unable to recover it.
00:28:18.954 [2024-04-25 03:28:53.277811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.954 [2024-04-25 03:28:53.277986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.954 [2024-04-25 03:28:53.278012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.954 qpair failed and we were unable to recover it.
00:28:18.954 [2024-04-25 03:28:53.278190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.954 [2024-04-25 03:28:53.278363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.954 [2024-04-25 03:28:53.278389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.954 qpair failed and we were unable to recover it.
00:28:18.954 [2024-04-25 03:28:53.278549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.954 [2024-04-25 03:28:53.278714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.954 [2024-04-25 03:28:53.278740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.954 qpair failed and we were unable to recover it.
00:28:18.954 [2024-04-25 03:28:53.278945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.954 [2024-04-25 03:28:53.279166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.954 [2024-04-25 03:28:53.279192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.954 qpair failed and we were unable to recover it.
00:28:18.954 [2024-04-25 03:28:53.279368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.954 [2024-04-25 03:28:53.279606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.954 [2024-04-25 03:28:53.279637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.954 qpair failed and we were unable to recover it.
00:28:18.954 [2024-04-25 03:28:53.279805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.954 [2024-04-25 03:28:53.280018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.954 [2024-04-25 03:28:53.280043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.954 qpair failed and we were unable to recover it.
00:28:18.954 [2024-04-25 03:28:53.280240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.954 [2024-04-25 03:28:53.280427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.954 [2024-04-25 03:28:53.280452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.954 qpair failed and we were unable to recover it.
00:28:18.954 [2024-04-25 03:28:53.280654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.954 [2024-04-25 03:28:53.280815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.954 [2024-04-25 03:28:53.280839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.954 qpair failed and we were unable to recover it.
00:28:18.954 [2024-04-25 03:28:53.281005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.954 [2024-04-25 03:28:53.281204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.954 [2024-04-25 03:28:53.281229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.954 qpair failed and we were unable to recover it.
00:28:18.954 [2024-04-25 03:28:53.281413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.954 [2024-04-25 03:28:53.281638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.954 [2024-04-25 03:28:53.281664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.954 qpair failed and we were unable to recover it.
00:28:18.954 [2024-04-25 03:28:53.281839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.954 [2024-04-25 03:28:53.282032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.954 [2024-04-25 03:28:53.282057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.954 qpair failed and we were unable to recover it.
00:28:18.954 [2024-04-25 03:28:53.282281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.954 [2024-04-25 03:28:53.282446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.954 [2024-04-25 03:28:53.282471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.954 qpair failed and we were unable to recover it.
00:28:18.954 [2024-04-25 03:28:53.282642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.954 [2024-04-25 03:28:53.282813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.954 [2024-04-25 03:28:53.282839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.954 qpair failed and we were unable to recover it.
00:28:18.954 [2024-04-25 03:28:53.283033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.955 [2024-04-25 03:28:53.283190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.955 [2024-04-25 03:28:53.283215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.955 qpair failed and we were unable to recover it.
00:28:18.955 [2024-04-25 03:28:53.283412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.955 [2024-04-25 03:28:53.283597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.955 [2024-04-25 03:28:53.283622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.955 qpair failed and we were unable to recover it.
00:28:18.955 [2024-04-25 03:28:53.283796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.955 [2024-04-25 03:28:53.283964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.955 [2024-04-25 03:28:53.283989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.955 qpair failed and we were unable to recover it.
00:28:18.955 [2024-04-25 03:28:53.284181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.955 [2024-04-25 03:28:53.284402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.955 [2024-04-25 03:28:53.284427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.955 qpair failed and we were unable to recover it.
00:28:18.955 [2024-04-25 03:28:53.284618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.955 [2024-04-25 03:28:53.284798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.955 [2024-04-25 03:28:53.284823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.955 qpair failed and we were unable to recover it.
00:28:18.955 [2024-04-25 03:28:53.284984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.955 [2024-04-25 03:28:53.285143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.955 [2024-04-25 03:28:53.285168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.955 qpair failed and we were unable to recover it.
00:28:18.955 [2024-04-25 03:28:53.285362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.955 [2024-04-25 03:28:53.285531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.955 [2024-04-25 03:28:53.285556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.955 qpair failed and we were unable to recover it.
00:28:18.955 [2024-04-25 03:28:53.285727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.955 [2024-04-25 03:28:53.285891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.955 [2024-04-25 03:28:53.285918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:18.955 qpair failed and we were unable to recover it.
00:28:18.955 [2024-04-25 03:28:53.286086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.955 [2024-04-25 03:28:53.286253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.955 [2024-04-25 03:28:53.286278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.955 qpair failed and we were unable to recover it. 00:28:18.955 [2024-04-25 03:28:53.286436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.955 [2024-04-25 03:28:53.286604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.955 [2024-04-25 03:28:53.286635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.955 qpair failed and we were unable to recover it. 00:28:18.955 [2024-04-25 03:28:53.286814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.955 [2024-04-25 03:28:53.287002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.955 [2024-04-25 03:28:53.287029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.955 qpair failed and we were unable to recover it. 00:28:18.955 [2024-04-25 03:28:53.287246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.955 [2024-04-25 03:28:53.287429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.955 [2024-04-25 03:28:53.287453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.955 qpair failed and we were unable to recover it. 
00:28:18.955 [2024-04-25 03:28:53.287648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.955 [2024-04-25 03:28:53.287815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.955 [2024-04-25 03:28:53.287840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.955 qpair failed and we were unable to recover it. 00:28:18.955 [2024-04-25 03:28:53.288014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.955 [2024-04-25 03:28:53.288181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.955 [2024-04-25 03:28:53.288206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.955 qpair failed and we were unable to recover it. 00:28:18.955 [2024-04-25 03:28:53.288419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.955 [2024-04-25 03:28:53.288581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.955 [2024-04-25 03:28:53.288606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.955 qpair failed and we were unable to recover it. 00:28:18.955 [2024-04-25 03:28:53.288806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.955 [2024-04-25 03:28:53.288975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.955 [2024-04-25 03:28:53.288999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.955 qpair failed and we were unable to recover it. 
00:28:18.955 [2024-04-25 03:28:53.289193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.955 [2024-04-25 03:28:53.289388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.955 [2024-04-25 03:28:53.289413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.955 qpair failed and we were unable to recover it. 00:28:18.955 [2024-04-25 03:28:53.289604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.955 [2024-04-25 03:28:53.289815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.955 [2024-04-25 03:28:53.289840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.955 qpair failed and we were unable to recover it. 00:28:18.955 [2024-04-25 03:28:53.290036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.955 [2024-04-25 03:28:53.290238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.955 [2024-04-25 03:28:53.290263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.955 qpair failed and we were unable to recover it. 00:28:18.955 [2024-04-25 03:28:53.290478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.955 [2024-04-25 03:28:53.290649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.955 [2024-04-25 03:28:53.290675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.955 qpair failed and we were unable to recover it. 
00:28:18.955 [2024-04-25 03:28:53.290839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.955 [2024-04-25 03:28:53.291022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.955 [2024-04-25 03:28:53.291047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.955 qpair failed and we were unable to recover it. 00:28:18.955 [2024-04-25 03:28:53.291239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.955 [2024-04-25 03:28:53.291404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.955 [2024-04-25 03:28:53.291429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.955 qpair failed and we were unable to recover it. 00:28:18.955 [2024-04-25 03:28:53.291644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.955 [2024-04-25 03:28:53.291821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.955 [2024-04-25 03:28:53.291846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.955 qpair failed and we were unable to recover it. 00:28:18.955 [2024-04-25 03:28:53.292005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.955 [2024-04-25 03:28:53.292204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.955 [2024-04-25 03:28:53.292229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.955 qpair failed and we were unable to recover it. 
00:28:18.955 [2024-04-25 03:28:53.292437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.955 [2024-04-25 03:28:53.292657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.956 [2024-04-25 03:28:53.292682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.956 qpair failed and we were unable to recover it. 00:28:18.956 [2024-04-25 03:28:53.292878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.956 [2024-04-25 03:28:53.293045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.956 [2024-04-25 03:28:53.293070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.956 qpair failed and we were unable to recover it. 00:28:18.956 [2024-04-25 03:28:53.293254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.956 [2024-04-25 03:28:53.293423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.956 [2024-04-25 03:28:53.293448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.956 qpair failed and we were unable to recover it. 00:28:18.956 [2024-04-25 03:28:53.293665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.956 [2024-04-25 03:28:53.293866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.956 [2024-04-25 03:28:53.293891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.956 qpair failed and we were unable to recover it. 
00:28:18.956 [2024-04-25 03:28:53.294123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.956 [2024-04-25 03:28:53.294311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.956 [2024-04-25 03:28:53.294342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.956 qpair failed and we were unable to recover it. 00:28:18.956 [2024-04-25 03:28:53.294541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.956 [2024-04-25 03:28:53.294709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.956 [2024-04-25 03:28:53.294735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.956 qpair failed and we were unable to recover it. 00:28:18.956 [2024-04-25 03:28:53.294905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.956 [2024-04-25 03:28:53.295098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.956 [2024-04-25 03:28:53.295123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.956 qpair failed and we were unable to recover it. 00:28:18.956 [2024-04-25 03:28:53.295300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.956 [2024-04-25 03:28:53.295471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.956 [2024-04-25 03:28:53.295496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.956 qpair failed and we were unable to recover it. 
00:28:18.956 [2024-04-25 03:28:53.295661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.956 [2024-04-25 03:28:53.295865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.956 [2024-04-25 03:28:53.295890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.956 qpair failed and we were unable to recover it. 00:28:18.956 [2024-04-25 03:28:53.296062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.956 [2024-04-25 03:28:53.296255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.956 [2024-04-25 03:28:53.296280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.956 qpair failed and we were unable to recover it. 00:28:18.956 [2024-04-25 03:28:53.296474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.956 [2024-04-25 03:28:53.296643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.956 [2024-04-25 03:28:53.296669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.956 qpair failed and we were unable to recover it. 00:28:18.956 [2024-04-25 03:28:53.296841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.956 [2024-04-25 03:28:53.297012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.956 [2024-04-25 03:28:53.297037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.956 qpair failed and we were unable to recover it. 
00:28:18.956 [2024-04-25 03:28:53.297248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.956 [2024-04-25 03:28:53.297413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.956 [2024-04-25 03:28:53.297439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.956 qpair failed and we were unable to recover it. 00:28:18.956 [2024-04-25 03:28:53.297641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.956 [2024-04-25 03:28:53.297811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.956 [2024-04-25 03:28:53.297837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.956 qpair failed and we were unable to recover it. 00:28:18.956 [2024-04-25 03:28:53.298025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.956 [2024-04-25 03:28:53.298194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.956 [2024-04-25 03:28:53.298219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.956 qpair failed and we were unable to recover it. 00:28:18.956 [2024-04-25 03:28:53.298414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.956 [2024-04-25 03:28:53.298586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.956 [2024-04-25 03:28:53.298611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.956 qpair failed and we were unable to recover it. 
00:28:18.956 [2024-04-25 03:28:53.298818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.956 [2024-04-25 03:28:53.298983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.956 [2024-04-25 03:28:53.299008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.956 qpair failed and we were unable to recover it. 00:28:18.956 [2024-04-25 03:28:53.299205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.956 [2024-04-25 03:28:53.299401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.956 [2024-04-25 03:28:53.299426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.956 qpair failed and we were unable to recover it. 00:28:18.956 [2024-04-25 03:28:53.299647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.956 [2024-04-25 03:28:53.299818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.956 [2024-04-25 03:28:53.299843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.956 qpair failed and we were unable to recover it. 00:28:18.956 [2024-04-25 03:28:53.300007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.956 [2024-04-25 03:28:53.300216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.956 [2024-04-25 03:28:53.300241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.956 qpair failed and we were unable to recover it. 
00:28:18.956 [2024-04-25 03:28:53.300436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.956 [2024-04-25 03:28:53.300606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.956 [2024-04-25 03:28:53.300638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.956 qpair failed and we were unable to recover it. 00:28:18.956 [2024-04-25 03:28:53.300834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.956 [2024-04-25 03:28:53.301028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.956 [2024-04-25 03:28:53.301053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.956 qpair failed and we were unable to recover it. 00:28:18.956 [2024-04-25 03:28:53.301221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.956 [2024-04-25 03:28:53.301433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.956 [2024-04-25 03:28:53.301458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.956 qpair failed and we were unable to recover it. 00:28:18.956 [2024-04-25 03:28:53.301679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.956 [2024-04-25 03:28:53.301853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.956 [2024-04-25 03:28:53.301878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.956 qpair failed and we were unable to recover it. 
00:28:18.956 [2024-04-25 03:28:53.302084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.956 [2024-04-25 03:28:53.302249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.956 [2024-04-25 03:28:53.302274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.956 qpair failed and we were unable to recover it. 00:28:18.956 [2024-04-25 03:28:53.302446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.956 [2024-04-25 03:28:53.302611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.956 [2024-04-25 03:28:53.302641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.956 qpair failed and we were unable to recover it. 00:28:18.956 [2024-04-25 03:28:53.302806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.956 [2024-04-25 03:28:53.303005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.956 [2024-04-25 03:28:53.303030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.956 qpair failed and we were unable to recover it. 00:28:18.956 [2024-04-25 03:28:53.303220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.956 [2024-04-25 03:28:53.303383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.956 [2024-04-25 03:28:53.303409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.956 qpair failed and we were unable to recover it. 
00:28:18.956 [2024-04-25 03:28:53.303598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.956 [2024-04-25 03:28:53.303799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.956 [2024-04-25 03:28:53.303824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.956 qpair failed and we were unable to recover it. 00:28:18.956 [2024-04-25 03:28:53.304017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.956 [2024-04-25 03:28:53.304185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.957 [2024-04-25 03:28:53.304209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.957 qpair failed and we were unable to recover it. 00:28:18.957 [2024-04-25 03:28:53.304409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.957 [2024-04-25 03:28:53.304601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.957 [2024-04-25 03:28:53.304626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.957 qpair failed and we were unable to recover it. 00:28:18.957 [2024-04-25 03:28:53.304824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.957 [2024-04-25 03:28:53.304985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.957 [2024-04-25 03:28:53.305010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.957 qpair failed and we were unable to recover it. 
00:28:18.957 [2024-04-25 03:28:53.305198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.957 [2024-04-25 03:28:53.305364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.957 [2024-04-25 03:28:53.305389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.957 qpair failed and we were unable to recover it. 00:28:18.957 [2024-04-25 03:28:53.305555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.957 [2024-04-25 03:28:53.305722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.957 [2024-04-25 03:28:53.305749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.957 qpair failed and we were unable to recover it. 00:28:18.957 [2024-04-25 03:28:53.305917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.957 [2024-04-25 03:28:53.306111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.957 [2024-04-25 03:28:53.306136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.957 qpair failed and we were unable to recover it. 00:28:18.957 [2024-04-25 03:28:53.306329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.957 [2024-04-25 03:28:53.306502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.957 [2024-04-25 03:28:53.306527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.957 qpair failed and we were unable to recover it. 
00:28:18.957 [2024-04-25 03:28:53.306730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.957 [2024-04-25 03:28:53.306889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.957 [2024-04-25 03:28:53.306914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.957 qpair failed and we were unable to recover it. 00:28:18.957 [2024-04-25 03:28:53.307079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.957 [2024-04-25 03:28:53.307280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.957 [2024-04-25 03:28:53.307305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.957 qpair failed and we were unable to recover it. 00:28:18.957 [2024-04-25 03:28:53.307466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.957 [2024-04-25 03:28:53.307632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.957 [2024-04-25 03:28:53.307658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.957 qpair failed and we were unable to recover it. 00:28:18.957 [2024-04-25 03:28:53.307821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.957 [2024-04-25 03:28:53.307986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.957 [2024-04-25 03:28:53.308011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.957 qpair failed and we were unable to recover it. 
00:28:18.957 [2024-04-25 03:28:53.308182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.957 [2024-04-25 03:28:53.308382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.957 [2024-04-25 03:28:53.308407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.957 qpair failed and we were unable to recover it. 00:28:18.957 [2024-04-25 03:28:53.308596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.957 [2024-04-25 03:28:53.308797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.957 [2024-04-25 03:28:53.308823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.957 qpair failed and we were unable to recover it. 00:28:18.957 [2024-04-25 03:28:53.309018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.957 [2024-04-25 03:28:53.309216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.957 [2024-04-25 03:28:53.309241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.957 qpair failed and we were unable to recover it. 00:28:18.957 [2024-04-25 03:28:53.309416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.957 [2024-04-25 03:28:53.309603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.957 [2024-04-25 03:28:53.309636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.957 qpair failed and we were unable to recover it. 
00:28:18.957 [2024-04-25 03:28:53.309804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.957 [2024-04-25 03:28:53.309968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.957 [2024-04-25 03:28:53.309992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.957 qpair failed and we were unable to recover it. 00:28:18.957 [2024-04-25 03:28:53.310162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.957 [2024-04-25 03:28:53.310327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.957 [2024-04-25 03:28:53.310356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.957 qpair failed and we were unable to recover it. 00:28:18.957 [2024-04-25 03:28:53.310561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.957 [2024-04-25 03:28:53.310732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.957 [2024-04-25 03:28:53.310758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.957 qpair failed and we were unable to recover it. 00:28:18.957 [2024-04-25 03:28:53.310930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.957 [2024-04-25 03:28:53.311118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.957 [2024-04-25 03:28:53.311144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.957 qpair failed and we were unable to recover it. 
00:28:18.957 [2024-04-25 03:28:53.311308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.957 [2024-04-25 03:28:53.311499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.957 [2024-04-25 03:28:53.311524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.957 qpair failed and we were unable to recover it. 00:28:18.957 [2024-04-25 03:28:53.311683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.957 [2024-04-25 03:28:53.311852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.957 [2024-04-25 03:28:53.311879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.957 qpair failed and we were unable to recover it. 00:28:18.957 [2024-04-25 03:28:53.312071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.957 [2024-04-25 03:28:53.312246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.957 [2024-04-25 03:28:53.312271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.957 qpair failed and we were unable to recover it. 00:28:18.957 [2024-04-25 03:28:53.312436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.957 [2024-04-25 03:28:53.312624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.957 [2024-04-25 03:28:53.312655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.957 qpair failed and we were unable to recover it. 
00:28:18.960 [... identical retry cycles elided: posix.c:1037:posix_sock_create connect() failed (errno = 111, ECONNREFUSED) followed by nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420, repeated continuously through 2024-04-25 03:28:53.345; each qpair failed and was not recovered ...]
00:28:18.960 [2024-04-25 03:28:53.345916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.960 [2024-04-25 03:28:53.346116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.960 [2024-04-25 03:28:53.346141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.960 qpair failed and we were unable to recover it. 00:28:18.960 [2024-04-25 03:28:53.346348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.960 [2024-04-25 03:28:53.346585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.960 [2024-04-25 03:28:53.346611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.960 qpair failed and we were unable to recover it. 00:28:18.960 [2024-04-25 03:28:53.346801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.960 [2024-04-25 03:28:53.346990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.960 [2024-04-25 03:28:53.347015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.960 qpair failed and we were unable to recover it. 00:28:18.960 [2024-04-25 03:28:53.347205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.960 [2024-04-25 03:28:53.347402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.960 [2024-04-25 03:28:53.347427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.960 qpair failed and we were unable to recover it. 
00:28:18.960 [2024-04-25 03:28:53.347594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.960 [2024-04-25 03:28:53.347767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.960 [2024-04-25 03:28:53.347793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.960 qpair failed and we were unable to recover it. 00:28:18.960 [2024-04-25 03:28:53.347979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.960 [2024-04-25 03:28:53.348194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.960 [2024-04-25 03:28:53.348219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.960 qpair failed and we were unable to recover it. 00:28:18.960 [2024-04-25 03:28:53.348388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.961 [2024-04-25 03:28:53.348548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.961 [2024-04-25 03:28:53.348572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.961 qpair failed and we were unable to recover it. 00:28:18.961 [2024-04-25 03:28:53.348748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.961 [2024-04-25 03:28:53.348909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.961 [2024-04-25 03:28:53.348934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.961 qpair failed and we were unable to recover it. 
00:28:18.961 [2024-04-25 03:28:53.349157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.961 [2024-04-25 03:28:53.349327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.961 [2024-04-25 03:28:53.349352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.961 qpair failed and we were unable to recover it. 00:28:18.961 [2024-04-25 03:28:53.349569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.961 [2024-04-25 03:28:53.349733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.961 [2024-04-25 03:28:53.349758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.961 qpair failed and we were unable to recover it. 00:28:18.961 [2024-04-25 03:28:53.349967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.961 [2024-04-25 03:28:53.350168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.961 [2024-04-25 03:28:53.350193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.961 qpair failed and we were unable to recover it. 00:28:18.961 [2024-04-25 03:28:53.350352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.961 [2024-04-25 03:28:53.350536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.961 [2024-04-25 03:28:53.350565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.961 qpair failed and we were unable to recover it. 
00:28:18.961 [2024-04-25 03:28:53.350748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.961 [2024-04-25 03:28:53.350932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.961 [2024-04-25 03:28:53.350957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.961 qpair failed and we were unable to recover it. 00:28:18.961 [2024-04-25 03:28:53.351178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.961 [2024-04-25 03:28:53.351375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.961 [2024-04-25 03:28:53.351399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.961 qpair failed and we were unable to recover it. 00:28:18.961 [2024-04-25 03:28:53.351590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.961 [2024-04-25 03:28:53.351795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.961 [2024-04-25 03:28:53.351820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.961 qpair failed and we were unable to recover it. 00:28:18.961 [2024-04-25 03:28:53.351982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.961 [2024-04-25 03:28:53.352146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.961 [2024-04-25 03:28:53.352171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.961 qpair failed and we were unable to recover it. 
00:28:18.961 [2024-04-25 03:28:53.352359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.961 [2024-04-25 03:28:53.352555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.961 [2024-04-25 03:28:53.352580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.961 qpair failed and we were unable to recover it. 00:28:18.961 [2024-04-25 03:28:53.352852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.961 [2024-04-25 03:28:53.353125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.961 [2024-04-25 03:28:53.353150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.961 qpair failed and we were unable to recover it. 00:28:18.961 [2024-04-25 03:28:53.353354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.961 [2024-04-25 03:28:53.353522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.961 [2024-04-25 03:28:53.353548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.961 qpair failed and we were unable to recover it. 00:28:18.961 [2024-04-25 03:28:53.353718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.961 [2024-04-25 03:28:53.353892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.961 [2024-04-25 03:28:53.353917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.961 qpair failed and we were unable to recover it. 
00:28:18.961 [2024-04-25 03:28:53.354084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.961 [2024-04-25 03:28:53.354283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.961 [2024-04-25 03:28:53.354308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.961 qpair failed and we were unable to recover it. 00:28:18.961 [2024-04-25 03:28:53.354580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.961 [2024-04-25 03:28:53.354750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.961 [2024-04-25 03:28:53.354776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.961 qpair failed and we were unable to recover it. 00:28:18.961 [2024-04-25 03:28:53.354956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.961 [2024-04-25 03:28:53.355149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.961 [2024-04-25 03:28:53.355175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.961 qpair failed and we were unable to recover it. 00:28:18.961 [2024-04-25 03:28:53.355336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.961 [2024-04-25 03:28:53.355536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.961 [2024-04-25 03:28:53.355561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.961 qpair failed and we were unable to recover it. 
00:28:18.961 [2024-04-25 03:28:53.355767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.961 [2024-04-25 03:28:53.355935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.961 [2024-04-25 03:28:53.355960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.961 qpair failed and we were unable to recover it. 00:28:18.961 [2024-04-25 03:28:53.356130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.961 [2024-04-25 03:28:53.356319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.961 [2024-04-25 03:28:53.356345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.961 qpair failed and we were unable to recover it. 00:28:18.961 [2024-04-25 03:28:53.356536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.961 [2024-04-25 03:28:53.356729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.961 [2024-04-25 03:28:53.356755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.961 qpair failed and we were unable to recover it. 00:28:18.961 [2024-04-25 03:28:53.356945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.961 [2024-04-25 03:28:53.357109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.961 [2024-04-25 03:28:53.357134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.961 qpair failed and we were unable to recover it. 
00:28:18.961 [2024-04-25 03:28:53.357347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.961 [2024-04-25 03:28:53.357515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.961 [2024-04-25 03:28:53.357542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.961 qpair failed and we were unable to recover it. 00:28:18.961 [2024-04-25 03:28:53.357723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.961 [2024-04-25 03:28:53.357890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.961 [2024-04-25 03:28:53.357916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.961 qpair failed and we were unable to recover it. 00:28:18.961 [2024-04-25 03:28:53.358094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.961 [2024-04-25 03:28:53.358258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.961 [2024-04-25 03:28:53.358285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.961 qpair failed and we were unable to recover it. 00:28:18.961 [2024-04-25 03:28:53.358458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.961 [2024-04-25 03:28:53.358615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.961 [2024-04-25 03:28:53.358654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.961 qpair failed and we were unable to recover it. 
00:28:18.961 [2024-04-25 03:28:53.358823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.961 [2024-04-25 03:28:53.359037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.961 [2024-04-25 03:28:53.359062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.961 qpair failed and we were unable to recover it. 00:28:18.961 [2024-04-25 03:28:53.359236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.961 [2024-04-25 03:28:53.359401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.961 [2024-04-25 03:28:53.359426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.961 qpair failed and we were unable to recover it. 00:28:18.961 [2024-04-25 03:28:53.359592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.961 [2024-04-25 03:28:53.359810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.961 [2024-04-25 03:28:53.359837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.961 qpair failed and we were unable to recover it. 00:28:18.961 [2024-04-25 03:28:53.360058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.962 [2024-04-25 03:28:53.360225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.962 [2024-04-25 03:28:53.360251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.962 qpair failed and we were unable to recover it. 
00:28:18.962 [2024-04-25 03:28:53.360447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.962 [2024-04-25 03:28:53.360672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.962 [2024-04-25 03:28:53.360698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.962 qpair failed and we were unable to recover it. 00:28:18.962 [2024-04-25 03:28:53.360854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.962 [2024-04-25 03:28:53.361018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.962 [2024-04-25 03:28:53.361044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.962 qpair failed and we were unable to recover it. 00:28:18.962 [2024-04-25 03:28:53.361212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.962 [2024-04-25 03:28:53.361420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.962 [2024-04-25 03:28:53.361445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.962 qpair failed and we were unable to recover it. 00:28:18.962 [2024-04-25 03:28:53.361637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.962 [2024-04-25 03:28:53.361837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.962 [2024-04-25 03:28:53.361864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.962 qpair failed and we were unable to recover it. 
00:28:18.962 [2024-04-25 03:28:53.362026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.962 [2024-04-25 03:28:53.362269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.962 [2024-04-25 03:28:53.362294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.962 qpair failed and we were unable to recover it. 00:28:18.962 [2024-04-25 03:28:53.362469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.962 [2024-04-25 03:28:53.362639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.962 [2024-04-25 03:28:53.362665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.962 qpair failed and we were unable to recover it. 00:28:18.962 [2024-04-25 03:28:53.362863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.962 [2024-04-25 03:28:53.363096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.962 [2024-04-25 03:28:53.363121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.962 qpair failed and we were unable to recover it. 00:28:18.962 [2024-04-25 03:28:53.363344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.962 [2024-04-25 03:28:53.363503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.962 [2024-04-25 03:28:53.363529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.962 qpair failed and we were unable to recover it. 
00:28:18.962 [2024-04-25 03:28:53.363698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.962 [2024-04-25 03:28:53.363891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.962 [2024-04-25 03:28:53.363917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.962 qpair failed and we were unable to recover it. 00:28:18.962 [2024-04-25 03:28:53.364089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.962 [2024-04-25 03:28:53.364250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.962 [2024-04-25 03:28:53.364276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.962 qpair failed and we were unable to recover it. 00:28:18.962 [2024-04-25 03:28:53.364479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.962 [2024-04-25 03:28:53.364672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.962 [2024-04-25 03:28:53.364698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.962 qpair failed and we were unable to recover it. 00:28:18.962 [2024-04-25 03:28:53.364856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.962 [2024-04-25 03:28:53.365047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.962 [2024-04-25 03:28:53.365072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.962 qpair failed and we were unable to recover it. 
00:28:18.962 [2024-04-25 03:28:53.365238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.962 [2024-04-25 03:28:53.365397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.962 [2024-04-25 03:28:53.365422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.962 qpair failed and we were unable to recover it. 00:28:18.962 [2024-04-25 03:28:53.365703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.962 [2024-04-25 03:28:53.365895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.962 [2024-04-25 03:28:53.365921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.962 qpair failed and we were unable to recover it. 00:28:18.962 [2024-04-25 03:28:53.366119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.962 [2024-04-25 03:28:53.366288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.962 [2024-04-25 03:28:53.366313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.962 qpair failed and we were unable to recover it. 00:28:18.962 [2024-04-25 03:28:53.366486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.962 [2024-04-25 03:28:53.366675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.962 [2024-04-25 03:28:53.366700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.962 qpair failed and we were unable to recover it. 
00:28:18.962 [2024-04-25 03:28:53.366898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.962 [2024-04-25 03:28:53.367093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.962 [2024-04-25 03:28:53.367122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.962 qpair failed and we were unable to recover it. 00:28:18.962 [2024-04-25 03:28:53.367408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.962 [2024-04-25 03:28:53.367602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.962 [2024-04-25 03:28:53.367631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.962 qpair failed and we were unable to recover it. 00:28:18.962 [2024-04-25 03:28:53.367821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.962 [2024-04-25 03:28:53.367982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.962 [2024-04-25 03:28:53.368007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.962 qpair failed and we were unable to recover it. 00:28:18.962 [2024-04-25 03:28:53.368176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.962 [2024-04-25 03:28:53.368404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.962 [2024-04-25 03:28:53.368429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.962 qpair failed and we were unable to recover it. 
00:28:18.962 [2024-04-25 03:28:53.368596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.962 [2024-04-25 03:28:53.368794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.962 [2024-04-25 03:28:53.368820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.962 qpair failed and we were unable to recover it. 00:28:18.962 [2024-04-25 03:28:53.369017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.962 [2024-04-25 03:28:53.369228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.962 [2024-04-25 03:28:53.369254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.962 qpair failed and we were unable to recover it. 00:28:18.962 [2024-04-25 03:28:53.369453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.962 [2024-04-25 03:28:53.369612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.962 [2024-04-25 03:28:53.369643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.962 qpair failed and we were unable to recover it. 00:28:18.962 [2024-04-25 03:28:53.369923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.962 [2024-04-25 03:28:53.370115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.962 [2024-04-25 03:28:53.370140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.962 qpair failed and we were unable to recover it. 
00:28:18.965 [2024-04-25 03:28:53.404930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.965 [2024-04-25 03:28:53.405132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.965 [2024-04-25 03:28:53.405159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.965 qpair failed and we were unable to recover it. 00:28:18.965 [2024-04-25 03:28:53.405349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.965 [2024-04-25 03:28:53.405517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.965 [2024-04-25 03:28:53.405543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.965 qpair failed and we were unable to recover it. 00:28:18.965 [2024-04-25 03:28:53.405732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.965 [2024-04-25 03:28:53.405923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.965 [2024-04-25 03:28:53.405948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.965 qpair failed and we were unable to recover it. 00:28:18.965 [2024-04-25 03:28:53.406147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.966 [2024-04-25 03:28:53.406335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.966 [2024-04-25 03:28:53.406361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.966 qpair failed and we were unable to recover it. 
00:28:18.966 [2024-04-25 03:28:53.406553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.966 [2024-04-25 03:28:53.406746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.966 [2024-04-25 03:28:53.406771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.966 qpair failed and we were unable to recover it. 00:28:18.966 [2024-04-25 03:28:53.406939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.966 [2024-04-25 03:28:53.407106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.966 [2024-04-25 03:28:53.407132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.966 qpair failed and we were unable to recover it. 00:28:18.966 [2024-04-25 03:28:53.407330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.966 [2024-04-25 03:28:53.407525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.966 [2024-04-25 03:28:53.407550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.966 qpair failed and we were unable to recover it. 00:28:18.966 [2024-04-25 03:28:53.407743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.966 [2024-04-25 03:28:53.407926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.966 [2024-04-25 03:28:53.407958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.966 qpair failed and we were unable to recover it. 
00:28:18.966 [2024-04-25 03:28:53.408129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.966 [2024-04-25 03:28:53.408291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.966 [2024-04-25 03:28:53.408316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.966 qpair failed and we were unable to recover it. 00:28:18.966 [2024-04-25 03:28:53.408515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.966 [2024-04-25 03:28:53.408682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.966 [2024-04-25 03:28:53.408708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.966 qpair failed and we were unable to recover it. 00:28:18.966 [2024-04-25 03:28:53.408869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.966 [2024-04-25 03:28:53.409066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.966 [2024-04-25 03:28:53.409095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.966 qpair failed and we were unable to recover it. 00:28:18.966 [2024-04-25 03:28:53.409285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.966 [2024-04-25 03:28:53.409558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.966 [2024-04-25 03:28:53.409583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.966 qpair failed and we were unable to recover it. 
00:28:18.966 [2024-04-25 03:28:53.409754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.966 [2024-04-25 03:28:53.409949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.966 [2024-04-25 03:28:53.409975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.966 qpair failed and we were unable to recover it. 00:28:18.966 [2024-04-25 03:28:53.410138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.966 [2024-04-25 03:28:53.410328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.966 [2024-04-25 03:28:53.410353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.966 qpair failed and we were unable to recover it. 00:28:18.966 [2024-04-25 03:28:53.410511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.966 [2024-04-25 03:28:53.410707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.966 [2024-04-25 03:28:53.410732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.966 qpair failed and we were unable to recover it. 00:28:18.966 [2024-04-25 03:28:53.410918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.966 [2024-04-25 03:28:53.411113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.966 [2024-04-25 03:28:53.411138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.966 qpair failed and we were unable to recover it. 
00:28:18.966 [2024-04-25 03:28:53.411329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.966 [2024-04-25 03:28:53.411512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.966 [2024-04-25 03:28:53.411538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.966 qpair failed and we were unable to recover it. 00:28:18.966 [2024-04-25 03:28:53.411708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.966 [2024-04-25 03:28:53.411900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.966 [2024-04-25 03:28:53.411927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.966 qpair failed and we were unable to recover it. 00:28:18.966 [2024-04-25 03:28:53.412095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.966 [2024-04-25 03:28:53.412252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.966 [2024-04-25 03:28:53.412277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.966 qpair failed and we were unable to recover it. 00:28:18.966 [2024-04-25 03:28:53.412470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.966 [2024-04-25 03:28:53.412664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.966 [2024-04-25 03:28:53.412690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.966 qpair failed and we were unable to recover it. 
00:28:18.966 [2024-04-25 03:28:53.412885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.966 [2024-04-25 03:28:53.413074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.966 [2024-04-25 03:28:53.413100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.966 qpair failed and we were unable to recover it. 00:28:18.966 [2024-04-25 03:28:53.413300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.966 [2024-04-25 03:28:53.413465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.966 [2024-04-25 03:28:53.413490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.966 qpair failed and we were unable to recover it. 00:28:18.966 [2024-04-25 03:28:53.413701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.966 [2024-04-25 03:28:53.413899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.966 [2024-04-25 03:28:53.413924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.966 qpair failed and we were unable to recover it. 00:28:18.966 [2024-04-25 03:28:53.414126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.966 [2024-04-25 03:28:53.414398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.966 [2024-04-25 03:28:53.414423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.966 qpair failed and we were unable to recover it. 
00:28:18.966 [2024-04-25 03:28:53.414596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.966 [2024-04-25 03:28:53.414778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.966 [2024-04-25 03:28:53.414803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.966 qpair failed and we were unable to recover it. 00:28:18.966 [2024-04-25 03:28:53.414981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.966 [2024-04-25 03:28:53.415169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.966 [2024-04-25 03:28:53.415194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.966 qpair failed and we were unable to recover it. 00:28:18.966 [2024-04-25 03:28:53.415384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.966 [2024-04-25 03:28:53.415544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.966 [2024-04-25 03:28:53.415569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.966 qpair failed and we were unable to recover it. 00:28:18.966 [2024-04-25 03:28:53.415764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.966 [2024-04-25 03:28:53.415923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.966 [2024-04-25 03:28:53.415948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.966 qpair failed and we were unable to recover it. 
00:28:18.966 [2024-04-25 03:28:53.416222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.966 [2024-04-25 03:28:53.416406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.966 [2024-04-25 03:28:53.416431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.966 qpair failed and we were unable to recover it. 00:28:18.966 [2024-04-25 03:28:53.416706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.966 [2024-04-25 03:28:53.416923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.966 [2024-04-25 03:28:53.416948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.966 qpair failed and we were unable to recover it. 00:28:18.966 [2024-04-25 03:28:53.417170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.966 [2024-04-25 03:28:53.417333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.966 [2024-04-25 03:28:53.417358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.966 qpair failed and we were unable to recover it. 00:28:18.966 [2024-04-25 03:28:53.417558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.966 [2024-04-25 03:28:53.417776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.966 [2024-04-25 03:28:53.417802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.966 qpair failed and we were unable to recover it. 
00:28:18.966 [2024-04-25 03:28:53.417999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.967 [2024-04-25 03:28:53.418167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.967 [2024-04-25 03:28:53.418192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.967 qpair failed and we were unable to recover it. 00:28:18.967 [2024-04-25 03:28:53.418414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.967 [2024-04-25 03:28:53.418612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.967 [2024-04-25 03:28:53.418645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.967 qpair failed and we were unable to recover it. 00:28:18.967 [2024-04-25 03:28:53.418820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.967 [2024-04-25 03:28:53.419007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.967 [2024-04-25 03:28:53.419033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.967 qpair failed and we were unable to recover it. 00:28:18.967 [2024-04-25 03:28:53.419207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.967 [2024-04-25 03:28:53.419400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.967 [2024-04-25 03:28:53.419425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.967 qpair failed and we were unable to recover it. 
00:28:18.967 [2024-04-25 03:28:53.419622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.967 [2024-04-25 03:28:53.419790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.967 [2024-04-25 03:28:53.419816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.967 qpair failed and we were unable to recover it. 00:28:18.967 [2024-04-25 03:28:53.420091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.967 [2024-04-25 03:28:53.420279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.967 [2024-04-25 03:28:53.420304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.967 qpair failed and we were unable to recover it. 00:28:18.967 [2024-04-25 03:28:53.420490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.967 [2024-04-25 03:28:53.420767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.967 [2024-04-25 03:28:53.420793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.967 qpair failed and we were unable to recover it. 00:28:18.967 [2024-04-25 03:28:53.421071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.967 [2024-04-25 03:28:53.421232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.967 [2024-04-25 03:28:53.421257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.967 qpair failed and we were unable to recover it. 
00:28:18.967 [2024-04-25 03:28:53.421421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.967 [2024-04-25 03:28:53.421619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.967 [2024-04-25 03:28:53.421649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.967 qpair failed and we were unable to recover it. 00:28:18.967 [2024-04-25 03:28:53.421809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.967 [2024-04-25 03:28:53.422015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.967 [2024-04-25 03:28:53.422040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.967 qpair failed and we were unable to recover it. 00:28:18.967 [2024-04-25 03:28:53.422232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.967 [2024-04-25 03:28:53.422389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.967 [2024-04-25 03:28:53.422414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.967 qpair failed and we were unable to recover it. 00:28:18.967 [2024-04-25 03:28:53.422583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.967 [2024-04-25 03:28:53.422789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.967 [2024-04-25 03:28:53.422814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.967 qpair failed and we were unable to recover it. 
00:28:18.967 [2024-04-25 03:28:53.422977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.967 [2024-04-25 03:28:53.423165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.967 [2024-04-25 03:28:53.423190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.967 qpair failed and we were unable to recover it. 00:28:18.967 [2024-04-25 03:28:53.423360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.967 [2024-04-25 03:28:53.423520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.967 [2024-04-25 03:28:53.423546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.967 qpair failed and we were unable to recover it. 00:28:18.967 [2024-04-25 03:28:53.423766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.967 [2024-04-25 03:28:53.423967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.967 [2024-04-25 03:28:53.423992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.967 qpair failed and we were unable to recover it. 00:28:18.967 [2024-04-25 03:28:53.424208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.967 [2024-04-25 03:28:53.424386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.967 [2024-04-25 03:28:53.424411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:18.967 qpair failed and we were unable to recover it. 
00:28:18.967 [2024-04-25 03:28:53.424604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.967 [2024-04-25 03:28:53.424786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.240 [2024-04-25 03:28:53.424812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.240 qpair failed and we were unable to recover it. 00:28:19.240 [2024-04-25 03:28:53.424986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.240 [2024-04-25 03:28:53.425144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.240 [2024-04-25 03:28:53.425169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.240 qpair failed and we were unable to recover it. 00:28:19.240 [2024-04-25 03:28:53.425337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.240 [2024-04-25 03:28:53.425502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.240 [2024-04-25 03:28:53.425527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.240 qpair failed and we were unable to recover it. 00:28:19.240 [2024-04-25 03:28:53.425695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.240 [2024-04-25 03:28:53.425903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.240 [2024-04-25 03:28:53.425933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.240 qpair failed and we were unable to recover it. 
00:28:19.240 [2024-04-25 03:28:53.426109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.240 [2024-04-25 03:28:53.426322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.240 [2024-04-25 03:28:53.426347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.240 qpair failed and we were unable to recover it. 00:28:19.240 [2024-04-25 03:28:53.426536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.240 [2024-04-25 03:28:53.426705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.240 [2024-04-25 03:28:53.426731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.240 qpair failed and we were unable to recover it. 00:28:19.240 [2024-04-25 03:28:53.426907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.240 [2024-04-25 03:28:53.427183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.240 [2024-04-25 03:28:53.427208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.240 qpair failed and we were unable to recover it. 00:28:19.240 [2024-04-25 03:28:53.427418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.240 [2024-04-25 03:28:53.427583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.240 [2024-04-25 03:28:53.427608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.240 qpair failed and we were unable to recover it. 
00:28:19.240 [2024-04-25 03:28:53.427795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.240 [2024-04-25 03:28:53.427965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.240 [2024-04-25 03:28:53.427992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.240 qpair failed and we were unable to recover it. 00:28:19.240 [2024-04-25 03:28:53.428188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.240 [2024-04-25 03:28:53.428380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.240 [2024-04-25 03:28:53.428404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.240 qpair failed and we were unable to recover it. 00:28:19.240 [2024-04-25 03:28:53.428574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.240 [2024-04-25 03:28:53.428768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.240 [2024-04-25 03:28:53.428793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.240 qpair failed and we were unable to recover it. 00:28:19.240 [2024-04-25 03:28:53.428963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.240 [2024-04-25 03:28:53.429156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.240 [2024-04-25 03:28:53.429181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.240 qpair failed and we were unable to recover it. 
00:28:19.240 [2024-04-25 03:28:53.429394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.240 [2024-04-25 03:28:53.429554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.240 [2024-04-25 03:28:53.429579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.240 qpair failed and we were unable to recover it. 00:28:19.240 [2024-04-25 03:28:53.429803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.240 [2024-04-25 03:28:53.429974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.240 [2024-04-25 03:28:53.430003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.240 qpair failed and we were unable to recover it. 00:28:19.240 [2024-04-25 03:28:53.430165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.240 [2024-04-25 03:28:53.430328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.240 [2024-04-25 03:28:53.430353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.240 qpair failed and we were unable to recover it. 00:28:19.240 [2024-04-25 03:28:53.430512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.240 [2024-04-25 03:28:53.430685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.240 [2024-04-25 03:28:53.430711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.240 qpair failed and we were unable to recover it. 
00:28:19.240 [2024-04-25 03:28:53.430908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.240 [2024-04-25 03:28:53.431099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.240 [2024-04-25 03:28:53.431124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.240 qpair failed and we were unable to recover it. 00:28:19.240 [2024-04-25 03:28:53.431288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.240 [2024-04-25 03:28:53.431479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.240 [2024-04-25 03:28:53.431503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.240 qpair failed and we were unable to recover it. 00:28:19.240 [2024-04-25 03:28:53.431722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.240 [2024-04-25 03:28:53.431922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.240 [2024-04-25 03:28:53.431947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.240 qpair failed and we were unable to recover it. 00:28:19.240 [2024-04-25 03:28:53.432112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.240 [2024-04-25 03:28:53.432304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.240 [2024-04-25 03:28:53.432329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.240 qpair failed and we were unable to recover it. 
00:28:19.240 [2024-04-25 03:28:53.432520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.240 [2024-04-25 03:28:53.432715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.240 [2024-04-25 03:28:53.432740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.240 qpair failed and we were unable to recover it. 00:28:19.240 [2024-04-25 03:28:53.432901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.240 [2024-04-25 03:28:53.433092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.240 [2024-04-25 03:28:53.433117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.240 qpair failed and we were unable to recover it. 00:28:19.240 [2024-04-25 03:28:53.433339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.240 [2024-04-25 03:28:53.433512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.240 [2024-04-25 03:28:53.433537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.240 qpair failed and we were unable to recover it. 00:28:19.240 [2024-04-25 03:28:53.433712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.240 [2024-04-25 03:28:53.433898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.240 [2024-04-25 03:28:53.433923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.240 qpair failed and we were unable to recover it. 
00:28:19.240 [2024-04-25 03:28:53.434124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.240 [2024-04-25 03:28:53.434317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.240 [2024-04-25 03:28:53.434341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.240 qpair failed and we were unable to recover it. 00:28:19.240 [2024-04-25 03:28:53.434508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.240 [2024-04-25 03:28:53.434672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.240 [2024-04-25 03:28:53.434697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.240 qpair failed and we were unable to recover it. 00:28:19.240 [2024-04-25 03:28:53.434860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.240 [2024-04-25 03:28:53.435049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.240 [2024-04-25 03:28:53.435074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.240 qpair failed and we were unable to recover it. 00:28:19.240 [2024-04-25 03:28:53.435265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.240 [2024-04-25 03:28:53.435457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.240 [2024-04-25 03:28:53.435482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.241 qpair failed and we were unable to recover it. 
00:28:19.241 [2024-04-25 03:28:53.435696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.241 [2024-04-25 03:28:53.435941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.241 [2024-04-25 03:28:53.435966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.241 qpair failed and we were unable to recover it. 00:28:19.241 [2024-04-25 03:28:53.436183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.241 [2024-04-25 03:28:53.436379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.241 [2024-04-25 03:28:53.436404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.241 qpair failed and we were unable to recover it. 00:28:19.241 [2024-04-25 03:28:53.436569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.241 [2024-04-25 03:28:53.436777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.241 [2024-04-25 03:28:53.436803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.241 qpair failed and we were unable to recover it. 00:28:19.241 [2024-04-25 03:28:53.436967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.241 [2024-04-25 03:28:53.437130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.241 [2024-04-25 03:28:53.437155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.241 qpair failed and we were unable to recover it. 
00:28:19.241 [2024-04-25 03:28:53.437374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.241 [2024-04-25 03:28:53.437543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.241 [2024-04-25 03:28:53.437568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.241 qpair failed and we were unable to recover it. 00:28:19.241 [2024-04-25 03:28:53.437761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.241 [2024-04-25 03:28:53.437995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.241 [2024-04-25 03:28:53.438020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.241 qpair failed and we were unable to recover it. 00:28:19.241 [2024-04-25 03:28:53.438195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.241 [2024-04-25 03:28:53.438357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.241 [2024-04-25 03:28:53.438382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.241 qpair failed and we were unable to recover it. 00:28:19.241 [2024-04-25 03:28:53.438663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.241 [2024-04-25 03:28:53.438858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.241 [2024-04-25 03:28:53.438883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.241 qpair failed and we were unable to recover it. 
00:28:19.241 [2024-04-25 03:28:53.439099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.241 [2024-04-25 03:28:53.439314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.241 [2024-04-25 03:28:53.439338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.241 qpair failed and we were unable to recover it. 00:28:19.241 [2024-04-25 03:28:53.439505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.241 [2024-04-25 03:28:53.439707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.241 [2024-04-25 03:28:53.439733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.241 qpair failed and we were unable to recover it. 00:28:19.241 [2024-04-25 03:28:53.439897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.241 [2024-04-25 03:28:53.440119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.241 [2024-04-25 03:28:53.440145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.241 qpair failed and we were unable to recover it. 00:28:19.241 [2024-04-25 03:28:53.440310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.241 [2024-04-25 03:28:53.440506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.241 [2024-04-25 03:28:53.440531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.241 qpair failed and we were unable to recover it. 
00:28:19.241 [2024-04-25 03:28:53.440701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.241 [2024-04-25 03:28:53.440973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.241 [2024-04-25 03:28:53.440998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.241 qpair failed and we were unable to recover it. 00:28:19.241 [2024-04-25 03:28:53.441185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.241 [2024-04-25 03:28:53.441379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.241 [2024-04-25 03:28:53.441405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.241 qpair failed and we were unable to recover it. 00:28:19.241 [2024-04-25 03:28:53.441596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.241 [2024-04-25 03:28:53.441791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.241 [2024-04-25 03:28:53.441817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.241 qpair failed and we were unable to recover it. 00:28:19.241 [2024-04-25 03:28:53.442016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.241 [2024-04-25 03:28:53.442228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.241 [2024-04-25 03:28:53.442253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.241 qpair failed and we were unable to recover it. 
00:28:19.241 [2024-04-25 03:28:53.442418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.241 [2024-04-25 03:28:53.442618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.241 [2024-04-25 03:28:53.442650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.241 qpair failed and we were unable to recover it. 00:28:19.241 [2024-04-25 03:28:53.442853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.241 [2024-04-25 03:28:53.443043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.241 [2024-04-25 03:28:53.443069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.241 qpair failed and we were unable to recover it. 00:28:19.241 [2024-04-25 03:28:53.443230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.241 [2024-04-25 03:28:53.443410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.241 [2024-04-25 03:28:53.443435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.241 qpair failed and we were unable to recover it. 00:28:19.241 [2024-04-25 03:28:53.443633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.241 [2024-04-25 03:28:53.443819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.241 [2024-04-25 03:28:53.443844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.241 qpair failed and we were unable to recover it. 
00:28:19.241 [2024-04-25 03:28:53.444006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.241 [2024-04-25 03:28:53.444218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.241 [2024-04-25 03:28:53.444243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.241 qpair failed and we were unable to recover it. 00:28:19.241 [2024-04-25 03:28:53.444437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.241 [2024-04-25 03:28:53.444657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.241 [2024-04-25 03:28:53.444683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.241 qpair failed and we were unable to recover it. 00:28:19.241 [2024-04-25 03:28:53.444855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.241 [2024-04-25 03:28:53.445027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.241 [2024-04-25 03:28:53.445052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.241 qpair failed and we were unable to recover it. 00:28:19.241 [2024-04-25 03:28:53.445244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.241 [2024-04-25 03:28:53.445465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.241 [2024-04-25 03:28:53.445490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.241 qpair failed and we were unable to recover it. 
00:28:19.241 [2024-04-25 03:28:53.445684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.241 [2024-04-25 03:28:53.445869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.241 [2024-04-25 03:28:53.445894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.241 qpair failed and we were unable to recover it. 00:28:19.241 [2024-04-25 03:28:53.446087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.241 [2024-04-25 03:28:53.446280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.241 [2024-04-25 03:28:53.446304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.241 qpair failed and we were unable to recover it. 00:28:19.241 [2024-04-25 03:28:53.446492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.241 [2024-04-25 03:28:53.446658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.241 [2024-04-25 03:28:53.446687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.241 qpair failed and we were unable to recover it. 00:28:19.241 [2024-04-25 03:28:53.446888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.241 [2024-04-25 03:28:53.447063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.241 [2024-04-25 03:28:53.447088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.241 qpair failed and we were unable to recover it. 
00:28:19.241 [2024-04-25 03:28:53.447263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.241 [2024-04-25 03:28:53.447453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.241 [2024-04-25 03:28:53.447478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.241 qpair failed and we were unable to recover it. 00:28:19.242 [2024-04-25 03:28:53.447646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.242 [2024-04-25 03:28:53.447816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.242 [2024-04-25 03:28:53.447841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.242 qpair failed and we were unable to recover it. 00:28:19.242 [2024-04-25 03:28:53.448032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.242 [2024-04-25 03:28:53.448199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.242 [2024-04-25 03:28:53.448224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.242 qpair failed and we were unable to recover it. 00:28:19.242 [2024-04-25 03:28:53.448417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.242 [2024-04-25 03:28:53.448642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.242 [2024-04-25 03:28:53.448668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.242 qpair failed and we were unable to recover it. 
00:28:19.242 [2024-04-25 03:28:53.448862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.242 [2024-04-25 03:28:53.449053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.242 [2024-04-25 03:28:53.449079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.242 qpair failed and we were unable to recover it. 00:28:19.242 [2024-04-25 03:28:53.449251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.242 [2024-04-25 03:28:53.449442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.242 [2024-04-25 03:28:53.449467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.242 qpair failed and we were unable to recover it. 00:28:19.242 [2024-04-25 03:28:53.449664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.242 [2024-04-25 03:28:53.449831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.242 [2024-04-25 03:28:53.449857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.242 qpair failed and we were unable to recover it. 00:28:19.242 [2024-04-25 03:28:53.450077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.242 [2024-04-25 03:28:53.450246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.242 [2024-04-25 03:28:53.450271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.242 qpair failed and we were unable to recover it. 
00:28:19.242 [2024-04-25 03:28:53.450445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.242 [2024-04-25 03:28:53.450606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.242 [2024-04-25 03:28:53.450638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.242 qpair failed and we were unable to recover it. 00:28:19.242 [2024-04-25 03:28:53.450841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.242 [2024-04-25 03:28:53.451120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.242 [2024-04-25 03:28:53.451146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.242 qpair failed and we were unable to recover it. 00:28:19.242 [2024-04-25 03:28:53.451353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.242 [2024-04-25 03:28:53.451625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.242 [2024-04-25 03:28:53.451654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.242 qpair failed and we were unable to recover it. 00:28:19.242 [2024-04-25 03:28:53.451847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.242 [2024-04-25 03:28:53.452041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.242 [2024-04-25 03:28:53.452066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.242 qpair failed and we were unable to recover it. 
00:28:19.242 [2024-04-25 03:28:53.452228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.242 [2024-04-25 03:28:53.452422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.242 [2024-04-25 03:28:53.452446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.242 qpair failed and we were unable to recover it. 00:28:19.242 [2024-04-25 03:28:53.452662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.242 [2024-04-25 03:28:53.452834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.242 [2024-04-25 03:28:53.452859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.242 qpair failed and we were unable to recover it. 00:28:19.242 [2024-04-25 03:28:53.453059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.242 [2024-04-25 03:28:53.453233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.242 [2024-04-25 03:28:53.453259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.242 qpair failed and we were unable to recover it. 00:28:19.242 [2024-04-25 03:28:53.453423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.242 [2024-04-25 03:28:53.453591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.242 [2024-04-25 03:28:53.453616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.242 qpair failed and we were unable to recover it. 
00:28:19.242 [2024-04-25 03:28:53.453897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.242 [2024-04-25 03:28:53.454060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.242 [2024-04-25 03:28:53.454085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.242 qpair failed and we were unable to recover it. 00:28:19.242 [2024-04-25 03:28:53.454254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.242 [2024-04-25 03:28:53.454449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.242 [2024-04-25 03:28:53.454474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.242 qpair failed and we were unable to recover it. 00:28:19.242 [2024-04-25 03:28:53.454648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.242 [2024-04-25 03:28:53.454840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.242 [2024-04-25 03:28:53.454865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.242 qpair failed and we were unable to recover it. 00:28:19.242 [2024-04-25 03:28:53.455142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.242 [2024-04-25 03:28:53.455302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.242 [2024-04-25 03:28:53.455327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.242 qpair failed and we were unable to recover it. 
00:28:19.242 [2024-04-25 03:28:53.455487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.242 [2024-04-25 03:28:53.455686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.242 [2024-04-25 03:28:53.455712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.242 qpair failed and we were unable to recover it. 00:28:19.242 [2024-04-25 03:28:53.455914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.242 [2024-04-25 03:28:53.456116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.242 [2024-04-25 03:28:53.456141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.242 qpair failed and we were unable to recover it. 00:28:19.242 [2024-04-25 03:28:53.456334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.242 [2024-04-25 03:28:53.456494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.242 [2024-04-25 03:28:53.456519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.242 qpair failed and we were unable to recover it. 00:28:19.242 [2024-04-25 03:28:53.456689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.242 [2024-04-25 03:28:53.456852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.242 [2024-04-25 03:28:53.456877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.242 qpair failed and we were unable to recover it. 
00:28:19.242 [2024-04-25 03:28:53.457099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.242 [2024-04-25 03:28:53.457290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.242 [2024-04-25 03:28:53.457316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.242 qpair failed and we were unable to recover it. 00:28:19.242 [2024-04-25 03:28:53.457475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.242 [2024-04-25 03:28:53.457666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.242 [2024-04-25 03:28:53.457692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.242 qpair failed and we were unable to recover it. 00:28:19.242 [2024-04-25 03:28:53.457856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.242 [2024-04-25 03:28:53.458054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.242 [2024-04-25 03:28:53.458079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.242 qpair failed and we were unable to recover it. 00:28:19.242 [2024-04-25 03:28:53.458266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.242 [2024-04-25 03:28:53.458488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.242 [2024-04-25 03:28:53.458513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.242 qpair failed and we were unable to recover it. 
00:28:19.242 [2024-04-25 03:28:53.458682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.242 [2024-04-25 03:28:53.458955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.242 [2024-04-25 03:28:53.458981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.242 qpair failed and we were unable to recover it. 00:28:19.242 [2024-04-25 03:28:53.459257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.242 [2024-04-25 03:28:53.459419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.242 [2024-04-25 03:28:53.459445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.242 qpair failed and we were unable to recover it. 00:28:19.243 [2024-04-25 03:28:53.459665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.243 [2024-04-25 03:28:53.459841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.243 [2024-04-25 03:28:53.459866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.243 qpair failed and we were unable to recover it. 00:28:19.243 [2024-04-25 03:28:53.460039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.243 [2024-04-25 03:28:53.460229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.243 [2024-04-25 03:28:53.460254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.243 qpair failed and we were unable to recover it. 
00:28:19.243 [2024-04-25 03:28:53.460442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.243 [2024-04-25 03:28:53.460614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.243 [2024-04-25 03:28:53.460649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.243 qpair failed and we were unable to recover it. 00:28:19.243 [2024-04-25 03:28:53.460874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.243 [2024-04-25 03:28:53.461089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.243 [2024-04-25 03:28:53.461114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.243 qpair failed and we were unable to recover it. 00:28:19.243 [2024-04-25 03:28:53.461275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.243 [2024-04-25 03:28:53.461469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.243 [2024-04-25 03:28:53.461493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.243 qpair failed and we were unable to recover it. 00:28:19.243 [2024-04-25 03:28:53.461653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.243 [2024-04-25 03:28:53.461817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.243 [2024-04-25 03:28:53.461842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.243 qpair failed and we were unable to recover it. 
00:28:19.243 [2024-04-25 03:28:53.462001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.243 [2024-04-25 03:28:53.462191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.243 [2024-04-25 03:28:53.462216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.243 qpair failed and we were unable to recover it. 00:28:19.243 [2024-04-25 03:28:53.462386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.243 [2024-04-25 03:28:53.462602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.243 [2024-04-25 03:28:53.462632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.243 qpair failed and we were unable to recover it. 00:28:19.243 [2024-04-25 03:28:53.462831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.243 [2024-04-25 03:28:53.463029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.243 [2024-04-25 03:28:53.463053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.243 qpair failed and we were unable to recover it. 00:28:19.243 [2024-04-25 03:28:53.463273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.243 [2024-04-25 03:28:53.463439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.243 [2024-04-25 03:28:53.463464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.243 qpair failed and we were unable to recover it. 
00:28:19.243 [2024-04-25 03:28:53.463680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.243 [2024-04-25 03:28:53.463873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.243 [2024-04-25 03:28:53.463898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.243 qpair failed and we were unable to recover it. 00:28:19.243 [2024-04-25 03:28:53.464084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.243 [2024-04-25 03:28:53.464305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.243 [2024-04-25 03:28:53.464330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.243 qpair failed and we were unable to recover it. 00:28:19.243 [2024-04-25 03:28:53.464531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.243 [2024-04-25 03:28:53.464698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.243 [2024-04-25 03:28:53.464723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.243 qpair failed and we were unable to recover it. 00:28:19.243 [2024-04-25 03:28:53.464920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.243 [2024-04-25 03:28:53.465104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.243 [2024-04-25 03:28:53.465129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.243 qpair failed and we were unable to recover it. 
00:28:19.243 [2024-04-25 03:28:53.465324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.243 [2024-04-25 03:28:53.465512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.243 [2024-04-25 03:28:53.465539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.243 qpair failed and we were unable to recover it. 00:28:19.243 [2024-04-25 03:28:53.465727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.243 [2024-04-25 03:28:53.465918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.243 [2024-04-25 03:28:53.465943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.243 qpair failed and we were unable to recover it. 00:28:19.243 [2024-04-25 03:28:53.466141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.243 [2024-04-25 03:28:53.466339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.243 [2024-04-25 03:28:53.466365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.243 qpair failed and we were unable to recover it. 00:28:19.243 [2024-04-25 03:28:53.466533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.243 [2024-04-25 03:28:53.466726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.243 [2024-04-25 03:28:53.466751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.243 qpair failed and we were unable to recover it. 
00:28:19.243 [2024-04-25 03:28:53.466925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.243 [2024-04-25 03:28:53.467095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.243 [2024-04-25 03:28:53.467120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.243 qpair failed and we were unable to recover it. 00:28:19.243 [2024-04-25 03:28:53.467395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.243 [2024-04-25 03:28:53.467580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.243 [2024-04-25 03:28:53.467609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.243 qpair failed and we were unable to recover it. 00:28:19.243 [2024-04-25 03:28:53.467820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.243 [2024-04-25 03:28:53.467991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.243 [2024-04-25 03:28:53.468016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.243 qpair failed and we were unable to recover it. 00:28:19.243 [2024-04-25 03:28:53.468202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.243 [2024-04-25 03:28:53.468359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.243 [2024-04-25 03:28:53.468384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.243 qpair failed and we were unable to recover it. 
00:28:19.243 [2024-04-25 03:28:53.468582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.243 [2024-04-25 03:28:53.468750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.243 [2024-04-25 03:28:53.468776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.243 qpair failed and we were unable to recover it. 00:28:19.243 [2024-04-25 03:28:53.468944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.243 [2024-04-25 03:28:53.469107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.243 [2024-04-25 03:28:53.469133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.243 qpair failed and we were unable to recover it. 00:28:19.243 [2024-04-25 03:28:53.469326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.243 [2024-04-25 03:28:53.469504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.243 [2024-04-25 03:28:53.469530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.243 qpair failed and we were unable to recover it. 00:28:19.243 [2024-04-25 03:28:53.469728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.243 [2024-04-25 03:28:53.469891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.243 [2024-04-25 03:28:53.469916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.243 qpair failed and we were unable to recover it. 
00:28:19.243 [2024-04-25 03:28:53.470088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.243 [2024-04-25 03:28:53.470258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.243 [2024-04-25 03:28:53.470283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.243 qpair failed and we were unable to recover it. 00:28:19.243 [2024-04-25 03:28:53.470499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.243 [2024-04-25 03:28:53.470669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.243 [2024-04-25 03:28:53.470694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.243 qpair failed and we were unable to recover it. 00:28:19.243 [2024-04-25 03:28:53.470895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.243 [2024-04-25 03:28:53.471090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.243 [2024-04-25 03:28:53.471115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.243 qpair failed and we were unable to recover it. 00:28:19.244 [2024-04-25 03:28:53.471302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.244 [2024-04-25 03:28:53.471492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.244 [2024-04-25 03:28:53.471517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.244 qpair failed and we were unable to recover it. 
00:28:19.244 [2024-04-25 03:28:53.471715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.244 [2024-04-25 03:28:53.471910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.244 [2024-04-25 03:28:53.471935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.244 qpair failed and we were unable to recover it. 00:28:19.244 [2024-04-25 03:28:53.472133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.244 [2024-04-25 03:28:53.472324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.244 [2024-04-25 03:28:53.472349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.244 qpair failed and we were unable to recover it. 00:28:19.244 [2024-04-25 03:28:53.472545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.244 [2024-04-25 03:28:53.472711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.244 [2024-04-25 03:28:53.472736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.244 qpair failed and we were unable to recover it. 00:28:19.244 [2024-04-25 03:28:53.472934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.244 [2024-04-25 03:28:53.473101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.244 [2024-04-25 03:28:53.473127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.244 qpair failed and we were unable to recover it. 
00:28:19.244 [2024-04-25 03:28:53.473325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.244 [2024-04-25 03:28:53.473490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.244 [2024-04-25 03:28:53.473515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.244 qpair failed and we were unable to recover it. 00:28:19.244 [2024-04-25 03:28:53.473733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.244 [2024-04-25 03:28:53.473892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.244 [2024-04-25 03:28:53.473917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.244 qpair failed and we were unable to recover it. 00:28:19.244 [2024-04-25 03:28:53.474080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.244 [2024-04-25 03:28:53.474242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.244 [2024-04-25 03:28:53.474268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.244 qpair failed and we were unable to recover it. 00:28:19.244 [2024-04-25 03:28:53.474435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.244 [2024-04-25 03:28:53.474710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.244 [2024-04-25 03:28:53.474736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.244 qpair failed and we were unable to recover it. 
00:28:19.244 [2024-04-25 03:28:53.474929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.244 [2024-04-25 03:28:53.475099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.244 [2024-04-25 03:28:53.475124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.244 qpair failed and we were unable to recover it. 00:28:19.244 [2024-04-25 03:28:53.475286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.244 [2024-04-25 03:28:53.475458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.244 [2024-04-25 03:28:53.475484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.244 qpair failed and we were unable to recover it. 00:28:19.244 [2024-04-25 03:28:53.475685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.244 [2024-04-25 03:28:53.475852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.244 [2024-04-25 03:28:53.475878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.244 qpair failed and we were unable to recover it. 00:28:19.244 [2024-04-25 03:28:53.476066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.244 [2024-04-25 03:28:53.476255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.244 [2024-04-25 03:28:53.476280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.244 qpair failed and we were unable to recover it. 
00:28:19.244 [2024-04-25 03:28:53.476478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.244 [2024-04-25 03:28:53.476649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.244 [2024-04-25 03:28:53.476675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.244 qpair failed and we were unable to recover it. 00:28:19.244 [2024-04-25 03:28:53.476838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.244 [2024-04-25 03:28:53.477031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.245 [2024-04-25 03:28:53.477056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.245 qpair failed and we were unable to recover it. 00:28:19.245 [2024-04-25 03:28:53.477242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.245 [2024-04-25 03:28:53.477414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.245 [2024-04-25 03:28:53.477439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.245 qpair failed and we were unable to recover it. 00:28:19.245 [2024-04-25 03:28:53.477715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.245 [2024-04-25 03:28:53.477909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.245 [2024-04-25 03:28:53.477934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.245 qpair failed and we were unable to recover it. 
00:28:19.245 [2024-04-25 03:28:53.478101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.245 [2024-04-25 03:28:53.478288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.245 [2024-04-25 03:28:53.478313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.245 qpair failed and we were unable to recover it. 00:28:19.245 [2024-04-25 03:28:53.478496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.245 [2024-04-25 03:28:53.478686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.245 [2024-04-25 03:28:53.478712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.245 qpair failed and we were unable to recover it. 00:28:19.245 [2024-04-25 03:28:53.478905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.245 [2024-04-25 03:28:53.479077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.245 [2024-04-25 03:28:53.479101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.245 qpair failed and we were unable to recover it. 00:28:19.245 [2024-04-25 03:28:53.479289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.245 [2024-04-25 03:28:53.479450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.245 [2024-04-25 03:28:53.479475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.245 qpair failed and we were unable to recover it. 
00:28:19.245 [2024-04-25 03:28:53.479751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.245 [2024-04-25 03:28:53.479953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.245 [2024-04-25 03:28:53.479978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.245 qpair failed and we were unable to recover it. 00:28:19.245 [2024-04-25 03:28:53.480140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.245 [2024-04-25 03:28:53.480308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.245 [2024-04-25 03:28:53.480333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.245 qpair failed and we were unable to recover it. 00:28:19.245 [2024-04-25 03:28:53.480492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.245 [2024-04-25 03:28:53.480707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.245 [2024-04-25 03:28:53.480732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.245 qpair failed and we were unable to recover it. 00:28:19.245 [2024-04-25 03:28:53.480925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.245 [2024-04-25 03:28:53.481089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.245 [2024-04-25 03:28:53.481114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.245 qpair failed and we were unable to recover it. 
00:28:19.245 [2024-04-25 03:28:53.481306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.245 [2024-04-25 03:28:53.481468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.245 [2024-04-25 03:28:53.481493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.245 qpair failed and we were unable to recover it. 00:28:19.245 [2024-04-25 03:28:53.481680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.246 [2024-04-25 03:28:53.481836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.246 [2024-04-25 03:28:53.481861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.246 qpair failed and we were unable to recover it. 00:28:19.246 [2024-04-25 03:28:53.482032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.246 [2024-04-25 03:28:53.482200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.246 [2024-04-25 03:28:53.482225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.246 qpair failed and we were unable to recover it. 00:28:19.246 [2024-04-25 03:28:53.482403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.246 [2024-04-25 03:28:53.482675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.246 [2024-04-25 03:28:53.482701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.246 qpair failed and we were unable to recover it. 
00:28:19.246 [2024-04-25 03:28:53.482922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.246 [2024-04-25 03:28:53.483108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.246 [2024-04-25 03:28:53.483132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.246 qpair failed and we were unable to recover it. 00:28:19.246 [2024-04-25 03:28:53.483294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.246 [2024-04-25 03:28:53.483459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.246 [2024-04-25 03:28:53.483485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.246 qpair failed and we were unable to recover it. 00:28:19.246 [2024-04-25 03:28:53.483673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.246 [2024-04-25 03:28:53.483852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.246 [2024-04-25 03:28:53.483882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.246 qpair failed and we were unable to recover it. 00:28:19.246 [2024-04-25 03:28:53.484084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.246 [2024-04-25 03:28:53.484245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.246 [2024-04-25 03:28:53.484270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.246 qpair failed and we were unable to recover it. 
00:28:19.246 [2024-04-25 03:28:53.484442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.246 [2024-04-25 03:28:53.484606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.246 [2024-04-25 03:28:53.484636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.246 qpair failed and we were unable to recover it. 00:28:19.246 [2024-04-25 03:28:53.484830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.246 [2024-04-25 03:28:53.485027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.246 [2024-04-25 03:28:53.485052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.246 qpair failed and we were unable to recover it. 00:28:19.246 [2024-04-25 03:28:53.485257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.246 [2024-04-25 03:28:53.485481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.246 [2024-04-25 03:28:53.485506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.246 qpair failed and we were unable to recover it. 00:28:19.246 [2024-04-25 03:28:53.485698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.246 [2024-04-25 03:28:53.485887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.246 [2024-04-25 03:28:53.485912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.246 qpair failed and we were unable to recover it. 
00:28:19.246 [2024-04-25 03:28:53.486077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.246 [2024-04-25 03:28:53.486275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.246 [2024-04-25 03:28:53.486300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.246 qpair failed and we were unable to recover it. 00:28:19.246 [2024-04-25 03:28:53.486518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.246 [2024-04-25 03:28:53.486710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.246 [2024-04-25 03:28:53.486736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.246 qpair failed and we were unable to recover it. 00:28:19.246 [2024-04-25 03:28:53.486927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.246 [2024-04-25 03:28:53.487098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.246 [2024-04-25 03:28:53.487123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.246 qpair failed and we were unable to recover it. 00:28:19.246 [2024-04-25 03:28:53.487294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.246 [2024-04-25 03:28:53.487487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.246 [2024-04-25 03:28:53.487513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.246 qpair failed and we were unable to recover it. 
00:28:19.246 [2024-04-25 03:28:53.487712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.246 [2024-04-25 03:28:53.487873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.246 [2024-04-25 03:28:53.487903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.246 qpair failed and we were unable to recover it. 00:28:19.246 [2024-04-25 03:28:53.488102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.246 [2024-04-25 03:28:53.488320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.246 [2024-04-25 03:28:53.488345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.246 qpair failed and we were unable to recover it. 00:28:19.246 [2024-04-25 03:28:53.488508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.246 [2024-04-25 03:28:53.488683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.246 [2024-04-25 03:28:53.488710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.246 qpair failed and we were unable to recover it. 00:28:19.246 [2024-04-25 03:28:53.488903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.246 [2024-04-25 03:28:53.489121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.246 [2024-04-25 03:28:53.489146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.246 qpair failed and we were unable to recover it. 
00:28:19.246 [2024-04-25 03:28:53.489332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.246 [2024-04-25 03:28:53.489491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.246 [2024-04-25 03:28:53.489517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.246 qpair failed and we were unable to recover it.
00:28:19.246 [2024-04-25 03:28:53.489721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.246 [2024-04-25 03:28:53.489906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.246 [2024-04-25 03:28:53.489931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.246 qpair failed and we were unable to recover it.
00:28:19.246 [2024-04-25 03:28:53.490131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.246 [2024-04-25 03:28:53.490320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.246 [2024-04-25 03:28:53.490345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.246 qpair failed and we were unable to recover it.
00:28:19.246 [2024-04-25 03:28:53.490505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.246 [2024-04-25 03:28:53.490671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.246 [2024-04-25 03:28:53.490696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.246 qpair failed and we were unable to recover it.
00:28:19.246 [2024-04-25 03:28:53.490892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.246 [2024-04-25 03:28:53.491089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.246 [2024-04-25 03:28:53.491114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.246 qpair failed and we were unable to recover it.
00:28:19.246 [2024-04-25 03:28:53.491312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.246 [2024-04-25 03:28:53.491474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.246 [2024-04-25 03:28:53.491499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.246 qpair failed and we were unable to recover it.
00:28:19.246 [2024-04-25 03:28:53.491696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.246 [2024-04-25 03:28:53.491890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.246 [2024-04-25 03:28:53.491916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.246 qpair failed and we were unable to recover it.
00:28:19.246 [2024-04-25 03:28:53.492118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.246 [2024-04-25 03:28:53.492338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.246 [2024-04-25 03:28:53.492363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.246 qpair failed and we were unable to recover it.
00:28:19.246 [2024-04-25 03:28:53.492519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.246 [2024-04-25 03:28:53.492713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.246 [2024-04-25 03:28:53.492738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.246 qpair failed and we were unable to recover it.
00:28:19.246 [2024-04-25 03:28:53.492939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.246 [2024-04-25 03:28:53.493097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.246 [2024-04-25 03:28:53.493123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.246 qpair failed and we were unable to recover it.
00:28:19.246 [2024-04-25 03:28:53.493336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.246 [2024-04-25 03:28:53.493524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.247 [2024-04-25 03:28:53.493549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.247 qpair failed and we were unable to recover it.
00:28:19.247 [2024-04-25 03:28:53.493721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.247 [2024-04-25 03:28:53.493906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.247 [2024-04-25 03:28:53.493931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.247 qpair failed and we were unable to recover it.
00:28:19.247 [2024-04-25 03:28:53.494130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.247 [2024-04-25 03:28:53.494291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.247 [2024-04-25 03:28:53.494317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.247 qpair failed and we were unable to recover it.
00:28:19.247 [2024-04-25 03:28:53.494480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.247 [2024-04-25 03:28:53.494677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.247 [2024-04-25 03:28:53.494704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.247 qpair failed and we were unable to recover it.
00:28:19.247 [2024-04-25 03:28:53.494875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.247 [2024-04-25 03:28:53.495093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.247 [2024-04-25 03:28:53.495119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.247 qpair failed and we were unable to recover it.
00:28:19.247 [2024-04-25 03:28:53.495313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.247 [2024-04-25 03:28:53.495534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.247 [2024-04-25 03:28:53.495559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.247 qpair failed and we were unable to recover it.
00:28:19.247 [2024-04-25 03:28:53.495791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.247 [2024-04-25 03:28:53.496009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.247 [2024-04-25 03:28:53.496034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.247 qpair failed and we were unable to recover it.
00:28:19.247 [2024-04-25 03:28:53.496226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.247 [2024-04-25 03:28:53.496421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.247 [2024-04-25 03:28:53.496447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.247 qpair failed and we were unable to recover it.
00:28:19.247 [2024-04-25 03:28:53.496615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.247 [2024-04-25 03:28:53.497906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.247 [2024-04-25 03:28:53.497939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.247 qpair failed and we were unable to recover it.
00:28:19.247 [2024-04-25 03:28:53.498167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.247 [2024-04-25 03:28:53.498382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.247 [2024-04-25 03:28:53.498412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.247 qpair failed and we were unable to recover it.
00:28:19.247 [2024-04-25 03:28:53.498625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.247 [2024-04-25 03:28:53.499426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.247 [2024-04-25 03:28:53.499457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.247 qpair failed and we were unable to recover it.
00:28:19.247 [2024-04-25 03:28:53.499682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.247 [2024-04-25 03:28:53.499852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.247 [2024-04-25 03:28:53.499878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.247 qpair failed and we were unable to recover it.
00:28:19.247 [2024-04-25 03:28:53.500054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.247 [2024-04-25 03:28:53.500268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.247 [2024-04-25 03:28:53.500293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.247 qpair failed and we were unable to recover it.
00:28:19.247 [2024-04-25 03:28:53.500478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.247 [2024-04-25 03:28:53.500657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.247 [2024-04-25 03:28:53.500684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.247 qpair failed and we were unable to recover it.
00:28:19.247 [2024-04-25 03:28:53.500906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.247 [2024-04-25 03:28:53.501069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.247 [2024-04-25 03:28:53.501094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.247 qpair failed and we were unable to recover it.
00:28:19.247 [2024-04-25 03:28:53.501260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.247 [2024-04-25 03:28:53.501451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.247 [2024-04-25 03:28:53.501476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.247 qpair failed and we were unable to recover it.
00:28:19.247 [2024-04-25 03:28:53.502685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.247 [2024-04-25 03:28:53.502896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.247 [2024-04-25 03:28:53.502926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.247 qpair failed and we were unable to recover it.
00:28:19.247 [2024-04-25 03:28:53.503095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.247 [2024-04-25 03:28:53.504050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.247 [2024-04-25 03:28:53.504081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.247 qpair failed and we were unable to recover it.
00:28:19.247 [2024-04-25 03:28:53.504317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.247 [2024-04-25 03:28:53.504486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.247 [2024-04-25 03:28:53.504513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.247 qpair failed and we were unable to recover it.
00:28:19.247 [2024-04-25 03:28:53.504693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.247 [2024-04-25 03:28:53.504891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.247 [2024-04-25 03:28:53.504917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.247 qpair failed and we were unable to recover it.
00:28:19.247 [2024-04-25 03:28:53.505118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.247 [2024-04-25 03:28:53.505281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.247 [2024-04-25 03:28:53.505306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.247 qpair failed and we were unable to recover it.
00:28:19.247 [2024-04-25 03:28:53.505500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.247 [2024-04-25 03:28:53.505695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.247 [2024-04-25 03:28:53.505721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.247 qpair failed and we were unable to recover it.
00:28:19.247 [2024-04-25 03:28:53.505908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.247 [2024-04-25 03:28:53.506122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.247 [2024-04-25 03:28:53.506147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.247 qpair failed and we were unable to recover it.
00:28:19.247 [2024-04-25 03:28:53.507293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.247 [2024-04-25 03:28:53.507511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.247 [2024-04-25 03:28:53.507538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.247 qpair failed and we were unable to recover it.
00:28:19.247 [2024-04-25 03:28:53.507736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.247 [2024-04-25 03:28:53.507908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.247 [2024-04-25 03:28:53.507933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.247 qpair failed and we were unable to recover it.
00:28:19.247 [2024-04-25 03:28:53.508109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.247 [2024-04-25 03:28:53.508287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.247 [2024-04-25 03:28:53.508312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.247 qpair failed and we were unable to recover it.
00:28:19.247 [2024-04-25 03:28:53.508588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.247 [2024-04-25 03:28:53.508762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.247 [2024-04-25 03:28:53.508787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.247 qpair failed and we were unable to recover it.
00:28:19.247 [2024-04-25 03:28:53.508990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.247 [2024-04-25 03:28:53.509152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.247 [2024-04-25 03:28:53.509183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.247 qpair failed and we were unable to recover it.
00:28:19.247 [2024-04-25 03:28:53.509379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.247 [2024-04-25 03:28:53.509574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.247 [2024-04-25 03:28:53.509599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.247 qpair failed and we were unable to recover it.
00:28:19.247 [2024-04-25 03:28:53.509800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.247 [2024-04-25 03:28:53.509967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.248 [2024-04-25 03:28:53.509992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.248 qpair failed and we were unable to recover it.
00:28:19.248 [2024-04-25 03:28:53.510210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.248 [2024-04-25 03:28:53.510369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.248 [2024-04-25 03:28:53.510394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.248 qpair failed and we were unable to recover it.
00:28:19.248 [2024-04-25 03:28:53.510588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.248 [2024-04-25 03:28:53.510752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.248 [2024-04-25 03:28:53.510778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.248 qpair failed and we were unable to recover it.
00:28:19.248 [2024-04-25 03:28:53.510970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.248 [2024-04-25 03:28:53.511137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.248 [2024-04-25 03:28:53.511162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.248 qpair failed and we were unable to recover it.
00:28:19.248 [2024-04-25 03:28:53.511327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.248 [2024-04-25 03:28:53.511500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.248 [2024-04-25 03:28:53.511525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.248 qpair failed and we were unable to recover it.
00:28:19.248 [2024-04-25 03:28:53.511702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.248 [2024-04-25 03:28:53.511974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.248 [2024-04-25 03:28:53.511999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.248 qpair failed and we were unable to recover it.
00:28:19.248 [2024-04-25 03:28:53.512194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.248 [2024-04-25 03:28:53.512426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.248 [2024-04-25 03:28:53.512452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.248 qpair failed and we were unable to recover it.
00:28:19.248 [2024-04-25 03:28:53.512646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.248 [2024-04-25 03:28:53.512836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.248 [2024-04-25 03:28:53.512862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.248 qpair failed and we were unable to recover it.
00:28:19.248 [2024-04-25 03:28:53.513038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.248 [2024-04-25 03:28:53.513207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.248 [2024-04-25 03:28:53.513232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.248 qpair failed and we were unable to recover it.
00:28:19.248 [2024-04-25 03:28:53.513431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.248 [2024-04-25 03:28:53.513590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.248 [2024-04-25 03:28:53.513616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.248 qpair failed and we were unable to recover it.
00:28:19.248 [2024-04-25 03:28:53.513822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.248 [2024-04-25 03:28:53.514014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.248 [2024-04-25 03:28:53.514039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.248 qpair failed and we were unable to recover it.
00:28:19.248 [2024-04-25 03:28:53.514219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.248 [2024-04-25 03:28:53.514410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.248 [2024-04-25 03:28:53.514435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.248 qpair failed and we were unable to recover it.
00:28:19.248 [2024-04-25 03:28:53.514640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.248 [2024-04-25 03:28:53.514815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.248 [2024-04-25 03:28:53.514840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.248 qpair failed and we were unable to recover it.
00:28:19.248 [2024-04-25 03:28:53.515062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.248 [2024-04-25 03:28:53.515241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.248 [2024-04-25 03:28:53.515266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.248 qpair failed and we were unable to recover it.
00:28:19.248 [2024-04-25 03:28:53.515433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.248 [2024-04-25 03:28:53.515599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.248 [2024-04-25 03:28:53.515640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.248 qpair failed and we were unable to recover it.
00:28:19.248 [2024-04-25 03:28:53.515813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.248 [2024-04-25 03:28:53.515978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.248 [2024-04-25 03:28:53.516004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.248 qpair failed and we were unable to recover it.
00:28:19.248 [2024-04-25 03:28:53.516199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.248 [2024-04-25 03:28:53.516382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.248 [2024-04-25 03:28:53.516407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.248 qpair failed and we were unable to recover it.
00:28:19.248 [2024-04-25 03:28:53.516616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.248 [2024-04-25 03:28:53.516798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.248 [2024-04-25 03:28:53.516823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.248 qpair failed and we were unable to recover it.
00:28:19.248 [2024-04-25 03:28:53.517030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.248 [2024-04-25 03:28:53.517219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.248 [2024-04-25 03:28:53.517244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.248 qpair failed and we were unable to recover it.
00:28:19.248 [2024-04-25 03:28:53.517450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.248 [2024-04-25 03:28:53.517633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.248 [2024-04-25 03:28:53.517659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.248 qpair failed and we were unable to recover it.
00:28:19.248 [2024-04-25 03:28:53.517828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.248 [2024-04-25 03:28:53.518025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.248 [2024-04-25 03:28:53.518051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.248 qpair failed and we were unable to recover it.
00:28:19.248 [2024-04-25 03:28:53.518227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.248 [2024-04-25 03:28:53.518397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.248 [2024-04-25 03:28:53.518424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.248 qpair failed and we were unable to recover it.
00:28:19.248 [2024-04-25 03:28:53.518609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.248 [2024-04-25 03:28:53.518777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.248 [2024-04-25 03:28:53.518803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.248 qpair failed and we were unable to recover it.
00:28:19.248 [2024-04-25 03:28:53.518970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.248 [2024-04-25 03:28:53.519132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.248 [2024-04-25 03:28:53.519158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.248 qpair failed and we were unable to recover it.
00:28:19.248 [2024-04-25 03:28:53.519326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.248 [2024-04-25 03:28:53.519493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.248 [2024-04-25 03:28:53.519519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.248 qpair failed and we were unable to recover it.
00:28:19.248 [2024-04-25 03:28:53.519714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.248 [2024-04-25 03:28:53.519881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.248 [2024-04-25 03:28:53.519906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.248 qpair failed and we were unable to recover it.
00:28:19.248 [2024-04-25 03:28:53.520076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.248 [2024-04-25 03:28:53.520274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.248 [2024-04-25 03:28:53.520300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.248 qpair failed and we were unable to recover it.
00:28:19.248 [2024-04-25 03:28:53.520494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.248 [2024-04-25 03:28:53.520718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.248 [2024-04-25 03:28:53.520744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.248 qpair failed and we were unable to recover it.
00:28:19.248 [2024-04-25 03:28:53.521019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.248 [2024-04-25 03:28:53.521238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.248 [2024-04-25 03:28:53.521263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.248 qpair failed and we were unable to recover it.
00:28:19.248 [2024-04-25 03:28:53.521457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.248 [2024-04-25 03:28:53.521626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.248 [2024-04-25 03:28:53.521727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.249 qpair failed and we were unable to recover it.
00:28:19.249 [2024-04-25 03:28:53.522005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.249 [2024-04-25 03:28:53.522169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.249 [2024-04-25 03:28:53.522194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.249 qpair failed and we were unable to recover it.
00:28:19.249 [2024-04-25 03:28:53.522354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.249 [2024-04-25 03:28:53.522626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.249 [2024-04-25 03:28:53.522657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.249 qpair failed and we were unable to recover it.
00:28:19.249 [2024-04-25 03:28:53.522829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.249 [2024-04-25 03:28:53.523050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.249 [2024-04-25 03:28:53.523075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.249 qpair failed and we were unable to recover it.
00:28:19.249 [2024-04-25 03:28:53.523255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.249 [2024-04-25 03:28:53.523451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.249 [2024-04-25 03:28:53.523477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.249 qpair failed and we were unable to recover it. 00:28:19.249 [2024-04-25 03:28:53.523678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.249 [2024-04-25 03:28:53.523848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.249 [2024-04-25 03:28:53.523874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.249 qpair failed and we were unable to recover it. 00:28:19.249 [2024-04-25 03:28:53.524044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.249 [2024-04-25 03:28:53.524229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.249 [2024-04-25 03:28:53.524255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.249 qpair failed and we were unable to recover it. 00:28:19.249 [2024-04-25 03:28:53.524446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.249 [2024-04-25 03:28:53.524620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.249 [2024-04-25 03:28:53.524650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.249 qpair failed and we were unable to recover it. 
00:28:19.249 [2024-04-25 03:28:53.524846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.249 [2024-04-25 03:28:53.525122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.249 [2024-04-25 03:28:53.525149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.249 qpair failed and we were unable to recover it. 00:28:19.249 [2024-04-25 03:28:53.525346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.249 [2024-04-25 03:28:53.525570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.249 [2024-04-25 03:28:53.525595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.249 qpair failed and we were unable to recover it. 00:28:19.249 [2024-04-25 03:28:53.525773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.249 [2024-04-25 03:28:53.525973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.249 [2024-04-25 03:28:53.525998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.249 qpair failed and we were unable to recover it. 00:28:19.249 [2024-04-25 03:28:53.526197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.249 [2024-04-25 03:28:53.526417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.249 [2024-04-25 03:28:53.526443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.249 qpair failed and we were unable to recover it. 
00:28:19.249 [2024-04-25 03:28:53.526643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.249 [2024-04-25 03:28:53.526840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.249 [2024-04-25 03:28:53.526865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.249 qpair failed and we were unable to recover it. 00:28:19.249 [2024-04-25 03:28:53.527035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.249 [2024-04-25 03:28:53.527199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.249 [2024-04-25 03:28:53.527225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.249 qpair failed and we were unable to recover it. 00:28:19.249 [2024-04-25 03:28:53.527387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.249 [2024-04-25 03:28:53.527561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.249 [2024-04-25 03:28:53.527586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.249 qpair failed and we were unable to recover it. 00:28:19.249 [2024-04-25 03:28:53.527772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.249 [2024-04-25 03:28:53.527940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.249 [2024-04-25 03:28:53.527964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.249 qpair failed and we were unable to recover it. 
00:28:19.249 [2024-04-25 03:28:53.528169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.249 [2024-04-25 03:28:53.528327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.249 [2024-04-25 03:28:53.528352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.249 qpair failed and we were unable to recover it. 00:28:19.249 [2024-04-25 03:28:53.528519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.249 [2024-04-25 03:28:53.528714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.249 [2024-04-25 03:28:53.528740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.249 qpair failed and we were unable to recover it. 00:28:19.249 [2024-04-25 03:28:53.528959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.249 [2024-04-25 03:28:53.529130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.249 [2024-04-25 03:28:53.529156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.249 qpair failed and we were unable to recover it. 00:28:19.249 [2024-04-25 03:28:53.529320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.249 [2024-04-25 03:28:53.529517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.249 [2024-04-25 03:28:53.529542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.249 qpair failed and we were unable to recover it. 
00:28:19.249 [2024-04-25 03:28:53.529729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.249 [2024-04-25 03:28:53.529922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.249 [2024-04-25 03:28:53.529953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.249 qpair failed and we were unable to recover it. 00:28:19.249 [2024-04-25 03:28:53.530119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.249 [2024-04-25 03:28:53.530287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.249 [2024-04-25 03:28:53.530312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.249 qpair failed and we were unable to recover it. 00:28:19.249 [2024-04-25 03:28:53.530546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.249 [2024-04-25 03:28:53.530775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.249 [2024-04-25 03:28:53.530801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.249 qpair failed and we were unable to recover it. 00:28:19.249 [2024-04-25 03:28:53.530978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.249 [2024-04-25 03:28:53.531149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.249 [2024-04-25 03:28:53.531174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.249 qpair failed and we were unable to recover it. 
00:28:19.249 [2024-04-25 03:28:53.531396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.249 [2024-04-25 03:28:53.531565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.249 [2024-04-25 03:28:53.531590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.249 qpair failed and we were unable to recover it. 00:28:19.249 [2024-04-25 03:28:53.531759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.249 [2024-04-25 03:28:53.531933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.249 [2024-04-25 03:28:53.531958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.249 qpair failed and we were unable to recover it. 00:28:19.249 [2024-04-25 03:28:53.532120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.249 [2024-04-25 03:28:53.532322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.249 [2024-04-25 03:28:53.532347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.249 qpair failed and we were unable to recover it. 00:28:19.249 [2024-04-25 03:28:53.532512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.249 [2024-04-25 03:28:53.532683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.249 [2024-04-25 03:28:53.532708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.249 qpair failed and we were unable to recover it. 
00:28:19.249 [2024-04-25 03:28:53.532880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.249 [2024-04-25 03:28:53.533079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.249 [2024-04-25 03:28:53.533104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.249 qpair failed and we were unable to recover it. 00:28:19.249 [2024-04-25 03:28:53.533278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.249 [2024-04-25 03:28:53.533439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.249 [2024-04-25 03:28:53.533466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.249 qpair failed and we were unable to recover it. 00:28:19.250 [2024-04-25 03:28:53.533688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.250 [2024-04-25 03:28:53.533874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.250 [2024-04-25 03:28:53.533900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.250 qpair failed and we were unable to recover it. 00:28:19.250 [2024-04-25 03:28:53.534107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.250 [2024-04-25 03:28:53.534278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.250 [2024-04-25 03:28:53.534303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.250 qpair failed and we were unable to recover it. 
00:28:19.250 [2024-04-25 03:28:53.534464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.250 [2024-04-25 03:28:53.534662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.250 [2024-04-25 03:28:53.534688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.250 qpair failed and we were unable to recover it. 00:28:19.250 [2024-04-25 03:28:53.534879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.250 [2024-04-25 03:28:53.535062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.250 [2024-04-25 03:28:53.535087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.250 qpair failed and we were unable to recover it. 00:28:19.250 [2024-04-25 03:28:53.535257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.250 [2024-04-25 03:28:53.535458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.250 [2024-04-25 03:28:53.535483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.250 qpair failed and we were unable to recover it. 00:28:19.250 [2024-04-25 03:28:53.535705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.250 [2024-04-25 03:28:53.535896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.250 [2024-04-25 03:28:53.535921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.250 qpair failed and we were unable to recover it. 
00:28:19.250 [2024-04-25 03:28:53.536127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.250 [2024-04-25 03:28:53.536320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.250 [2024-04-25 03:28:53.536345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.250 qpair failed and we were unable to recover it. 00:28:19.250 [2024-04-25 03:28:53.536538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.250 [2024-04-25 03:28:53.536789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.250 [2024-04-25 03:28:53.536816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.250 qpair failed and we were unable to recover it. 00:28:19.250 [2024-04-25 03:28:53.537000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.250 [2024-04-25 03:28:53.537164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.250 [2024-04-25 03:28:53.537189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.250 qpair failed and we were unable to recover it. 00:28:19.250 [2024-04-25 03:28:53.537380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.250 [2024-04-25 03:28:53.537571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.250 [2024-04-25 03:28:53.537596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.250 qpair failed and we were unable to recover it. 
00:28:19.250 [2024-04-25 03:28:53.537795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.250 [2024-04-25 03:28:53.537987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.250 [2024-04-25 03:28:53.538011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.250 qpair failed and we were unable to recover it. 00:28:19.250 [2024-04-25 03:28:53.538174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.250 [2024-04-25 03:28:53.538371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.250 [2024-04-25 03:28:53.538397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.250 qpair failed and we were unable to recover it. 00:28:19.250 [2024-04-25 03:28:53.538595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.250 [2024-04-25 03:28:53.538815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.250 [2024-04-25 03:28:53.538840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.250 qpair failed and we were unable to recover it. 00:28:19.250 [2024-04-25 03:28:53.539056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.250 [2024-04-25 03:28:53.539241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.250 [2024-04-25 03:28:53.539266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.250 qpair failed and we were unable to recover it. 
00:28:19.250 [2024-04-25 03:28:53.539442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.250 [2024-04-25 03:28:53.539667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.250 [2024-04-25 03:28:53.539693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.250 qpair failed and we were unable to recover it. 00:28:19.250 [2024-04-25 03:28:53.539862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.250 [2024-04-25 03:28:53.541163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.250 [2024-04-25 03:28:53.541194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.250 qpair failed and we were unable to recover it. 00:28:19.250 [2024-04-25 03:28:53.541425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.250 [2024-04-25 03:28:53.541620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.250 [2024-04-25 03:28:53.541651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.250 qpair failed and we were unable to recover it. 00:28:19.250 [2024-04-25 03:28:53.541827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.250 [2024-04-25 03:28:53.542021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.250 [2024-04-25 03:28:53.542046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.250 qpair failed and we were unable to recover it. 
00:28:19.250 [2024-04-25 03:28:53.542225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.250 [2024-04-25 03:28:53.542425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.250 [2024-04-25 03:28:53.542451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.250 qpair failed and we were unable to recover it. 00:28:19.250 [2024-04-25 03:28:53.542650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.250 [2024-04-25 03:28:53.542859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.250 [2024-04-25 03:28:53.542883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.250 qpair failed and we were unable to recover it. 00:28:19.250 [2024-04-25 03:28:53.543080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.250 [2024-04-25 03:28:53.544214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.250 [2024-04-25 03:28:53.544244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.250 qpair failed and we were unable to recover it. 00:28:19.250 [2024-04-25 03:28:53.544500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.250 [2024-04-25 03:28:53.545450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.250 [2024-04-25 03:28:53.545480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.250 qpair failed and we were unable to recover it. 
00:28:19.250 [2024-04-25 03:28:53.545691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.250 [2024-04-25 03:28:53.546695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.250 [2024-04-25 03:28:53.546725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.250 qpair failed and we were unable to recover it. 00:28:19.250 [2024-04-25 03:28:53.546952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.250 [2024-04-25 03:28:53.547125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.250 [2024-04-25 03:28:53.547152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.250 qpair failed and we were unable to recover it. 00:28:19.250 [2024-04-25 03:28:53.547369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.250 [2024-04-25 03:28:53.547637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.250 [2024-04-25 03:28:53.547664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.251 qpair failed and we were unable to recover it. 00:28:19.251 [2024-04-25 03:28:53.547836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.251 [2024-04-25 03:28:53.548033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.251 [2024-04-25 03:28:53.548059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.251 qpair failed and we were unable to recover it. 
00:28:19.251 [2024-04-25 03:28:53.548220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.251 [2024-04-25 03:28:53.548387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.251 [2024-04-25 03:28:53.548414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.251 qpair failed and we were unable to recover it. 00:28:19.251 [2024-04-25 03:28:53.548642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.251 [2024-04-25 03:28:53.548835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.251 [2024-04-25 03:28:53.548860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.251 qpair failed and we were unable to recover it. 00:28:19.251 [2024-04-25 03:28:53.549021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.251 [2024-04-25 03:28:53.549238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.251 [2024-04-25 03:28:53.549264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.251 qpair failed and we were unable to recover it. 00:28:19.251 [2024-04-25 03:28:53.549451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.251 [2024-04-25 03:28:53.549727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.251 [2024-04-25 03:28:53.549756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.251 qpair failed and we were unable to recover it. 
00:28:19.251 [2024-04-25 03:28:53.549985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.251 [2024-04-25 03:28:53.550149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.251 [2024-04-25 03:28:53.550175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.251 qpair failed and we were unable to recover it. 00:28:19.251 [2024-04-25 03:28:53.550362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.251 [2024-04-25 03:28:53.550548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.251 [2024-04-25 03:28:53.550589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.251 qpair failed and we were unable to recover it. 00:28:19.251 [2024-04-25 03:28:53.550791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.251 [2024-04-25 03:28:53.551021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.251 [2024-04-25 03:28:53.551046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.251 qpair failed and we were unable to recover it. 00:28:19.251 [2024-04-25 03:28:53.551222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.251 [2024-04-25 03:28:53.551390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.251 [2024-04-25 03:28:53.551415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.251 qpair failed and we were unable to recover it. 
00:28:19.251 [2024-04-25 03:28:53.551577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.251 [2024-04-25 03:28:53.551813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.251 [2024-04-25 03:28:53.551840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.251 qpair failed and we were unable to recover it. 00:28:19.251 [2024-04-25 03:28:53.552046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.251 [2024-04-25 03:28:53.552267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.251 [2024-04-25 03:28:53.552292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.251 qpair failed and we were unable to recover it. 00:28:19.251 [2024-04-25 03:28:53.552497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.251 [2024-04-25 03:28:53.552694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.251 [2024-04-25 03:28:53.552720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.251 qpair failed and we were unable to recover it. 00:28:19.251 [2024-04-25 03:28:53.552885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.251 [2024-04-25 03:28:53.553067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.251 [2024-04-25 03:28:53.553092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.251 qpair failed and we were unable to recover it. 
00:28:19.251 [2024-04-25 03:28:53.553300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.251 [2024-04-25 03:28:53.553580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.251 [2024-04-25 03:28:53.553607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.251 qpair failed and we were unable to recover it.
00:28:19.251 [2024-04-25 03:28:53.553796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.251 [2024-04-25 03:28:53.553991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.251 [2024-04-25 03:28:53.554017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.251 qpair failed and we were unable to recover it.
00:28:19.251 [2024-04-25 03:28:53.554182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.251 [2024-04-25 03:28:53.554351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.251 [2024-04-25 03:28:53.554376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.251 qpair failed and we were unable to recover it.
00:28:19.251 [2024-04-25 03:28:53.554559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.251 [2024-04-25 03:28:53.554734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.251 [2024-04-25 03:28:53.554769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.251 qpair failed and we were unable to recover it.
00:28:19.251 [2024-04-25 03:28:53.554943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.251 [2024-04-25 03:28:53.555106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.251 [2024-04-25 03:28:53.555132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.251 qpair failed and we were unable to recover it.
00:28:19.251 [2024-04-25 03:28:53.555401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.251 [2024-04-25 03:28:53.556181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.251 [2024-04-25 03:28:53.556211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.251 qpair failed and we were unable to recover it.
00:28:19.251 [2024-04-25 03:28:53.556409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.251 [2024-04-25 03:28:53.556635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.251 [2024-04-25 03:28:53.556661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.251 qpair failed and we were unable to recover it.
00:28:19.251 [2024-04-25 03:28:53.556833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.251 [2024-04-25 03:28:53.557027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.251 [2024-04-25 03:28:53.557053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.251 qpair failed and we were unable to recover it.
00:28:19.251 [2024-04-25 03:28:53.557249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.251 [2024-04-25 03:28:53.557412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.251 [2024-04-25 03:28:53.557437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.251 qpair failed and we were unable to recover it.
00:28:19.251 [2024-04-25 03:28:53.557601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.251 [2024-04-25 03:28:53.557780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.251 [2024-04-25 03:28:53.557805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.251 qpair failed and we were unable to recover it.
00:28:19.251 [2024-04-25 03:28:53.557976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.251 [2024-04-25 03:28:53.558149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.251 [2024-04-25 03:28:53.558174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.251 qpair failed and we were unable to recover it.
00:28:19.251 [2024-04-25 03:28:53.558363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.251 [2024-04-25 03:28:53.558566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.251 [2024-04-25 03:28:53.558590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.251 qpair failed and we were unable to recover it.
00:28:19.251 [2024-04-25 03:28:53.558793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.251 [2024-04-25 03:28:53.559002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.251 [2024-04-25 03:28:53.559026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.251 qpair failed and we were unable to recover it.
00:28:19.251 [2024-04-25 03:28:53.559231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.251 [2024-04-25 03:28:53.559399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.251 [2024-04-25 03:28:53.559424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.251 qpair failed and we were unable to recover it.
00:28:19.251 [2024-04-25 03:28:53.559597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.251 [2024-04-25 03:28:53.559848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.251 [2024-04-25 03:28:53.559875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.251 qpair failed and we were unable to recover it.
00:28:19.251 [2024-04-25 03:28:53.560058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.251 [2024-04-25 03:28:53.560215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.251 [2024-04-25 03:28:53.560240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.252 qpair failed and we were unable to recover it.
00:28:19.252 [2024-04-25 03:28:53.560435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.252 [2024-04-25 03:28:53.560656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.252 [2024-04-25 03:28:53.560682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.252 qpair failed and we were unable to recover it.
00:28:19.252 [2024-04-25 03:28:53.560870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.252 [2024-04-25 03:28:53.561047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.252 [2024-04-25 03:28:53.561086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.252 qpair failed and we were unable to recover it.
00:28:19.252 [2024-04-25 03:28:53.561252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.252 [2024-04-25 03:28:53.561418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.252 [2024-04-25 03:28:53.561442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.252 qpair failed and we were unable to recover it.
00:28:19.252 [2024-04-25 03:28:53.561637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.252 [2024-04-25 03:28:53.561806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.252 [2024-04-25 03:28:53.561831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.252 qpair failed and we were unable to recover it.
00:28:19.252 [2024-04-25 03:28:53.562005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.252 [2024-04-25 03:28:53.562167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.252 [2024-04-25 03:28:53.562192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.252 qpair failed and we were unable to recover it.
00:28:19.252 [2024-04-25 03:28:53.562393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.252 [2024-04-25 03:28:53.562613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.252 [2024-04-25 03:28:53.562648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.252 qpair failed and we were unable to recover it.
00:28:19.252 [2024-04-25 03:28:53.562815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.252 [2024-04-25 03:28:53.562983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.252 [2024-04-25 03:28:53.563008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.252 qpair failed and we were unable to recover it.
00:28:19.252 [2024-04-25 03:28:53.563175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.252 [2024-04-25 03:28:53.563393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.252 [2024-04-25 03:28:53.563418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.252 qpair failed and we were unable to recover it.
00:28:19.252 [2024-04-25 03:28:53.563604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.252 [2024-04-25 03:28:53.563803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.252 [2024-04-25 03:28:53.563830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.252 qpair failed and we were unable to recover it.
00:28:19.252 [2024-04-25 03:28:53.564006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.252 [2024-04-25 03:28:53.564282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.252 [2024-04-25 03:28:53.564307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.252 qpair failed and we were unable to recover it.
00:28:19.252 [2024-04-25 03:28:53.564468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.252 [2024-04-25 03:28:53.564665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.252 [2024-04-25 03:28:53.564690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.252 qpair failed and we were unable to recover it.
00:28:19.252 [2024-04-25 03:28:53.564857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.252 [2024-04-25 03:28:53.565051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.252 [2024-04-25 03:28:53.565076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.252 qpair failed and we were unable to recover it.
00:28:19.252 [2024-04-25 03:28:53.565265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.252 [2024-04-25 03:28:53.565491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.252 [2024-04-25 03:28:53.565516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.252 qpair failed and we were unable to recover it.
00:28:19.252 [2024-04-25 03:28:53.565687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.252 [2024-04-25 03:28:53.565844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.252 [2024-04-25 03:28:53.565869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.252 qpair failed and we were unable to recover it.
00:28:19.252 [2024-04-25 03:28:53.566049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.252 [2024-04-25 03:28:53.566222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.252 [2024-04-25 03:28:53.566248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.252 qpair failed and we were unable to recover it.
00:28:19.252 [2024-04-25 03:28:53.566432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.252 [2024-04-25 03:28:53.566600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.252 [2024-04-25 03:28:53.566640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.252 qpair failed and we were unable to recover it.
00:28:19.252 [2024-04-25 03:28:53.566842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.252 [2024-04-25 03:28:53.567013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.252 [2024-04-25 03:28:53.567038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.252 qpair failed and we were unable to recover it.
00:28:19.252 [2024-04-25 03:28:53.567207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.252 [2024-04-25 03:28:53.567404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.252 [2024-04-25 03:28:53.567429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.252 qpair failed and we were unable to recover it.
00:28:19.252 [2024-04-25 03:28:53.567618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.252 [2024-04-25 03:28:53.567832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.252 [2024-04-25 03:28:53.567858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.252 qpair failed and we were unable to recover it.
00:28:19.252 [2024-04-25 03:28:53.568071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.252 [2024-04-25 03:28:53.568270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.252 [2024-04-25 03:28:53.568295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.252 qpair failed and we were unable to recover it.
00:28:19.252 [2024-04-25 03:28:53.568458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.252 [2024-04-25 03:28:53.568643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.252 [2024-04-25 03:28:53.568671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.252 qpair failed and we were unable to recover it.
00:28:19.252 [2024-04-25 03:28:53.568840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.252 [2024-04-25 03:28:53.569027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.252 [2024-04-25 03:28:53.569052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.252 qpair failed and we were unable to recover it.
00:28:19.252 [2024-04-25 03:28:53.569220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.252 [2024-04-25 03:28:53.569388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.252 [2024-04-25 03:28:53.569413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.252 qpair failed and we were unable to recover it.
00:28:19.252 [2024-04-25 03:28:53.569605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.252 [2024-04-25 03:28:53.569782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.252 [2024-04-25 03:28:53.569807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.252 qpair failed and we were unable to recover it.
00:28:19.252 [2024-04-25 03:28:53.569972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.252 [2024-04-25 03:28:53.570190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.252 [2024-04-25 03:28:53.570215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.252 qpair failed and we were unable to recover it.
00:28:19.252 [2024-04-25 03:28:53.570409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.252 [2024-04-25 03:28:53.570600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.252 [2024-04-25 03:28:53.570625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.252 qpair failed and we were unable to recover it.
00:28:19.252 [2024-04-25 03:28:53.570804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.252 [2024-04-25 03:28:53.570969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.252 [2024-04-25 03:28:53.570994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.252 qpair failed and we were unable to recover it.
00:28:19.252 [2024-04-25 03:28:53.571212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.252 [2024-04-25 03:28:53.571378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.252 [2024-04-25 03:28:53.571403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.252 qpair failed and we were unable to recover it.
00:28:19.252 [2024-04-25 03:28:53.571593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.252 [2024-04-25 03:28:53.571800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.252 [2024-04-25 03:28:53.571830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.252 qpair failed and we were unable to recover it.
00:28:19.253 [2024-04-25 03:28:53.572020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.253 [2024-04-25 03:28:53.572292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.253 [2024-04-25 03:28:53.572317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.253 qpair failed and we were unable to recover it.
00:28:19.253 [2024-04-25 03:28:53.572478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.253 [2024-04-25 03:28:53.572654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.253 [2024-04-25 03:28:53.572682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.253 qpair failed and we were unable to recover it.
00:28:19.253 [2024-04-25 03:28:53.572862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.253 [2024-04-25 03:28:53.573134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.253 [2024-04-25 03:28:53.573159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.253 qpair failed and we were unable to recover it.
00:28:19.253 [2024-04-25 03:28:53.573350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.253 [2024-04-25 03:28:53.573521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.253 [2024-04-25 03:28:53.573546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.253 qpair failed and we were unable to recover it.
00:28:19.253 [2024-04-25 03:28:53.573715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.253 [2024-04-25 03:28:53.573881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.253 [2024-04-25 03:28:53.573906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.253 qpair failed and we were unable to recover it.
00:28:19.253 [2024-04-25 03:28:53.574094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.253 [2024-04-25 03:28:53.574312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.253 [2024-04-25 03:28:53.574338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.253 qpair failed and we were unable to recover it.
00:28:19.253 [2024-04-25 03:28:53.574511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.253 [2024-04-25 03:28:53.574701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.253 [2024-04-25 03:28:53.574728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.253 qpair failed and we were unable to recover it.
00:28:19.253 [2024-04-25 03:28:53.574928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.253 [2024-04-25 03:28:53.575107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.253 [2024-04-25 03:28:53.575132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.253 qpair failed and we were unable to recover it.
00:28:19.253 [2024-04-25 03:28:53.575343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.253 [2024-04-25 03:28:53.575549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.253 [2024-04-25 03:28:53.575574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.253 qpair failed and we were unable to recover it.
00:28:19.253 [2024-04-25 03:28:53.575773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.253 [2024-04-25 03:28:53.575967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.253 [2024-04-25 03:28:53.575993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.253 qpair failed and we were unable to recover it.
00:28:19.253 [2024-04-25 03:28:53.576204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.253 [2024-04-25 03:28:53.576393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.253 [2024-04-25 03:28:53.576418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.253 qpair failed and we were unable to recover it.
00:28:19.253 [2024-04-25 03:28:53.576608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.253 [2024-04-25 03:28:53.576812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.253 [2024-04-25 03:28:53.576837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.253 qpair failed and we were unable to recover it.
00:28:19.253 [2024-04-25 03:28:53.577037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.253 [2024-04-25 03:28:53.577232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.253 [2024-04-25 03:28:53.577258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.253 qpair failed and we were unable to recover it.
00:28:19.253 [2024-04-25 03:28:53.577452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.253 [2024-04-25 03:28:53.577646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.253 [2024-04-25 03:28:53.577672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.253 qpair failed and we were unable to recover it.
00:28:19.253 [2024-04-25 03:28:53.577840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.253 [2024-04-25 03:28:53.578002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.253 [2024-04-25 03:28:53.578029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.253 qpair failed and we were unable to recover it.
00:28:19.253 [2024-04-25 03:28:53.578223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.253 [2024-04-25 03:28:53.578445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.253 [2024-04-25 03:28:53.578470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.253 qpair failed and we were unable to recover it.
00:28:19.253 [2024-04-25 03:28:53.578674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.253 [2024-04-25 03:28:53.578836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.253 [2024-04-25 03:28:53.578861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.253 qpair failed and we were unable to recover it.
00:28:19.253 [2024-04-25 03:28:53.579030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.253 [2024-04-25 03:28:53.579219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.253 [2024-04-25 03:28:53.579244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.253 qpair failed and we were unable to recover it.
00:28:19.253 [2024-04-25 03:28:53.579433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.253 [2024-04-25 03:28:53.579660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.253 [2024-04-25 03:28:53.579686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.253 qpair failed and we were unable to recover it.
00:28:19.253 [2024-04-25 03:28:53.579885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.253 [2024-04-25 03:28:53.580077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.253 [2024-04-25 03:28:53.580103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.253 qpair failed and we were unable to recover it.
00:28:19.253 [2024-04-25 03:28:53.580297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.253 [2024-04-25 03:28:53.580513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.253 [2024-04-25 03:28:53.580539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.253 qpair failed and we were unable to recover it.
00:28:19.253 [2024-04-25 03:28:53.580739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.253 [2024-04-25 03:28:53.580960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.253 [2024-04-25 03:28:53.580985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.253 qpair failed and we were unable to recover it.
00:28:19.253 [2024-04-25 03:28:53.581150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.253 [2024-04-25 03:28:53.581355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.253 [2024-04-25 03:28:53.581380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.253 qpair failed and we were unable to recover it.
00:28:19.253 [2024-04-25 03:28:53.581545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.253 [2024-04-25 03:28:53.581705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.253 [2024-04-25 03:28:53.581730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.253 qpair failed and we were unable to recover it.
00:28:19.253 [2024-04-25 03:28:53.581905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.253 [2024-04-25 03:28:53.582118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.253 [2024-04-25 03:28:53.582143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.253 qpair failed and we were unable to recover it.
00:28:19.253 [2024-04-25 03:28:53.582360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.253 [2024-04-25 03:28:53.582552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.253 [2024-04-25 03:28:53.582577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.253 qpair failed and we were unable to recover it.
00:28:19.253 [2024-04-25 03:28:53.582752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.253 [2024-04-25 03:28:53.582959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.253 [2024-04-25 03:28:53.582985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.253 qpair failed and we were unable to recover it.
00:28:19.253 [2024-04-25 03:28:53.583180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.253 [2024-04-25 03:28:53.583375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.253 [2024-04-25 03:28:53.583399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.253 qpair failed and we were unable to recover it.
00:28:19.253 [2024-04-25 03:28:53.583595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.253 [2024-04-25 03:28:53.583768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.253 [2024-04-25 03:28:53.583794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.253 qpair failed and we were unable to recover it.
00:28:19.253 [2024-04-25 03:28:53.583960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.254 [2024-04-25 03:28:53.584155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.254 [2024-04-25 03:28:53.584180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.254 qpair failed and we were unable to recover it.
00:28:19.254 [2024-04-25 03:28:53.584350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.254 [2024-04-25 03:28:53.584545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.254 [2024-04-25 03:28:53.584571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.254 qpair failed and we were unable to recover it.
00:28:19.254 [2024-04-25 03:28:53.584739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.254 [2024-04-25 03:28:53.584937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.254 [2024-04-25 03:28:53.584964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.254 qpair failed and we were unable to recover it.
00:28:19.254 [2024-04-25 03:28:53.585133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.254 [2024-04-25 03:28:53.585351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.254 [2024-04-25 03:28:53.585376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.254 qpair failed and we were unable to recover it.
00:28:19.254 [2024-04-25 03:28:53.585602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.254 [2024-04-25 03:28:53.585773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.254 [2024-04-25 03:28:53.585798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.254 qpair failed and we were unable to recover it.
00:28:19.254 [2024-04-25 03:28:53.586001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.254 [2024-04-25 03:28:53.586206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.254 [2024-04-25 03:28:53.586232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.254 qpair failed and we were unable to recover it.
00:28:19.254 [2024-04-25 03:28:53.586449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.254 [2024-04-25 03:28:53.586637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.254 [2024-04-25 03:28:53.586662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.254 qpair failed and we were unable to recover it.
00:28:19.254 [2024-04-25 03:28:53.586861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.254 [2024-04-25 03:28:53.587049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.254 [2024-04-25 03:28:53.587074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.254 qpair failed and we were unable to recover it.
00:28:19.254 [2024-04-25 03:28:53.587237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.254 [2024-04-25 03:28:53.587444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.254 [2024-04-25 03:28:53.587469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.254 qpair failed and we were unable to recover it.
00:28:19.254 [2024-04-25 03:28:53.587663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.254 [2024-04-25 03:28:53.587852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.254 [2024-04-25 03:28:53.587878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.254 qpair failed and we were unable to recover it.
00:28:19.254 [2024-04-25 03:28:53.588123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.254 [2024-04-25 03:28:53.588304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.254 [2024-04-25 03:28:53.588330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.254 qpair failed and we were unable to recover it.
00:28:19.254 [2024-04-25 03:28:53.588501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.254 [2024-04-25 03:28:53.588675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.254 [2024-04-25 03:28:53.588701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.254 qpair failed and we were unable to recover it.
00:28:19.254 [2024-04-25 03:28:53.588893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.254 [2024-04-25 03:28:53.589073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.254 [2024-04-25 03:28:53.589098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.254 qpair failed and we were unable to recover it.
00:28:19.254 [2024-04-25 03:28:53.589256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.254 [2024-04-25 03:28:53.589452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.254 [2024-04-25 03:28:53.589479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.254 qpair failed and we were unable to recover it.
00:28:19.254 [2024-04-25 03:28:53.589701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.254 [2024-04-25 03:28:53.589891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.254 [2024-04-25 03:28:53.589916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.254 qpair failed and we were unable to recover it. 00:28:19.254 [2024-04-25 03:28:53.590086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.254 [2024-04-25 03:28:53.590277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.254 [2024-04-25 03:28:53.590302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.254 qpair failed and we were unable to recover it. 00:28:19.254 [2024-04-25 03:28:53.590575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.254 [2024-04-25 03:28:53.590736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.254 [2024-04-25 03:28:53.590762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.254 qpair failed and we were unable to recover it. 00:28:19.254 [2024-04-25 03:28:53.590959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.254 [2024-04-25 03:28:53.591155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.254 [2024-04-25 03:28:53.591180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.254 qpair failed and we were unable to recover it. 
00:28:19.254 [2024-04-25 03:28:53.591377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.254 [2024-04-25 03:28:53.591541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.254 [2024-04-25 03:28:53.591567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.254 qpair failed and we were unable to recover it. 00:28:19.254 [2024-04-25 03:28:53.591763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.254 [2024-04-25 03:28:53.591963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.254 [2024-04-25 03:28:53.591990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.254 qpair failed and we were unable to recover it. 00:28:19.254 [2024-04-25 03:28:53.592154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.254 [2024-04-25 03:28:53.592320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.254 [2024-04-25 03:28:53.592345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.254 qpair failed and we were unable to recover it. 00:28:19.254 [2024-04-25 03:28:53.592530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.254 [2024-04-25 03:28:53.592701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.254 [2024-04-25 03:28:53.592730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.254 qpair failed and we were unable to recover it. 
00:28:19.254 [2024-04-25 03:28:53.592945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.254 [2024-04-25 03:28:53.593135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.254 [2024-04-25 03:28:53.593160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.254 qpair failed and we were unable to recover it. 00:28:19.254 [2024-04-25 03:28:53.593331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.254 [2024-04-25 03:28:53.593496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.254 [2024-04-25 03:28:53.593522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.254 qpair failed and we were unable to recover it. 00:28:19.254 [2024-04-25 03:28:53.593729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.254 [2024-04-25 03:28:53.593897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.255 [2024-04-25 03:28:53.593929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.255 qpair failed and we were unable to recover it. 00:28:19.255 [2024-04-25 03:28:53.594097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.255 [2024-04-25 03:28:53.594316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.255 [2024-04-25 03:28:53.594342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.255 qpair failed and we were unable to recover it. 
00:28:19.255 [2024-04-25 03:28:53.594544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.255 [2024-04-25 03:28:53.594715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.255 [2024-04-25 03:28:53.594741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.255 qpair failed and we were unable to recover it. 00:28:19.255 [2024-04-25 03:28:53.594961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.255 [2024-04-25 03:28:53.595123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.255 [2024-04-25 03:28:53.595148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.255 qpair failed and we were unable to recover it. 00:28:19.255 [2024-04-25 03:28:53.595311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.255 [2024-04-25 03:28:53.595487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.255 [2024-04-25 03:28:53.595511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.255 qpair failed and we were unable to recover it. 00:28:19.255 [2024-04-25 03:28:53.595690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.255 [2024-04-25 03:28:53.595886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.255 [2024-04-25 03:28:53.595911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.255 qpair failed and we were unable to recover it. 
00:28:19.255 [2024-04-25 03:28:53.596111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.255 [2024-04-25 03:28:53.596269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.255 [2024-04-25 03:28:53.596294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.255 qpair failed and we were unable to recover it. 00:28:19.255 [2024-04-25 03:28:53.596493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.255 [2024-04-25 03:28:53.596661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.255 [2024-04-25 03:28:53.596686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.255 qpair failed and we were unable to recover it. 00:28:19.255 [2024-04-25 03:28:53.596846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.255 [2024-04-25 03:28:53.597027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.255 [2024-04-25 03:28:53.597053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.255 qpair failed and we were unable to recover it. 00:28:19.255 [2024-04-25 03:28:53.597221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.255 [2024-04-25 03:28:53.597388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.255 [2024-04-25 03:28:53.597413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.255 qpair failed and we were unable to recover it. 
00:28:19.255 [2024-04-25 03:28:53.597610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.255 [2024-04-25 03:28:53.597814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.255 [2024-04-25 03:28:53.597839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.255 qpair failed and we were unable to recover it. 00:28:19.255 [2024-04-25 03:28:53.598036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.255 [2024-04-25 03:28:53.598229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.255 [2024-04-25 03:28:53.598254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.255 qpair failed and we were unable to recover it. 00:28:19.255 [2024-04-25 03:28:53.598451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.255 [2024-04-25 03:28:53.598639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.255 [2024-04-25 03:28:53.598665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.255 qpair failed and we were unable to recover it. 00:28:19.255 [2024-04-25 03:28:53.598856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.255 [2024-04-25 03:28:53.599029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.255 [2024-04-25 03:28:53.599055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.255 qpair failed and we were unable to recover it. 
00:28:19.255 [2024-04-25 03:28:53.599217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.255 [2024-04-25 03:28:53.599416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.255 [2024-04-25 03:28:53.599441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.255 qpair failed and we were unable to recover it. 00:28:19.255 [2024-04-25 03:28:53.599617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.255 [2024-04-25 03:28:53.599820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.255 [2024-04-25 03:28:53.599846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.255 qpair failed and we were unable to recover it. 00:28:19.255 [2024-04-25 03:28:53.600043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.255 [2024-04-25 03:28:53.600234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.255 [2024-04-25 03:28:53.600259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.255 qpair failed and we were unable to recover it. 00:28:19.255 [2024-04-25 03:28:53.600445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.255 [2024-04-25 03:28:53.600637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.255 [2024-04-25 03:28:53.600663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.255 qpair failed and we were unable to recover it. 
00:28:19.255 [2024-04-25 03:28:53.600835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.255 [2024-04-25 03:28:53.601018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.255 [2024-04-25 03:28:53.601043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.255 qpair failed and we were unable to recover it. 00:28:19.255 [2024-04-25 03:28:53.601279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.255 [2024-04-25 03:28:53.601471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.255 [2024-04-25 03:28:53.601496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.255 qpair failed and we were unable to recover it. 00:28:19.255 [2024-04-25 03:28:53.601695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.255 [2024-04-25 03:28:53.601867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.255 [2024-04-25 03:28:53.601893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.255 qpair failed and we were unable to recover it. 00:28:19.255 [2024-04-25 03:28:53.602086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.255 [2024-04-25 03:28:53.602271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.255 [2024-04-25 03:28:53.602295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.255 qpair failed and we were unable to recover it. 
00:28:19.255 [2024-04-25 03:28:53.602458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.255 [2024-04-25 03:28:53.602686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.255 [2024-04-25 03:28:53.602711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.255 qpair failed and we were unable to recover it. 00:28:19.255 [2024-04-25 03:28:53.602906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.255 [2024-04-25 03:28:53.603111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.255 [2024-04-25 03:28:53.603148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.255 qpair failed and we were unable to recover it. 00:28:19.255 [2024-04-25 03:28:53.603332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.255 [2024-04-25 03:28:53.603502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.255 [2024-04-25 03:28:53.603527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.255 qpair failed and we were unable to recover it. 00:28:19.255 [2024-04-25 03:28:53.603736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.255 [2024-04-25 03:28:53.603918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.255 [2024-04-25 03:28:53.603943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.255 qpair failed and we were unable to recover it. 
00:28:19.255 [2024-04-25 03:28:53.604129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.255 [2024-04-25 03:28:53.604322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.255 [2024-04-25 03:28:53.604347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.255 qpair failed and we were unable to recover it. 00:28:19.255 [2024-04-25 03:28:53.604504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.255 [2024-04-25 03:28:53.604697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.255 [2024-04-25 03:28:53.604723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.255 qpair failed and we were unable to recover it. 00:28:19.255 [2024-04-25 03:28:53.604930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.255 [2024-04-25 03:28:53.605097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.255 [2024-04-25 03:28:53.605122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.255 qpair failed and we were unable to recover it. 00:28:19.255 [2024-04-25 03:28:53.605322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.255 [2024-04-25 03:28:53.605514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.255 [2024-04-25 03:28:53.605539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.256 qpair failed and we were unable to recover it. 
00:28:19.256 [2024-04-25 03:28:53.605742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.256 [2024-04-25 03:28:53.605905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.256 [2024-04-25 03:28:53.605936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.256 qpair failed and we were unable to recover it. 00:28:19.256 [2024-04-25 03:28:53.606100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.256 [2024-04-25 03:28:53.606281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.256 [2024-04-25 03:28:53.606306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.256 qpair failed and we were unable to recover it. 00:28:19.256 [2024-04-25 03:28:53.606489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.256 [2024-04-25 03:28:53.606705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.256 [2024-04-25 03:28:53.606730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.256 qpair failed and we were unable to recover it. 00:28:19.256 [2024-04-25 03:28:53.606900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.256 [2024-04-25 03:28:53.607097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.256 [2024-04-25 03:28:53.607122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.256 qpair failed and we were unable to recover it. 
00:28:19.256 [2024-04-25 03:28:53.607290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.256 [2024-04-25 03:28:53.607458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.256 [2024-04-25 03:28:53.607483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.256 qpair failed and we were unable to recover it. 00:28:19.256 [2024-04-25 03:28:53.607659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.256 [2024-04-25 03:28:53.607851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.256 [2024-04-25 03:28:53.607876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.256 qpair failed and we were unable to recover it. 00:28:19.256 [2024-04-25 03:28:53.608086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.256 [2024-04-25 03:28:53.608262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.256 [2024-04-25 03:28:53.608287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.256 qpair failed and we were unable to recover it. 00:28:19.256 [2024-04-25 03:28:53.608449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.256 [2024-04-25 03:28:53.608619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.256 [2024-04-25 03:28:53.608650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.256 qpair failed and we were unable to recover it. 
00:28:19.256 [2024-04-25 03:28:53.608834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.256 [2024-04-25 03:28:53.609003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.256 [2024-04-25 03:28:53.609031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.256 qpair failed and we were unable to recover it. 00:28:19.256 [2024-04-25 03:28:53.609255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.256 [2024-04-25 03:28:53.609463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.256 [2024-04-25 03:28:53.609488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.256 qpair failed and we were unable to recover it. 00:28:19.256 [2024-04-25 03:28:53.609681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.256 [2024-04-25 03:28:53.609856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.256 [2024-04-25 03:28:53.609882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.256 qpair failed and we were unable to recover it. 00:28:19.256 [2024-04-25 03:28:53.610054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.256 [2024-04-25 03:28:53.610232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.256 [2024-04-25 03:28:53.610257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.256 qpair failed and we were unable to recover it. 
00:28:19.256 [2024-04-25 03:28:53.610446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.256 [2024-04-25 03:28:53.610610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.256 [2024-04-25 03:28:53.610650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.256 qpair failed and we were unable to recover it. 00:28:19.256 [2024-04-25 03:28:53.610867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.256 [2024-04-25 03:28:53.611084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.256 [2024-04-25 03:28:53.611109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.256 qpair failed and we were unable to recover it. 00:28:19.256 [2024-04-25 03:28:53.611295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.256 [2024-04-25 03:28:53.611483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.256 [2024-04-25 03:28:53.611508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.256 qpair failed and we were unable to recover it. 00:28:19.256 [2024-04-25 03:28:53.611738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.256 [2024-04-25 03:28:53.611937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.256 [2024-04-25 03:28:53.611962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.256 qpair failed and we were unable to recover it. 
00:28:19.256 [2024-04-25 03:28:53.612166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.256 [2024-04-25 03:28:53.612356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.256 [2024-04-25 03:28:53.612381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.256 qpair failed and we were unable to recover it.
00:28:19.256 [2024-04-25 03:28:53.612557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.256 [2024-04-25 03:28:53.612789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.256 [2024-04-25 03:28:53.612814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.256 qpair failed and we were unable to recover it.
00:28:19.256 [2024-04-25 03:28:53.613020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.256 [2024-04-25 03:28:53.613217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.256 [2024-04-25 03:28:53.613249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.256 qpair failed and we were unable to recover it.
00:28:19.256 [2024-04-25 03:28:53.613448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.256 [2024-04-25 03:28:53.613617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.256 [2024-04-25 03:28:53.613655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.256 qpair failed and we were unable to recover it.
00:28:19.256 [2024-04-25 03:28:53.613837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.256 [2024-04-25 03:28:53.614054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.256 [2024-04-25 03:28:53.614079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.256 qpair failed and we were unable to recover it.
00:28:19.256 [2024-04-25 03:28:53.614271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.256 [2024-04-25 03:28:53.614434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.256 [2024-04-25 03:28:53.614459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.256 qpair failed and we were unable to recover it.
00:28:19.256 [2024-04-25 03:28:53.614635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.256 [2024-04-25 03:28:53.614853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.256 [2024-04-25 03:28:53.614878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.256 qpair failed and we were unable to recover it.
00:28:19.256 [2024-04-25 03:28:53.615077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.256 [2024-04-25 03:28:53.615268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.256 [2024-04-25 03:28:53.615293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.256 qpair failed and we were unable to recover it.
00:28:19.256 [2024-04-25 03:28:53.615468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.256 [2024-04-25 03:28:53.615652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.256 [2024-04-25 03:28:53.615678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.256 qpair failed and we were unable to recover it.
00:28:19.256 [2024-04-25 03:28:53.615877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.256 [2024-04-25 03:28:53.616069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.256 [2024-04-25 03:28:53.616094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.256 qpair failed and we were unable to recover it.
00:28:19.256 [2024-04-25 03:28:53.616289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.256 [2024-04-25 03:28:53.616460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.256 [2024-04-25 03:28:53.616485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.256 qpair failed and we were unable to recover it.
00:28:19.256 [2024-04-25 03:28:53.616651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.256 [2024-04-25 03:28:53.616824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.256 [2024-04-25 03:28:53.616849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.256 qpair failed and we were unable to recover it.
00:28:19.256 [2024-04-25 03:28:53.617018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.256 [2024-04-25 03:28:53.617181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.256 [2024-04-25 03:28:53.617216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.256 qpair failed and we were unable to recover it.
00:28:19.257 [2024-04-25 03:28:53.617414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.257 [2024-04-25 03:28:53.617580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.257 [2024-04-25 03:28:53.617606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.257 qpair failed and we were unable to recover it.
00:28:19.257 [2024-04-25 03:28:53.617799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.257 [2024-04-25 03:28:53.617969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.257 [2024-04-25 03:28:53.617995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.257 qpair failed and we were unable to recover it.
00:28:19.257 [2024-04-25 03:28:53.618183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.257 [2024-04-25 03:28:53.618376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.257 [2024-04-25 03:28:53.618401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.257 qpair failed and we were unable to recover it.
00:28:19.257 [2024-04-25 03:28:53.618608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.257 [2024-04-25 03:28:53.618792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.257 [2024-04-25 03:28:53.618817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.257 qpair failed and we were unable to recover it.
00:28:19.257 [2024-04-25 03:28:53.619046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.257 [2024-04-25 03:28:53.619240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.257 [2024-04-25 03:28:53.619265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.257 qpair failed and we were unable to recover it.
00:28:19.257 [2024-04-25 03:28:53.619461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.257 [2024-04-25 03:28:53.619625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.257 [2024-04-25 03:28:53.619667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.257 qpair failed and we were unable to recover it.
00:28:19.257 [2024-04-25 03:28:53.619834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.257 [2024-04-25 03:28:53.620000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.257 [2024-04-25 03:28:53.620030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.257 qpair failed and we were unable to recover it.
00:28:19.257 [2024-04-25 03:28:53.620187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.257 [2024-04-25 03:28:53.620381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.257 [2024-04-25 03:28:53.620407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.257 qpair failed and we were unable to recover it.
00:28:19.257 [2024-04-25 03:28:53.620593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.257 [2024-04-25 03:28:53.620815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.257 [2024-04-25 03:28:53.620841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.257 qpair failed and we were unable to recover it.
00:28:19.257 [2024-04-25 03:28:53.621040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.257 [2024-04-25 03:28:53.621214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.257 [2024-04-25 03:28:53.621239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.257 qpair failed and we were unable to recover it.
00:28:19.257 [2024-04-25 03:28:53.621401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.257 [2024-04-25 03:28:53.621567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.257 [2024-04-25 03:28:53.621593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.257 qpair failed and we were unable to recover it.
00:28:19.257 [2024-04-25 03:28:53.621770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.257 [2024-04-25 03:28:53.621973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.257 [2024-04-25 03:28:53.621998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.257 qpair failed and we were unable to recover it.
00:28:19.257 [2024-04-25 03:28:53.622196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.257 [2024-04-25 03:28:53.622390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.257 [2024-04-25 03:28:53.622415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.257 qpair failed and we were unable to recover it.
00:28:19.257 [2024-04-25 03:28:53.622607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.257 [2024-04-25 03:28:53.622788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.257 [2024-04-25 03:28:53.622814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.257 qpair failed and we were unable to recover it.
00:28:19.257 [2024-04-25 03:28:53.623007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.257 [2024-04-25 03:28:53.623172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.257 [2024-04-25 03:28:53.623197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.257 qpair failed and we were unable to recover it.
00:28:19.257 [2024-04-25 03:28:53.623388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.257 [2024-04-25 03:28:53.623586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.257 [2024-04-25 03:28:53.623611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.257 qpair failed and we were unable to recover it.
00:28:19.257 [2024-04-25 03:28:53.623842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.257 [2024-04-25 03:28:53.624018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.257 [2024-04-25 03:28:53.624044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.257 qpair failed and we were unable to recover it.
00:28:19.257 [2024-04-25 03:28:53.624209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.257 [2024-04-25 03:28:53.624381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.257 [2024-04-25 03:28:53.624406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.257 qpair failed and we were unable to recover it.
00:28:19.257 [2024-04-25 03:28:53.624566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.257 [2024-04-25 03:28:53.624744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.257 [2024-04-25 03:28:53.624770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.257 qpair failed and we were unable to recover it.
00:28:19.257 [2024-04-25 03:28:53.624965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.257 [2024-04-25 03:28:53.625125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.257 [2024-04-25 03:28:53.625150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.257 qpair failed and we were unable to recover it.
00:28:19.257 [2024-04-25 03:28:53.625317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.257 [2024-04-25 03:28:53.625532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.257 [2024-04-25 03:28:53.625557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.257 qpair failed and we were unable to recover it.
00:28:19.257 [2024-04-25 03:28:53.625769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.257 [2024-04-25 03:28:53.625936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.257 [2024-04-25 03:28:53.625961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.257 qpair failed and we were unable to recover it.
00:28:19.257 [2024-04-25 03:28:53.626169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.257 [2024-04-25 03:28:53.626323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.257 [2024-04-25 03:28:53.626348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.257 qpair failed and we were unable to recover it.
00:28:19.257 [2024-04-25 03:28:53.626550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.257 [2024-04-25 03:28:53.626769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.257 [2024-04-25 03:28:53.626795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.257 qpair failed and we were unable to recover it.
00:28:19.257 [2024-04-25 03:28:53.627027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.257 [2024-04-25 03:28:53.627221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.257 [2024-04-25 03:28:53.627247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.257 qpair failed and we were unable to recover it.
00:28:19.257 [2024-04-25 03:28:53.627438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.257 [2024-04-25 03:28:53.627598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.257 [2024-04-25 03:28:53.627639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.257 qpair failed and we were unable to recover it.
00:28:19.257 [2024-04-25 03:28:53.627832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.257 [2024-04-25 03:28:53.628025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.257 [2024-04-25 03:28:53.628050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.257 qpair failed and we were unable to recover it.
00:28:19.257 [2024-04-25 03:28:53.628216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.257 [2024-04-25 03:28:53.628402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.257 [2024-04-25 03:28:53.628428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.257 qpair failed and we were unable to recover it.
00:28:19.257 [2024-04-25 03:28:53.628622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.257 [2024-04-25 03:28:53.628800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.257 [2024-04-25 03:28:53.628825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.257 qpair failed and we were unable to recover it.
00:28:19.257 [2024-04-25 03:28:53.629010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.258 [2024-04-25 03:28:53.629206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.258 [2024-04-25 03:28:53.629232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.258 qpair failed and we were unable to recover it.
00:28:19.258 [2024-04-25 03:28:53.629424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.258 [2024-04-25 03:28:53.629635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.258 [2024-04-25 03:28:53.629665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.258 qpair failed and we were unable to recover it.
00:28:19.258 [2024-04-25 03:28:53.629866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.258 [2024-04-25 03:28:53.630061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.258 [2024-04-25 03:28:53.630086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.258 qpair failed and we were unable to recover it.
00:28:19.258 [2024-04-25 03:28:53.630292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.258 [2024-04-25 03:28:53.630481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.258 [2024-04-25 03:28:53.630507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.258 qpair failed and we were unable to recover it.
00:28:19.258 [2024-04-25 03:28:53.630699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.258 [2024-04-25 03:28:53.630870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.258 [2024-04-25 03:28:53.630895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.258 qpair failed and we were unable to recover it.
00:28:19.258 [2024-04-25 03:28:53.631115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.258 [2024-04-25 03:28:53.631299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.258 [2024-04-25 03:28:53.631324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.258 qpair failed and we were unable to recover it.
00:28:19.258 [2024-04-25 03:28:53.631490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.258 [2024-04-25 03:28:53.631707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.258 [2024-04-25 03:28:53.631732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.258 qpair failed and we were unable to recover it.
00:28:19.258 [2024-04-25 03:28:53.631942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.258 [2024-04-25 03:28:53.632164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.258 [2024-04-25 03:28:53.632199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.258 qpair failed and we were unable to recover it.
00:28:19.258 [2024-04-25 03:28:53.632367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.258 [2024-04-25 03:28:53.632577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.258 [2024-04-25 03:28:53.632603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.258 qpair failed and we were unable to recover it.
00:28:19.258 [2024-04-25 03:28:53.632789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.258 [2024-04-25 03:28:53.633004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.258 [2024-04-25 03:28:53.633029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.258 qpair failed and we were unable to recover it.
00:28:19.258 [2024-04-25 03:28:53.633227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.258 [2024-04-25 03:28:53.633432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.258 [2024-04-25 03:28:53.633458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.258 qpair failed and we were unable to recover it.
00:28:19.258 [2024-04-25 03:28:53.633617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.258 [2024-04-25 03:28:53.633814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.258 [2024-04-25 03:28:53.633840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.258 qpair failed and we were unable to recover it.
00:28:19.258 [2024-04-25 03:28:53.634043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.258 [2024-04-25 03:28:53.634267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.258 [2024-04-25 03:28:53.634293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.258 qpair failed and we were unable to recover it.
00:28:19.258 [2024-04-25 03:28:53.634521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.258 [2024-04-25 03:28:53.634723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.258 [2024-04-25 03:28:53.634748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.258 qpair failed and we were unable to recover it.
00:28:19.258 [2024-04-25 03:28:53.634927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.258 [2024-04-25 03:28:53.635152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.258 [2024-04-25 03:28:53.635177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.258 qpair failed and we were unable to recover it.
00:28:19.258 [2024-04-25 03:28:53.635369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.258 [2024-04-25 03:28:53.635561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.258 [2024-04-25 03:28:53.635586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.258 qpair failed and we were unable to recover it.
00:28:19.258 [2024-04-25 03:28:53.635782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.258 [2024-04-25 03:28:53.635955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.258 [2024-04-25 03:28:53.635982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.258 qpair failed and we were unable to recover it.
00:28:19.258 [2024-04-25 03:28:53.636172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.258 [2024-04-25 03:28:53.636370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.258 [2024-04-25 03:28:53.636395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.258 qpair failed and we were unable to recover it.
00:28:19.258 [2024-04-25 03:28:53.636561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.258 [2024-04-25 03:28:53.636731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.258 [2024-04-25 03:28:53.636756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.258 qpair failed and we were unable to recover it.
00:28:19.258 [2024-04-25 03:28:53.636950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.258 [2024-04-25 03:28:53.637114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.258 [2024-04-25 03:28:53.637139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.258 qpair failed and we were unable to recover it.
00:28:19.258 [2024-04-25 03:28:53.637324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.258 [2024-04-25 03:28:53.637488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.258 [2024-04-25 03:28:53.637513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.258 qpair failed and we were unable to recover it.
00:28:19.258 [2024-04-25 03:28:53.637695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.258 [2024-04-25 03:28:53.637854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.258 [2024-04-25 03:28:53.637880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.258 qpair failed and we were unable to recover it.
00:28:19.258 [2024-04-25 03:28:53.638059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.258 [2024-04-25 03:28:53.638254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.258 [2024-04-25 03:28:53.638279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.258 qpair failed and we were unable to recover it.
00:28:19.258 [2024-04-25 03:28:53.638466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.258 [2024-04-25 03:28:53.638632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.258 [2024-04-25 03:28:53.638658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.258 qpair failed and we were unable to recover it.
00:28:19.258 [2024-04-25 03:28:53.638849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.258 [2024-04-25 03:28:53.639013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.258 [2024-04-25 03:28:53.639039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.258 qpair failed and we were unable to recover it.
00:28:19.258 [2024-04-25 03:28:53.639208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.258 [2024-04-25 03:28:53.639372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.258 [2024-04-25 03:28:53.639397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.258 qpair failed and we were unable to recover it.
00:28:19.258 [2024-04-25 03:28:53.639561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.258 [2024-04-25 03:28:53.639761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.258 [2024-04-25 03:28:53.639788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.258 qpair failed and we were unable to recover it.
00:28:19.258 [2024-04-25 03:28:53.639953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.258 [2024-04-25 03:28:53.640122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.258 [2024-04-25 03:28:53.640148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.258 qpair failed and we were unable to recover it.
00:28:19.258 [2024-04-25 03:28:53.640343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.258 [2024-04-25 03:28:53.640518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.258 [2024-04-25 03:28:53.640544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.258 qpair failed and we were unable to recover it.
00:28:19.258 [2024-04-25 03:28:53.641767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.259 [2024-04-25 03:28:53.641941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.259 [2024-04-25 03:28:53.641970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.259 qpair failed and we were unable to recover it.
00:28:19.259 [2024-04-25 03:28:53.642168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.259 [2024-04-25 03:28:53.642384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.259 [2024-04-25 03:28:53.642410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.259 qpair failed and we were unable to recover it.
00:28:19.259 [2024-04-25 03:28:53.642594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.259 [2024-04-25 03:28:53.642802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.259 [2024-04-25 03:28:53.642827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.259 qpair failed and we were unable to recover it.
00:28:19.259 [2024-04-25 03:28:53.643020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.259 [2024-04-25 03:28:53.643218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.259 [2024-04-25 03:28:53.643244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.259 qpair failed and we were unable to recover it.
00:28:19.259 [2024-04-25 03:28:53.643433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.259 [2024-04-25 03:28:53.643596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.259 [2024-04-25 03:28:53.643650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.259 qpair failed and we were unable to recover it.
00:28:19.259 [2024-04-25 03:28:53.644789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.259 [2024-04-25 03:28:53.644969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.259 [2024-04-25 03:28:53.644995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.259 qpair failed and we were unable to recover it.
00:28:19.259 [2024-04-25 03:28:53.645159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.259 [2024-04-25 03:28:53.645327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.259 [2024-04-25 03:28:53.645352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.259 qpair failed and we were unable to recover it.
00:28:19.259 [2024-04-25 03:28:53.645545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.259 [2024-04-25 03:28:53.645774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.259 [2024-04-25 03:28:53.645801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.259 qpair failed and we were unable to recover it.
00:28:19.259 [2024-04-25 03:28:53.645996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.259 [2024-04-25 03:28:53.646159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.259 [2024-04-25 03:28:53.646184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.259 qpair failed and we were unable to recover it.
00:28:19.259 [2024-04-25 03:28:53.646351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.259 [2024-04-25 03:28:53.646543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.259 [2024-04-25 03:28:53.646568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.259 qpair failed and we were unable to recover it.
00:28:19.259 [2024-04-25 03:28:53.646754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.259 [2024-04-25 03:28:53.646916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.259 [2024-04-25 03:28:53.646941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.259 qpair failed and we were unable to recover it.
00:28:19.259 [2024-04-25 03:28:53.647160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.259 [2024-04-25 03:28:53.647379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.259 [2024-04-25 03:28:53.647404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.259 qpair failed and we were unable to recover it.
00:28:19.259 [2024-04-25 03:28:53.647575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.259 [2024-04-25 03:28:53.647752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.259 [2024-04-25 03:28:53.647778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.259 qpair failed and we were unable to recover it.
00:28:19.259 [2024-04-25 03:28:53.647946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.259 [2024-04-25 03:28:53.648149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.259 [2024-04-25 03:28:53.648175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.259 qpair failed and we were unable to recover it.
00:28:19.259 [2024-04-25 03:28:53.648337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.259 [2024-04-25 03:28:53.648506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.259 [2024-04-25 03:28:53.648531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.259 qpair failed and we were unable to recover it.
00:28:19.259 [2024-04-25 03:28:53.648748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.259 [2024-04-25 03:28:53.648934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.259 [2024-04-25 03:28:53.648960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.259 qpair failed and we were unable to recover it.
00:28:19.259 [2024-04-25 03:28:53.649172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.259 [2024-04-25 03:28:53.649370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.259 [2024-04-25 03:28:53.649395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.259 qpair failed and we were unable to recover it.
00:28:19.259 [2024-04-25 03:28:53.649578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.259 [2024-04-25 03:28:53.649789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.259 [2024-04-25 03:28:53.649814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.259 qpair failed and we were unable to recover it. 00:28:19.259 [2024-04-25 03:28:53.649985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.259 [2024-04-25 03:28:53.650182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.259 [2024-04-25 03:28:53.650207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.259 qpair failed and we were unable to recover it. 00:28:19.259 [2024-04-25 03:28:53.650369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.259 [2024-04-25 03:28:53.650536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.259 [2024-04-25 03:28:53.650561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.259 qpair failed and we were unable to recover it. 00:28:19.259 [2024-04-25 03:28:53.650735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.259 [2024-04-25 03:28:53.650948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.259 [2024-04-25 03:28:53.650973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.259 qpair failed and we were unable to recover it. 
00:28:19.259 [2024-04-25 03:28:53.651166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.259 [2024-04-25 03:28:53.651355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.259 [2024-04-25 03:28:53.651380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.259 qpair failed and we were unable to recover it. 00:28:19.259 [2024-04-25 03:28:53.651550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.259 [2024-04-25 03:28:53.651716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.259 [2024-04-25 03:28:53.651742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.259 qpair failed and we were unable to recover it. 00:28:19.259 [2024-04-25 03:28:53.651908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.259 [2024-04-25 03:28:53.652100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.259 [2024-04-25 03:28:53.652131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.259 qpair failed and we were unable to recover it. 00:28:19.259 [2024-04-25 03:28:53.652299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.259 [2024-04-25 03:28:53.652459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.259 [2024-04-25 03:28:53.652485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.259 qpair failed and we were unable to recover it. 
00:28:19.259 [2024-04-25 03:28:53.652675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.259 [2024-04-25 03:28:53.652869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.260 [2024-04-25 03:28:53.652893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.260 qpair failed and we were unable to recover it. 00:28:19.260 [2024-04-25 03:28:53.653122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.260 [2024-04-25 03:28:53.653319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.260 [2024-04-25 03:28:53.653345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.260 qpair failed and we were unable to recover it. 00:28:19.260 [2024-04-25 03:28:53.653508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.260 [2024-04-25 03:28:53.653721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.260 [2024-04-25 03:28:53.653747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.260 qpair failed and we were unable to recover it. 00:28:19.260 [2024-04-25 03:28:53.653936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.260 [2024-04-25 03:28:53.654134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.260 [2024-04-25 03:28:53.654159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.260 qpair failed and we were unable to recover it. 
00:28:19.260 [2024-04-25 03:28:53.654331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.260 [2024-04-25 03:28:53.654526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.260 [2024-04-25 03:28:53.654552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.260 qpair failed and we were unable to recover it. 00:28:19.260 [2024-04-25 03:28:53.654721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.260 [2024-04-25 03:28:53.654922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.260 [2024-04-25 03:28:53.654948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.260 qpair failed and we were unable to recover it. 00:28:19.260 [2024-04-25 03:28:53.655108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.260 [2024-04-25 03:28:53.655309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.260 [2024-04-25 03:28:53.655334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.260 qpair failed and we were unable to recover it. 00:28:19.260 [2024-04-25 03:28:53.655537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.260 [2024-04-25 03:28:53.655738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.260 [2024-04-25 03:28:53.655763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.260 qpair failed and we were unable to recover it. 
00:28:19.260 [2024-04-25 03:28:53.655934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.260 [2024-04-25 03:28:53.656153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.260 [2024-04-25 03:28:53.656177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.260 qpair failed and we were unable to recover it. 00:28:19.260 [2024-04-25 03:28:53.656344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.260 [2024-04-25 03:28:53.656545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.260 [2024-04-25 03:28:53.656571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.260 qpair failed and we were unable to recover it. 00:28:19.260 [2024-04-25 03:28:53.656766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.260 [2024-04-25 03:28:53.656962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.260 [2024-04-25 03:28:53.656987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.260 qpair failed and we were unable to recover it. 00:28:19.260 [2024-04-25 03:28:53.657143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.260 [2024-04-25 03:28:53.657338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.260 [2024-04-25 03:28:53.657363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.260 qpair failed and we were unable to recover it. 
00:28:19.260 [2024-04-25 03:28:53.657569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.260 [2024-04-25 03:28:53.657776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.260 [2024-04-25 03:28:53.657801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.260 qpair failed and we were unable to recover it. 00:28:19.260 [2024-04-25 03:28:53.657978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.260 [2024-04-25 03:28:53.658168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.260 [2024-04-25 03:28:53.658193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.260 qpair failed and we were unable to recover it. 00:28:19.260 [2024-04-25 03:28:53.658387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.260 [2024-04-25 03:28:53.658551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.260 [2024-04-25 03:28:53.658576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.260 qpair failed and we were unable to recover it. 00:28:19.260 [2024-04-25 03:28:53.658774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.260 [2024-04-25 03:28:53.658940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.260 [2024-04-25 03:28:53.658965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.260 qpair failed and we were unable to recover it. 
00:28:19.260 [2024-04-25 03:28:53.659163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.260 [2024-04-25 03:28:53.659320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.260 [2024-04-25 03:28:53.659345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.260 qpair failed and we were unable to recover it. 00:28:19.260 [2024-04-25 03:28:53.659538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.260 [2024-04-25 03:28:53.659735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.260 [2024-04-25 03:28:53.659761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.260 qpair failed and we were unable to recover it. 00:28:19.260 [2024-04-25 03:28:53.659952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.260 [2024-04-25 03:28:53.660149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.260 [2024-04-25 03:28:53.660175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.260 qpair failed and we were unable to recover it. 00:28:19.260 [2024-04-25 03:28:53.660369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.260 [2024-04-25 03:28:53.660564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.260 [2024-04-25 03:28:53.660590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.260 qpair failed and we were unable to recover it. 
00:28:19.260 [2024-04-25 03:28:53.660759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.260 [2024-04-25 03:28:53.660955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.260 [2024-04-25 03:28:53.660980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.260 qpair failed and we were unable to recover it. 00:28:19.260 [2024-04-25 03:28:53.661173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.260 [2024-04-25 03:28:53.661359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.260 [2024-04-25 03:28:53.661384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.260 qpair failed and we were unable to recover it. 00:28:19.260 [2024-04-25 03:28:53.661579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.260 [2024-04-25 03:28:53.661755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.260 [2024-04-25 03:28:53.661781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.260 qpair failed and we were unable to recover it. 00:28:19.260 [2024-04-25 03:28:53.661966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.260 [2024-04-25 03:28:53.662154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.260 [2024-04-25 03:28:53.662179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.260 qpair failed and we were unable to recover it. 
00:28:19.260 [2024-04-25 03:28:53.662341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.260 [2024-04-25 03:28:53.662537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.260 [2024-04-25 03:28:53.662563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.260 qpair failed and we were unable to recover it. 00:28:19.260 [2024-04-25 03:28:53.662734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.260 [2024-04-25 03:28:53.662894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.260 [2024-04-25 03:28:53.662930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.260 qpair failed and we were unable to recover it. 00:28:19.260 [2024-04-25 03:28:53.663120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.260 [2024-04-25 03:28:53.663280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.260 [2024-04-25 03:28:53.663305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.260 qpair failed and we were unable to recover it. 00:28:19.260 [2024-04-25 03:28:53.663470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.260 [2024-04-25 03:28:53.663646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.260 [2024-04-25 03:28:53.663673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.260 qpair failed and we were unable to recover it. 
00:28:19.260 [2024-04-25 03:28:53.663891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.260 [2024-04-25 03:28:53.664097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.260 [2024-04-25 03:28:53.664123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.260 qpair failed and we were unable to recover it. 00:28:19.260 [2024-04-25 03:28:53.664343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.260 [2024-04-25 03:28:53.664537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.261 [2024-04-25 03:28:53.664563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.261 qpair failed and we were unable to recover it. 00:28:19.261 [2024-04-25 03:28:53.664745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.261 [2024-04-25 03:28:53.664914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.261 [2024-04-25 03:28:53.664941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.261 qpair failed and we were unable to recover it. 00:28:19.261 [2024-04-25 03:28:53.665170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.261 [2024-04-25 03:28:53.665333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.261 [2024-04-25 03:28:53.665358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.261 qpair failed and we were unable to recover it. 
00:28:19.261 [2024-04-25 03:28:53.665532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.261 [2024-04-25 03:28:53.665727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.261 [2024-04-25 03:28:53.665752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.261 qpair failed and we were unable to recover it. 00:28:19.261 [2024-04-25 03:28:53.665928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.261 [2024-04-25 03:28:53.666093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.261 [2024-04-25 03:28:53.666118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.261 qpair failed and we were unable to recover it. 00:28:19.261 [2024-04-25 03:28:53.666286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.261 [2024-04-25 03:28:53.666476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.261 [2024-04-25 03:28:53.666501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.261 qpair failed and we were unable to recover it. 00:28:19.261 [2024-04-25 03:28:53.666666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.261 [2024-04-25 03:28:53.666827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.261 [2024-04-25 03:28:53.666852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.261 qpair failed and we were unable to recover it. 
00:28:19.261 [2024-04-25 03:28:53.667018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.261 [2024-04-25 03:28:53.667181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.261 [2024-04-25 03:28:53.667207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.261 qpair failed and we were unable to recover it. 00:28:19.261 [2024-04-25 03:28:53.667397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.261 [2024-04-25 03:28:53.667587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.261 [2024-04-25 03:28:53.667612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.261 qpair failed and we were unable to recover it. 00:28:19.261 [2024-04-25 03:28:53.667797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.261 [2024-04-25 03:28:53.667968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.261 [2024-04-25 03:28:53.667993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.261 qpair failed and we were unable to recover it. 00:28:19.261 [2024-04-25 03:28:53.668180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.261 [2024-04-25 03:28:53.668404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.261 [2024-04-25 03:28:53.668432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.261 qpair failed and we were unable to recover it. 
00:28:19.261 [2024-04-25 03:28:53.668595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.261 [2024-04-25 03:28:53.668832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.261 [2024-04-25 03:28:53.668858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.261 qpair failed and we were unable to recover it. 00:28:19.261 [2024-04-25 03:28:53.669074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.261 [2024-04-25 03:28:53.669270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.261 [2024-04-25 03:28:53.669295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.261 qpair failed and we were unable to recover it. 00:28:19.261 [2024-04-25 03:28:53.669454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.261 [2024-04-25 03:28:53.669635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.261 [2024-04-25 03:28:53.669660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.261 qpair failed and we were unable to recover it. 00:28:19.261 [2024-04-25 03:28:53.669833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.261 [2024-04-25 03:28:53.670000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.261 [2024-04-25 03:28:53.670026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.261 qpair failed and we were unable to recover it. 
00:28:19.261 [2024-04-25 03:28:53.670242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.261 [2024-04-25 03:28:53.670398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.261 [2024-04-25 03:28:53.670423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.261 qpair failed and we were unable to recover it. 00:28:19.261 [2024-04-25 03:28:53.670606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.261 [2024-04-25 03:28:53.670803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.261 [2024-04-25 03:28:53.670829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.261 qpair failed and we were unable to recover it. 00:28:19.261 [2024-04-25 03:28:53.671026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.261 [2024-04-25 03:28:53.671248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.261 [2024-04-25 03:28:53.671273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.261 qpair failed and we were unable to recover it. 00:28:19.261 [2024-04-25 03:28:53.671470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.261 [2024-04-25 03:28:53.671641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.261 [2024-04-25 03:28:53.671667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.261 qpair failed and we were unable to recover it. 
00:28:19.261 [2024-04-25 03:28:53.671830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.261 [2024-04-25 03:28:53.672008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.261 [2024-04-25 03:28:53.672033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.261 qpair failed and we were unable to recover it. 00:28:19.261 [2024-04-25 03:28:53.672233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.261 [2024-04-25 03:28:53.672400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.261 [2024-04-25 03:28:53.672428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.261 qpair failed and we were unable to recover it. 00:28:19.261 [2024-04-25 03:28:53.672635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.261 [2024-04-25 03:28:53.672851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.261 [2024-04-25 03:28:53.672876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.261 qpair failed and we were unable to recover it. 00:28:19.261 [2024-04-25 03:28:53.673067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.261 [2024-04-25 03:28:53.673237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.261 [2024-04-25 03:28:53.673264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.261 qpair failed and we were unable to recover it. 
00:28:19.264 [2024-04-25 03:28:53.712708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.264 [2024-04-25 03:28:53.712878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.264 [2024-04-25 03:28:53.712903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.264 qpair failed and we were unable to recover it. 00:28:19.264 [2024-04-25 03:28:53.713095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.264 [2024-04-25 03:28:53.713268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.264 [2024-04-25 03:28:53.713295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.264 qpair failed and we were unable to recover it. 00:28:19.264 [2024-04-25 03:28:53.713487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.264 [2024-04-25 03:28:53.713656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.264 [2024-04-25 03:28:53.713684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.264 qpair failed and we were unable to recover it. 00:28:19.264 [2024-04-25 03:28:53.713856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.264 [2024-04-25 03:28:53.714051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.264 [2024-04-25 03:28:53.714078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.264 qpair failed and we were unable to recover it. 
00:28:19.264 [2024-04-25 03:28:53.714248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.264 [2024-04-25 03:28:53.714446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.264 [2024-04-25 03:28:53.714472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.264 qpair failed and we were unable to recover it. 00:28:19.264 [2024-04-25 03:28:53.714650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.264 [2024-04-25 03:28:53.714826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.264 [2024-04-25 03:28:53.714855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.265 qpair failed and we were unable to recover it. 00:28:19.265 [2024-04-25 03:28:53.715047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.265 [2024-04-25 03:28:53.715221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.265 [2024-04-25 03:28:53.715248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.265 qpair failed and we were unable to recover it. 00:28:19.265 [2024-04-25 03:28:53.715417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.265 [2024-04-25 03:28:53.715607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.265 [2024-04-25 03:28:53.715638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.265 qpair failed and we were unable to recover it. 
00:28:19.265 03:28:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:19.265 [2024-04-25 03:28:53.715802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.265 03:28:53 -- common/autotest_common.sh@850 -- # return 0 00:28:19.265 [2024-04-25 03:28:53.715971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.265 [2024-04-25 03:28:53.715997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.265 qpair failed and we were unable to recover it. 00:28:19.265 03:28:53 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:28:19.265 [2024-04-25 03:28:53.716185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.265 03:28:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:28:19.265 [2024-04-25 03:28:53.716357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.265 [2024-04-25 03:28:53.716383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.265 qpair failed and we were unable to recover it. 00:28:19.265 03:28:53 -- common/autotest_common.sh@10 -- # set +x 00:28:19.265 [2024-04-25 03:28:53.716548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.265 [2024-04-25 03:28:53.716740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.265 [2024-04-25 03:28:53.716765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.265 qpair failed and we were unable to recover it. 
00:28:19.265 [2024-04-25 03:28:53.716936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.265 [2024-04-25 03:28:53.717130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.265 [2024-04-25 03:28:53.717155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.265 qpair failed and we were unable to recover it. 00:28:19.265 [2024-04-25 03:28:53.717342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.265 [2024-04-25 03:28:53.717513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.265 [2024-04-25 03:28:53.717539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.265 qpair failed and we were unable to recover it. 00:28:19.265 [2024-04-25 03:28:53.717730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.265 [2024-04-25 03:28:53.717939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.265 [2024-04-25 03:28:53.717964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.265 qpair failed and we were unable to recover it. 00:28:19.265 [2024-04-25 03:28:53.718133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.265 [2024-04-25 03:28:53.718304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.265 [2024-04-25 03:28:53.718330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.265 qpair failed and we were unable to recover it. 
00:28:19.265 [2024-04-25 03:28:53.718549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.265 [2024-04-25 03:28:53.718747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.265 [2024-04-25 03:28:53.718775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.265 qpair failed and we were unable to recover it. 00:28:19.265 [2024-04-25 03:28:53.718971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.265 [2024-04-25 03:28:53.719137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.265 [2024-04-25 03:28:53.719163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.265 qpair failed and we were unable to recover it. 00:28:19.265 [2024-04-25 03:28:53.719351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.265 [2024-04-25 03:28:53.719548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.265 [2024-04-25 03:28:53.719572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.265 qpair failed and we were unable to recover it. 00:28:19.265 [2024-04-25 03:28:53.719749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.265 [2024-04-25 03:28:53.719920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.265 [2024-04-25 03:28:53.719945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.265 qpair failed and we were unable to recover it. 
00:28:19.265 [2024-04-25 03:28:53.720108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.265 [2024-04-25 03:28:53.720280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.265 [2024-04-25 03:28:53.720304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.265 qpair failed and we were unable to recover it. 00:28:19.265 [2024-04-25 03:28:53.720467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.265 [2024-04-25 03:28:53.720635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.265 [2024-04-25 03:28:53.720661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.265 qpair failed and we were unable to recover it. 00:28:19.265 [2024-04-25 03:28:53.720854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.265 [2024-04-25 03:28:53.721081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.265 [2024-04-25 03:28:53.721105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.265 qpair failed and we were unable to recover it. 00:28:19.265 [2024-04-25 03:28:53.721298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.265 [2024-04-25 03:28:53.721460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.265 [2024-04-25 03:28:53.721485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.265 qpair failed and we were unable to recover it. 
00:28:19.265 [2024-04-25 03:28:53.721658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.265 [2024-04-25 03:28:53.721856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.265 [2024-04-25 03:28:53.721882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.265 qpair failed and we were unable to recover it. 00:28:19.265 [2024-04-25 03:28:53.722071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.265 [2024-04-25 03:28:53.722256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.265 [2024-04-25 03:28:53.722281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.265 qpair failed and we were unable to recover it. 00:28:19.265 [2024-04-25 03:28:53.722479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.265 [2024-04-25 03:28:53.722656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.265 [2024-04-25 03:28:53.722692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.265 qpair failed and we were unable to recover it. 00:28:19.265 [2024-04-25 03:28:53.722890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.265 [2024-04-25 03:28:53.723061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.265 [2024-04-25 03:28:53.723086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.265 qpair failed and we were unable to recover it. 
00:28:19.265 [2024-04-25 03:28:53.723261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.265 [2024-04-25 03:28:53.723427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.265 [2024-04-25 03:28:53.723452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.265 qpair failed and we were unable to recover it. 00:28:19.265 [2024-04-25 03:28:53.723675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.265 [2024-04-25 03:28:53.723872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.265 [2024-04-25 03:28:53.723898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.265 qpair failed and we were unable to recover it. 00:28:19.265 [2024-04-25 03:28:53.724061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.265 [2024-04-25 03:28:53.724250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.265 [2024-04-25 03:28:53.724275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.265 qpair failed and we were unable to recover it. 00:28:19.265 [2024-04-25 03:28:53.724435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.265 [2024-04-25 03:28:53.724625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.265 [2024-04-25 03:28:53.724659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.265 qpair failed and we were unable to recover it. 
00:28:19.265 [2024-04-25 03:28:53.724859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.265 [2024-04-25 03:28:53.725026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.265 [2024-04-25 03:28:53.725051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.265 qpair failed and we were unable to recover it. 00:28:19.265 [2024-04-25 03:28:53.725251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.265 [2024-04-25 03:28:53.725427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.265 [2024-04-25 03:28:53.725453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.265 qpair failed and we were unable to recover it. 00:28:19.265 [2024-04-25 03:28:53.725642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.529 [2024-04-25 03:28:53.725910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.529 [2024-04-25 03:28:53.725934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.529 qpair failed and we were unable to recover it. 00:28:19.529 [2024-04-25 03:28:53.726120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.529 [2024-04-25 03:28:53.726309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.529 [2024-04-25 03:28:53.726334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.529 qpair failed and we were unable to recover it. 
00:28:19.529 [2024-04-25 03:28:53.726519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.529 [2024-04-25 03:28:53.726687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.529 [2024-04-25 03:28:53.726717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.529 qpair failed and we were unable to recover it. 00:28:19.529 [2024-04-25 03:28:53.726954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.529 [2024-04-25 03:28:53.727153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.529 [2024-04-25 03:28:53.727179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.529 qpair failed and we were unable to recover it. 00:28:19.529 [2024-04-25 03:28:53.727381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.529 [2024-04-25 03:28:53.727560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.529 [2024-04-25 03:28:53.727585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.529 qpair failed and we were unable to recover it. 00:28:19.529 [2024-04-25 03:28:53.727792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.529 [2024-04-25 03:28:53.727983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.529 [2024-04-25 03:28:53.728010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.529 qpair failed and we were unable to recover it. 
00:28:19.529 [2024-04-25 03:28:53.728227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.529 [2024-04-25 03:28:53.728419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.529 [2024-04-25 03:28:53.728445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.529 qpair failed and we were unable to recover it. 00:28:19.529 [2024-04-25 03:28:53.728664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.529 [2024-04-25 03:28:53.728825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.529 [2024-04-25 03:28:53.728850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.529 qpair failed and we were unable to recover it. 00:28:19.529 [2024-04-25 03:28:53.729069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.529 [2024-04-25 03:28:53.729237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.529 [2024-04-25 03:28:53.729262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.529 qpair failed and we were unable to recover it. 00:28:19.529 [2024-04-25 03:28:53.729457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.529 [2024-04-25 03:28:53.729618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.529 [2024-04-25 03:28:53.729696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.529 qpair failed and we were unable to recover it. 
00:28:19.529 [2024-04-25 03:28:53.729897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.529 [2024-04-25 03:28:53.730100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.530 [2024-04-25 03:28:53.730127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.530 qpair failed and we were unable to recover it. 00:28:19.530 [2024-04-25 03:28:53.730320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.530 [2024-04-25 03:28:53.730484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.530 [2024-04-25 03:28:53.730511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.530 qpair failed and we were unable to recover it. 00:28:19.530 [2024-04-25 03:28:53.730700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.530 [2024-04-25 03:28:53.730866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.530 [2024-04-25 03:28:53.730892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.530 qpair failed and we were unable to recover it. 00:28:19.530 [2024-04-25 03:28:53.731094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.530 [2024-04-25 03:28:53.731286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.530 [2024-04-25 03:28:53.731311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.530 qpair failed and we were unable to recover it. 
00:28:19.530 [2024-04-25 03:28:53.731474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.530 [2024-04-25 03:28:53.731662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.530 [2024-04-25 03:28:53.731687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.530 qpair failed and we were unable to recover it. 00:28:19.530 [2024-04-25 03:28:53.731871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.530 [2024-04-25 03:28:53.732067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.530 [2024-04-25 03:28:53.732093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.530 qpair failed and we were unable to recover it. 00:28:19.530 [2024-04-25 03:28:53.732286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.530 [2024-04-25 03:28:53.732481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.530 [2024-04-25 03:28:53.732507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.530 qpair failed and we were unable to recover it. 00:28:19.530 [2024-04-25 03:28:53.732699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.530 [2024-04-25 03:28:53.732896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.530 [2024-04-25 03:28:53.732922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.530 qpair failed and we were unable to recover it. 
00:28:19.530 [2024-04-25 03:28:53.733114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.530 [2024-04-25 03:28:53.733281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.530 [2024-04-25 03:28:53.733308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.530 qpair failed and we were unable to recover it. 00:28:19.530 [2024-04-25 03:28:53.733511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.530 [2024-04-25 03:28:53.733739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.530 [2024-04-25 03:28:53.733765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.530 qpair failed and we were unable to recover it. 00:28:19.530 [2024-04-25 03:28:53.733979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.530 [2024-04-25 03:28:53.734178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.530 [2024-04-25 03:28:53.734205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.530 qpair failed and we were unable to recover it. 00:28:19.530 [2024-04-25 03:28:53.734405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.530 [2024-04-25 03:28:53.734607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.530 [2024-04-25 03:28:53.734640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.530 qpair failed and we were unable to recover it. 
00:28:19.530 [2024-04-25 03:28:53.734806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.530 [2024-04-25 03:28:53.734975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.530 [2024-04-25 03:28:53.735002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.530 qpair failed and we were unable to recover it. 00:28:19.530 [2024-04-25 03:28:53.735209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.530 [2024-04-25 03:28:53.735399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.530 [2024-04-25 03:28:53.735425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.530 qpair failed and we were unable to recover it. 00:28:19.530 [2024-04-25 03:28:53.735654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.530 [2024-04-25 03:28:53.735825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.530 [2024-04-25 03:28:53.735851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.530 qpair failed and we were unable to recover it. 00:28:19.530 [2024-04-25 03:28:53.736016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.530 [2024-04-25 03:28:53.736206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.530 [2024-04-25 03:28:53.736231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.530 qpair failed and we were unable to recover it. 
00:28:19.530 [2024-04-25 03:28:53.736412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.530 [2024-04-25 03:28:53.736637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.530 [2024-04-25 03:28:53.736662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.530 qpair failed and we were unable to recover it. 00:28:19.530 [2024-04-25 03:28:53.736834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.530 [2024-04-25 03:28:53.736996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.530 [2024-04-25 03:28:53.737021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.530 qpair failed and we were unable to recover it. 00:28:19.530 [2024-04-25 03:28:53.737223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.530 [2024-04-25 03:28:53.737418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.530 [2024-04-25 03:28:53.737443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.530 qpair failed and we were unable to recover it. 
00:28:19.530 03:28:53 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:28:19.530 [2024-04-25 03:28:53.737607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.530 [2024-04-25 03:28:53.737794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.530 03:28:53 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:28:19.530 [2024-04-25 03:28:53.737820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.530 qpair failed and we were unable to recover it.
00:28:19.530 03:28:53 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:19.530 [2024-04-25 03:28:53.738011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.530 03:28:53 -- common/autotest_common.sh@10 -- # set +x
00:28:19.530 [2024-04-25 03:28:53.738212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.530 [2024-04-25 03:28:53.738237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.530 qpair failed and we were unable to recover it.
00:28:19.530 [2024-04-25 03:28:53.738436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.530 [2024-04-25 03:28:53.738633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.530 [2024-04-25 03:28:53.738660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.530 qpair failed and we were unable to recover it.
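The `trap` that nvmf/common.sh installs in the trace above is the suite's standard cleanup idiom: on interrupt, termination, or normal exit, dump shared-memory diagnostics best-effort (the `|| :` swallows a failure so it can never skip teardown) and then tear the target down with `nvmftestfini`. A self-contained sketch of the same idiom, with hypothetical stubs standing in for the suite's real `process_shm` and `nvmftestfini` helpers:

```shell
# Stubs standing in for the real test helpers (assumptions, not SPDK code).
process_shm() { echo "process_shm $*"; }
nvmftestfini() { echo "nvmftestfini"; }
NVMF_APP_SHM_ID=0

# Run cleanup on Ctrl-C, kill, or normal script exit; "|| :" makes the
# shared-memory dump best-effort so a failure there never skips teardown.
trap 'process_shm --id "$NVMF_APP_SHM_ID" || :; nvmftestfini' SIGINT SIGTERM EXIT

echo "test body runs here"
# When the script exits, the trap fires once and runs both cleanup steps.
```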
00:28:19.530 [2024-04-25 03:28:53.738843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.530 [2024-04-25 03:28:53.739042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.530 [2024-04-25 03:28:53.739069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.530 qpair failed and we were unable to recover it.
00:28:19.530 [2024-04-25 03:28:53.739256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.530 [2024-04-25 03:28:53.739450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.530 [2024-04-25 03:28:53.739475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.530 qpair failed and we were unable to recover it.
00:28:19.530 [2024-04-25 03:28:53.739642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.530 [2024-04-25 03:28:53.739808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.530 [2024-04-25 03:28:53.739834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.530 qpair failed and we were unable to recover it.
00:28:19.530 [2024-04-25 03:28:53.739996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.530 [2024-04-25 03:28:53.740193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.530 [2024-04-25 03:28:53.740220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.530 qpair failed and we were unable to recover it.
00:28:19.530 [2024-04-25 03:28:53.740419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.530 [2024-04-25 03:28:53.740590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.530 [2024-04-25 03:28:53.740617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.530 qpair failed and we were unable to recover it.
00:28:19.530 [2024-04-25 03:28:53.740811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.530 [2024-04-25 03:28:53.741054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.530 [2024-04-25 03:28:53.741080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.530 qpair failed and we were unable to recover it.
00:28:19.530 [2024-04-25 03:28:53.741249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.530 [2024-04-25 03:28:53.741421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.531 [2024-04-25 03:28:53.741447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.531 qpair failed and we were unable to recover it.
00:28:19.531 [2024-04-25 03:28:53.741605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.531 [2024-04-25 03:28:53.741816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.531 [2024-04-25 03:28:53.741843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.531 qpair failed and we were unable to recover it.
00:28:19.531 [2024-04-25 03:28:53.742079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.531 [2024-04-25 03:28:53.742276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.531 [2024-04-25 03:28:53.742302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.531 qpair failed and we were unable to recover it.
00:28:19.531 [2024-04-25 03:28:53.742473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.531 [2024-04-25 03:28:53.742637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.531 [2024-04-25 03:28:53.742664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.531 qpair failed and we were unable to recover it.
00:28:19.531 [2024-04-25 03:28:53.742850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.531 [2024-04-25 03:28:53.743054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.531 [2024-04-25 03:28:53.743084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.531 qpair failed and we were unable to recover it.
00:28:19.531 [2024-04-25 03:28:53.743311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.531 [2024-04-25 03:28:53.743470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.531 [2024-04-25 03:28:53.743495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.531 qpair failed and we were unable to recover it.
00:28:19.531 [2024-04-25 03:28:53.743697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.531 [2024-04-25 03:28:53.743895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.531 [2024-04-25 03:28:53.743921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.531 qpair failed and we were unable to recover it.
00:28:19.531 [2024-04-25 03:28:53.744281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.531 [2024-04-25 03:28:53.744480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.531 [2024-04-25 03:28:53.744506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.531 qpair failed and we were unable to recover it.
00:28:19.531 [2024-04-25 03:28:53.744689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.531 [2024-04-25 03:28:53.744855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.531 [2024-04-25 03:28:53.744881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.531 qpair failed and we were unable to recover it.
00:28:19.531 [2024-04-25 03:28:53.745087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.531 [2024-04-25 03:28:53.745258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.531 [2024-04-25 03:28:53.745283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.531 qpair failed and we were unable to recover it.
00:28:19.531 [2024-04-25 03:28:53.745452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.531 [2024-04-25 03:28:53.745661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.531 [2024-04-25 03:28:53.745693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.531 qpair failed and we were unable to recover it.
00:28:19.531 [2024-04-25 03:28:53.745876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.531 [2024-04-25 03:28:53.746056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.531 [2024-04-25 03:28:53.746083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.531 qpair failed and we were unable to recover it.
00:28:19.531 [2024-04-25 03:28:53.746250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.531 [2024-04-25 03:28:53.746415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.531 [2024-04-25 03:28:53.746441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.531 qpair failed and we were unable to recover it.
00:28:19.531 [2024-04-25 03:28:53.746642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.531 [2024-04-25 03:28:53.746809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.531 [2024-04-25 03:28:53.746834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.531 qpair failed and we were unable to recover it.
00:28:19.531 [2024-04-25 03:28:53.747063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.531 [2024-04-25 03:28:53.747254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.531 [2024-04-25 03:28:53.747279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.531 qpair failed and we were unable to recover it.
00:28:19.531 [2024-04-25 03:28:53.747481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.531 [2024-04-25 03:28:53.747678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.531 [2024-04-25 03:28:53.747722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.531 qpair failed and we were unable to recover it.
00:28:19.531 [2024-04-25 03:28:53.747890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.531 [2024-04-25 03:28:53.748070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.531 [2024-04-25 03:28:53.748095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.531 qpair failed and we were unable to recover it.
00:28:19.531 [2024-04-25 03:28:53.748294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.531 [2024-04-25 03:28:53.748457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.531 [2024-04-25 03:28:53.748483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.531 qpair failed and we were unable to recover it.
00:28:19.531 [2024-04-25 03:28:53.748649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.531 [2024-04-25 03:28:53.748853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.531 [2024-04-25 03:28:53.748878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.531 qpair failed and we were unable to recover it.
00:28:19.531 [2024-04-25 03:28:53.749056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.531 [2024-04-25 03:28:53.749279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.531 [2024-04-25 03:28:53.749304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.531 qpair failed and we were unable to recover it.
00:28:19.531 [2024-04-25 03:28:53.749469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.531 [2024-04-25 03:28:53.749646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.531 [2024-04-25 03:28:53.749671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.531 qpair failed and we were unable to recover it.
00:28:19.531 [2024-04-25 03:28:53.749872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.531 [2024-04-25 03:28:53.750052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.531 [2024-04-25 03:28:53.750078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.531 qpair failed and we were unable to recover it.
00:28:19.531 [2024-04-25 03:28:53.750245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.531 [2024-04-25 03:28:53.750452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.531 [2024-04-25 03:28:53.750478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.531 qpair failed and we were unable to recover it.
00:28:19.531 [2024-04-25 03:28:53.750644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.531 [2024-04-25 03:28:53.750816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.531 [2024-04-25 03:28:53.750842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.531 qpair failed and we were unable to recover it.
00:28:19.531 [2024-04-25 03:28:53.751013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.531 [2024-04-25 03:28:53.751179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.531 [2024-04-25 03:28:53.751206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.531 qpair failed and we were unable to recover it.
00:28:19.531 [2024-04-25 03:28:53.751406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.531 [2024-04-25 03:28:53.751569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.531 [2024-04-25 03:28:53.751595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.531 qpair failed and we were unable to recover it.
00:28:19.531 [2024-04-25 03:28:53.751811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.531 [2024-04-25 03:28:53.752003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.531 [2024-04-25 03:28:53.752029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.531 qpair failed and we were unable to recover it.
00:28:19.531 [2024-04-25 03:28:53.752213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.531 [2024-04-25 03:28:53.752412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.531 [2024-04-25 03:28:53.752438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.531 qpair failed and we were unable to recover it.
00:28:19.531 [2024-04-25 03:28:53.752643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.531 [2024-04-25 03:28:53.752846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.531 [2024-04-25 03:28:53.752872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.531 qpair failed and we were unable to recover it.
00:28:19.531 [2024-04-25 03:28:53.753048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.531 [2024-04-25 03:28:53.753244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.532 [2024-04-25 03:28:53.753270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.532 qpair failed and we were unable to recover it.
00:28:19.532 [2024-04-25 03:28:53.753480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.532 [2024-04-25 03:28:53.753655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.532 [2024-04-25 03:28:53.753693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.532 qpair failed and we were unable to recover it.
00:28:19.532 [2024-04-25 03:28:53.753885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.532 [2024-04-25 03:28:53.754058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.532 [2024-04-25 03:28:53.754084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.532 qpair failed and we were unable to recover it.
00:28:19.532 [2024-04-25 03:28:53.754259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.532 [2024-04-25 03:28:53.754458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.532 [2024-04-25 03:28:53.754484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.532 qpair failed and we were unable to recover it.
00:28:19.532 [2024-04-25 03:28:53.754713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.532 [2024-04-25 03:28:53.754907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.532 [2024-04-25 03:28:53.754932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.532 qpair failed and we were unable to recover it.
00:28:19.532 [2024-04-25 03:28:53.755130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.532 [2024-04-25 03:28:53.755315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.532 [2024-04-25 03:28:53.755343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.532 qpair failed and we were unable to recover it.
00:28:19.532 [2024-04-25 03:28:53.755509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.532 [2024-04-25 03:28:53.755689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.532 [2024-04-25 03:28:53.755716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.532 qpair failed and we were unable to recover it.
00:28:19.532 [2024-04-25 03:28:53.755879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.532 [2024-04-25 03:28:53.756057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.532 [2024-04-25 03:28:53.756083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.532 qpair failed and we were unable to recover it.
00:28:19.532 [2024-04-25 03:28:53.756282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.532 [2024-04-25 03:28:53.756487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.532 [2024-04-25 03:28:53.756512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.532 qpair failed and we were unable to recover it.
00:28:19.532 [2024-04-25 03:28:53.756690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.532 [2024-04-25 03:28:53.756875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.532 [2024-04-25 03:28:53.756910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.532 qpair failed and we were unable to recover it.
00:28:19.532 [2024-04-25 03:28:53.757138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.532 [2024-04-25 03:28:53.757298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.532 [2024-04-25 03:28:53.757323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.532 qpair failed and we were unable to recover it.
00:28:19.532 [2024-04-25 03:28:53.757592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.532 [2024-04-25 03:28:53.757768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.532 [2024-04-25 03:28:53.757794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.532 qpair failed and we were unable to recover it.
00:28:19.532 [2024-04-25 03:28:53.757989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.532 [2024-04-25 03:28:53.758155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.532 [2024-04-25 03:28:53.758181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.532 qpair failed and we were unable to recover it.
00:28:19.532 [2024-04-25 03:28:53.758373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.532 [2024-04-25 03:28:53.758538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.532 [2024-04-25 03:28:53.758563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.532 qpair failed and we were unable to recover it.
00:28:19.532 [2024-04-25 03:28:53.758726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.532 [2024-04-25 03:28:53.758899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.532 [2024-04-25 03:28:53.758925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.532 qpair failed and we were unable to recover it.
00:28:19.532 [2024-04-25 03:28:53.759154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.532 [2024-04-25 03:28:53.759371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.532 [2024-04-25 03:28:53.759396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.532 qpair failed and we were unable to recover it.
00:28:19.532 [2024-04-25 03:28:53.759571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.532 [2024-04-25 03:28:53.759790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.532 [2024-04-25 03:28:53.759816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.532 qpair failed and we were unable to recover it.
00:28:19.532 [2024-04-25 03:28:53.759993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.532 [2024-04-25 03:28:53.760225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.532 [2024-04-25 03:28:53.760251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.532 qpair failed and we were unable to recover it.
00:28:19.532 [2024-04-25 03:28:53.760466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.532 [2024-04-25 03:28:53.760668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.532 [2024-04-25 03:28:53.760702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.532 qpair failed and we were unable to recover it.
00:28:19.532 [2024-04-25 03:28:53.760907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.532 [2024-04-25 03:28:53.761103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.532 [2024-04-25 03:28:53.761129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.532 qpair failed and we were unable to recover it.
00:28:19.532 [2024-04-25 03:28:53.761333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.532 [2024-04-25 03:28:53.761502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.532 [2024-04-25 03:28:53.761528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.532 qpair failed and we were unable to recover it.
00:28:19.532 [2024-04-25 03:28:53.761748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.532 [2024-04-25 03:28:53.761917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.532 [2024-04-25 03:28:53.761943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.532 qpair failed and we were unable to recover it.
00:28:19.532 [2024-04-25 03:28:53.762144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.532 [2024-04-25 03:28:53.762301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.532 [2024-04-25 03:28:53.762326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.532 qpair failed and we were unable to recover it.
00:28:19.532 [2024-04-25 03:28:53.762509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.532 [2024-04-25 03:28:53.762717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.532 [2024-04-25 03:28:53.762742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.532 qpair failed and we were unable to recover it.
00:28:19.532 [2024-04-25 03:28:53.762940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.532 [2024-04-25 03:28:53.763108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.532 [2024-04-25 03:28:53.763134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.532 qpair failed and we were unable to recover it.
00:28:19.532 [2024-04-25 03:28:53.763323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.532 Malloc0
00:28:19.532 [2024-04-25 03:28:53.763505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.532 [2024-04-25 03:28:53.763531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.532 qpair failed and we were unable to recover it.
00:28:19.532 [2024-04-25 03:28:53.763723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.532 03:28:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:19.532 [2024-04-25 03:28:53.763929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.532 03:28:53 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:28:19.532 [2024-04-25 03:28:53.763959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.532 qpair failed and we were unable to recover it.
00:28:19.532 [2024-04-25 03:28:53.764155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.532 03:28:53 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:19.532 03:28:53 -- common/autotest_common.sh@10 -- # set +x
00:28:19.532 [2024-04-25 03:28:53.764323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.532 [2024-04-25 03:28:53.764349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420
00:28:19.532 qpair failed and we were unable to recover it.
00:28:19.532 [2024-04-25 03:28:53.764543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.533 [2024-04-25 03:28:53.764739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.533 [2024-04-25 03:28:53.764764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.533 qpair failed and we were unable to recover it. 00:28:19.533 [2024-04-25 03:28:53.764934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.533 [2024-04-25 03:28:53.765129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.533 [2024-04-25 03:28:53.765154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.533 qpair failed and we were unable to recover it. 00:28:19.533 [2024-04-25 03:28:53.765349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.533 [2024-04-25 03:28:53.765519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.533 [2024-04-25 03:28:53.765544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.533 qpair failed and we were unable to recover it. 00:28:19.533 [2024-04-25 03:28:53.765732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.533 [2024-04-25 03:28:53.765928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.533 [2024-04-25 03:28:53.765953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.533 qpair failed and we were unable to recover it. 
00:28:19.533 [2024-04-25 03:28:53.767122] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:28:19.533 03:28:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:19.533 03:28:53 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:28:19.533 03:28:53 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:19.533 03:28:53 -- common/autotest_common.sh@10 -- # set +x
00:28:19.534 03:28:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:19.534 03:28:53 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:28:19.534 03:28:53 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:19.534 03:28:53 -- common/autotest_common.sh@10 -- # set +x
00:28:19.535 03:28:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:19.535 03:28:53 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:28:19.535 03:28:53 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:19.535 03:28:53 -- common/autotest_common.sh@10 -- # set +x
00:28:19.535 [2024-04-25 03:28:53.795126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.535 [2024-04-25 03:28:53.795327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.535 [2024-04-25 03:28:53.795353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4cf30 with addr=10.0.0.2, port=4420 00:28:19.535 qpair failed and we were unable to recover it. 00:28:19.535 [2024-04-25 03:28:53.795361] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:19.535 [2024-04-25 03:28:53.797901] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.535 [2024-04-25 03:28:53.798112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.535 [2024-04-25 03:28:53.798138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.535 [2024-04-25 03:28:53.798154] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.535 [2024-04-25 03:28:53.798167] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:19.536 [2024-04-25 03:28:53.798204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.536 qpair failed and we were unable to recover it. 
00:28:19.536 03:28:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:19.536 03:28:53 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:19.536 03:28:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:19.536 03:28:53 -- common/autotest_common.sh@10 -- # set +x 00:28:19.536 03:28:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:19.536 03:28:53 -- host/target_disconnect.sh@58 -- # wait 1623296 00:28:19.536 [2024-04-25 03:28:53.807802] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.536 [2024-04-25 03:28:53.807982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.536 [2024-04-25 03:28:53.808010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.536 [2024-04-25 03:28:53.808026] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.536 [2024-04-25 03:28:53.808039] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:19.536 [2024-04-25 03:28:53.808067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.536 qpair failed and we were unable to recover it. 
00:28:19.536 [2024-04-25 03:28:53.817805] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.536 [2024-04-25 03:28:53.817977] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.536 [2024-04-25 03:28:53.818004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.536 [2024-04-25 03:28:53.818020] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.536 [2024-04-25 03:28:53.818033] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:19.536 [2024-04-25 03:28:53.818062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.536 qpair failed and we were unable to recover it. 
00:28:19.536 [2024-04-25 03:28:53.827726] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.536 [2024-04-25 03:28:53.827904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.536 [2024-04-25 03:28:53.827933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.536 [2024-04-25 03:28:53.827948] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.536 [2024-04-25 03:28:53.827961] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:19.536 [2024-04-25 03:28:53.827990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.536 qpair failed and we were unable to recover it. 
00:28:19.536 [2024-04-25 03:28:53.837800] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.536 [2024-04-25 03:28:53.837983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.536 [2024-04-25 03:28:53.838011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.536 [2024-04-25 03:28:53.838028] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.536 [2024-04-25 03:28:53.838042] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:19.536 [2024-04-25 03:28:53.838070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.536 qpair failed and we were unable to recover it. 
00:28:19.536 [2024-04-25 03:28:53.847790] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.536 [2024-04-25 03:28:53.847954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.536 [2024-04-25 03:28:53.847979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.536 [2024-04-25 03:28:53.847995] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.536 [2024-04-25 03:28:53.848009] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:19.536 [2024-04-25 03:28:53.848037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.536 qpair failed and we were unable to recover it. 
00:28:19.536 [2024-04-25 03:28:53.857811] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.536 [2024-04-25 03:28:53.857991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.536 [2024-04-25 03:28:53.858018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.536 [2024-04-25 03:28:53.858033] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.536 [2024-04-25 03:28:53.858045] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:19.536 [2024-04-25 03:28:53.858073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.536 qpair failed and we were unable to recover it. 
00:28:19.536 [2024-04-25 03:28:53.867799] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.536 [2024-04-25 03:28:53.867975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.536 [2024-04-25 03:28:53.867999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.536 [2024-04-25 03:28:53.868013] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.536 [2024-04-25 03:28:53.868026] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:19.536 [2024-04-25 03:28:53.868055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.536 qpair failed and we were unable to recover it. 
00:28:19.536 [2024-04-25 03:28:53.877818] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.536 [2024-04-25 03:28:53.877992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.536 [2024-04-25 03:28:53.878022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.536 [2024-04-25 03:28:53.878043] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.536 [2024-04-25 03:28:53.878056] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:19.536 [2024-04-25 03:28:53.878085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.536 qpair failed and we were unable to recover it. 
00:28:19.536 [2024-04-25 03:28:53.887847] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.536 [2024-04-25 03:28:53.888012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.536 [2024-04-25 03:28:53.888037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.536 [2024-04-25 03:28:53.888052] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.536 [2024-04-25 03:28:53.888076] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:19.536 [2024-04-25 03:28:53.888104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.536 qpair failed and we were unable to recover it. 
00:28:19.536 [2024-04-25 03:28:53.897914] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.536 [2024-04-25 03:28:53.898085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.536 [2024-04-25 03:28:53.898109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.536 [2024-04-25 03:28:53.898124] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.536 [2024-04-25 03:28:53.898138] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:19.536 [2024-04-25 03:28:53.898167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.536 qpair failed and we were unable to recover it. 
00:28:19.536 [2024-04-25 03:28:53.907925] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.536 [2024-04-25 03:28:53.908094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.536 [2024-04-25 03:28:53.908126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.536 [2024-04-25 03:28:53.908140] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.536 [2024-04-25 03:28:53.908153] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:19.536 [2024-04-25 03:28:53.908180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.536 qpair failed and we were unable to recover it. 
00:28:19.536 [2024-04-25 03:28:53.917987] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.536 [2024-04-25 03:28:53.918163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.536 [2024-04-25 03:28:53.918188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.536 [2024-04-25 03:28:53.918203] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.536 [2024-04-25 03:28:53.918230] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:19.536 [2024-04-25 03:28:53.918259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.536 qpair failed and we were unable to recover it. 
00:28:19.536 [2024-04-25 03:28:53.928042] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.536 [2024-04-25 03:28:53.928234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.536 [2024-04-25 03:28:53.928259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.536 [2024-04-25 03:28:53.928273] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.536 [2024-04-25 03:28:53.928287] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:19.537 [2024-04-25 03:28:53.928316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.537 qpair failed and we were unable to recover it. 
00:28:19.537 [2024-04-25 03:28:53.938057] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.537 [2024-04-25 03:28:53.938228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.537 [2024-04-25 03:28:53.938252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.537 [2024-04-25 03:28:53.938267] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.537 [2024-04-25 03:28:53.938279] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:19.537 [2024-04-25 03:28:53.938307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.537 qpair failed and we were unable to recover it. 
00:28:19.537 [2024-04-25 03:28:53.948047] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.537 [2024-04-25 03:28:53.948225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.537 [2024-04-25 03:28:53.948250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.537 [2024-04-25 03:28:53.948265] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.537 [2024-04-25 03:28:53.948277] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:19.537 [2024-04-25 03:28:53.948306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.537 qpair failed and we were unable to recover it. 
00:28:19.537 [2024-04-25 03:28:53.958154] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.537 [2024-04-25 03:28:53.958329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.537 [2024-04-25 03:28:53.958359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.537 [2024-04-25 03:28:53.958374] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.537 [2024-04-25 03:28:53.958401] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:19.537 [2024-04-25 03:28:53.958436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.537 qpair failed and we were unable to recover it. 
00:28:19.537 [2024-04-25 03:28:53.968309] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.537 [2024-04-25 03:28:53.968496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.537 [2024-04-25 03:28:53.968527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.537 [2024-04-25 03:28:53.968542] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.537 [2024-04-25 03:28:53.968555] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:19.537 [2024-04-25 03:28:53.968584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.537 qpair failed and we were unable to recover it. 
00:28:19.537 [2024-04-25 03:28:53.978191] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.537 [2024-04-25 03:28:53.978360] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.537 [2024-04-25 03:28:53.978385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.537 [2024-04-25 03:28:53.978400] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.537 [2024-04-25 03:28:53.978413] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:19.537 [2024-04-25 03:28:53.978441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.537 qpair failed and we were unable to recover it. 
00:28:19.537 [2024-04-25 03:28:53.988212] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.537 [2024-04-25 03:28:53.988389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.537 [2024-04-25 03:28:53.988414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.537 [2024-04-25 03:28:53.988428] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.537 [2024-04-25 03:28:53.988441] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:19.537 [2024-04-25 03:28:53.988470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.537 qpair failed and we were unable to recover it. 
00:28:19.537 [2024-04-25 03:28:53.998281] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.537 [2024-04-25 03:28:53.998458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.537 [2024-04-25 03:28:53.998483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.537 [2024-04-25 03:28:53.998498] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.537 [2024-04-25 03:28:53.998511] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:19.537 [2024-04-25 03:28:53.998538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.537 qpair failed and we were unable to recover it. 
00:28:19.537 [2024-04-25 03:28:54.008230] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.537 [2024-04-25 03:28:54.008402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.537 [2024-04-25 03:28:54.008427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.537 [2024-04-25 03:28:54.008441] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.537 [2024-04-25 03:28:54.008454] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:19.537 [2024-04-25 03:28:54.008482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.537 qpair failed and we were unable to recover it. 
00:28:19.537 [2024-04-25 03:28:54.018270] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.537 [2024-04-25 03:28:54.018439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.537 [2024-04-25 03:28:54.018464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.537 [2024-04-25 03:28:54.018479] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.537 [2024-04-25 03:28:54.018492] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:19.537 [2024-04-25 03:28:54.018520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.537 qpair failed and we were unable to recover it. 
00:28:19.797 [2024-04-25 03:28:54.028312] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.797 [2024-04-25 03:28:54.028496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.797 [2024-04-25 03:28:54.028521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.797 [2024-04-25 03:28:54.028536] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.797 [2024-04-25 03:28:54.028549] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:19.797 [2024-04-25 03:28:54.028578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.797 qpair failed and we were unable to recover it. 
00:28:19.797 [2024-04-25 03:28:54.038322] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.797 [2024-04-25 03:28:54.038505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.797 [2024-04-25 03:28:54.038530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.797 [2024-04-25 03:28:54.038544] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.797 [2024-04-25 03:28:54.038557] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:19.797 [2024-04-25 03:28:54.038586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.797 qpair failed and we were unable to recover it. 
00:28:19.797 [2024-04-25 03:28:54.048349] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.797 [2024-04-25 03:28:54.048521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.797 [2024-04-25 03:28:54.048546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.797 [2024-04-25 03:28:54.048561] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.797 [2024-04-25 03:28:54.048574] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:19.797 [2024-04-25 03:28:54.048601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.797 qpair failed and we were unable to recover it. 
00:28:19.797 [2024-04-25 03:28:54.058393] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.797 [2024-04-25 03:28:54.058567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.797 [2024-04-25 03:28:54.058600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.797 [2024-04-25 03:28:54.058618] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.797 [2024-04-25 03:28:54.058638] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:19.797 [2024-04-25 03:28:54.058669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.797 qpair failed and we were unable to recover it. 
00:28:19.797 [2024-04-25 03:28:54.068382] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.797 [2024-04-25 03:28:54.068562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.797 [2024-04-25 03:28:54.068587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.797 [2024-04-25 03:28:54.068601] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.797 [2024-04-25 03:28:54.068614] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:19.797 [2024-04-25 03:28:54.068650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.797 qpair failed and we were unable to recover it. 
00:28:19.797 [2024-04-25 03:28:54.078449] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.797 [2024-04-25 03:28:54.078637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.797 [2024-04-25 03:28:54.078663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.797 [2024-04-25 03:28:54.078678] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.797 [2024-04-25 03:28:54.078691] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:19.797 [2024-04-25 03:28:54.078720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.797 qpair failed and we were unable to recover it. 
00:28:19.797 [2024-04-25 03:28:54.088468] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.797 [2024-04-25 03:28:54.088658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.797 [2024-04-25 03:28:54.088683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.797 [2024-04-25 03:28:54.088698] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.797 [2024-04-25 03:28:54.088711] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:19.797 [2024-04-25 03:28:54.088739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.797 qpair failed and we were unable to recover it. 
00:28:19.797 [2024-04-25 03:28:54.098501] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.797 [2024-04-25 03:28:54.098691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.797 [2024-04-25 03:28:54.098715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.797 [2024-04-25 03:28:54.098730] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.797 [2024-04-25 03:28:54.098743] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:19.797 [2024-04-25 03:28:54.098778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.797 qpair failed and we were unable to recover it. 
00:28:19.797 [2024-04-25 03:28:54.108507] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:19.797 [2024-04-25 03:28:54.108688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:19.797 [2024-04-25 03:28:54.108714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:19.797 [2024-04-25 03:28:54.108729] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:19.797 [2024-04-25 03:28:54.108741] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:19.797 [2024-04-25 03:28:54.108769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:19.797 qpair failed and we were unable to recover it.
00:28:19.797 [2024-04-25 03:28:54.118555] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:19.797 [2024-04-25 03:28:54.118735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:19.797 [2024-04-25 03:28:54.118760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:19.797 [2024-04-25 03:28:54.118774] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:19.797 [2024-04-25 03:28:54.118787] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:19.797 [2024-04-25 03:28:54.118815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:19.797 qpair failed and we were unable to recover it.
00:28:19.797 [2024-04-25 03:28:54.128601] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:19.797 [2024-04-25 03:28:54.128782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:19.797 [2024-04-25 03:28:54.128807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:19.797 [2024-04-25 03:28:54.128821] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:19.797 [2024-04-25 03:28:54.128834] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:19.797 [2024-04-25 03:28:54.128863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:19.797 qpair failed and we were unable to recover it.
00:28:19.797 [2024-04-25 03:28:54.138688] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:19.797 [2024-04-25 03:28:54.138856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:19.797 [2024-04-25 03:28:54.138880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:19.797 [2024-04-25 03:28:54.138895] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:19.797 [2024-04-25 03:28:54.138908] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:19.797 [2024-04-25 03:28:54.138937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:19.797 qpair failed and we were unable to recover it.
00:28:19.797 [2024-04-25 03:28:54.148612] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:19.798 [2024-04-25 03:28:54.148793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:19.798 [2024-04-25 03:28:54.148823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:19.798 [2024-04-25 03:28:54.148839] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:19.798 [2024-04-25 03:28:54.148852] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:19.798 [2024-04-25 03:28:54.148881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:19.798 qpair failed and we were unable to recover it.
00:28:19.798 [2024-04-25 03:28:54.158647] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:19.798 [2024-04-25 03:28:54.158820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:19.798 [2024-04-25 03:28:54.158845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:19.798 [2024-04-25 03:28:54.158860] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:19.798 [2024-04-25 03:28:54.158873] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:19.798 [2024-04-25 03:28:54.158902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:19.798 qpair failed and we were unable to recover it.
00:28:19.798 [2024-04-25 03:28:54.168667] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:19.798 [2024-04-25 03:28:54.168834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:19.798 [2024-04-25 03:28:54.168858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:19.798 [2024-04-25 03:28:54.168872] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:19.798 [2024-04-25 03:28:54.168885] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:19.798 [2024-04-25 03:28:54.168913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:19.798 qpair failed and we were unable to recover it.
00:28:19.798 [2024-04-25 03:28:54.178792] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:19.798 [2024-04-25 03:28:54.178961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:19.798 [2024-04-25 03:28:54.178986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:19.798 [2024-04-25 03:28:54.179000] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:19.798 [2024-04-25 03:28:54.179013] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:19.798 [2024-04-25 03:28:54.179041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:19.798 qpair failed and we were unable to recover it.
00:28:19.798 [2024-04-25 03:28:54.188726] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:19.798 [2024-04-25 03:28:54.188900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:19.798 [2024-04-25 03:28:54.188925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:19.798 [2024-04-25 03:28:54.188940] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:19.798 [2024-04-25 03:28:54.188953] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:19.798 [2024-04-25 03:28:54.188986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:19.798 qpair failed and we were unable to recover it.
00:28:19.798 [2024-04-25 03:28:54.198749] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:19.798 [2024-04-25 03:28:54.198934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:19.798 [2024-04-25 03:28:54.198959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:19.798 [2024-04-25 03:28:54.198973] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:19.798 [2024-04-25 03:28:54.198986] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:19.798 [2024-04-25 03:28:54.199030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:19.798 qpair failed and we were unable to recover it.
00:28:19.798 [2024-04-25 03:28:54.208821] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:19.798 [2024-04-25 03:28:54.209025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:19.798 [2024-04-25 03:28:54.209052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:19.798 [2024-04-25 03:28:54.209072] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:19.798 [2024-04-25 03:28:54.209085] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:19.798 [2024-04-25 03:28:54.209115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:19.798 qpair failed and we were unable to recover it.
00:28:19.798 [2024-04-25 03:28:54.218911] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:19.798 [2024-04-25 03:28:54.219086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:19.798 [2024-04-25 03:28:54.219111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:19.798 [2024-04-25 03:28:54.219126] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:19.798 [2024-04-25 03:28:54.219139] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:19.798 [2024-04-25 03:28:54.219168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:19.798 qpair failed and we were unable to recover it.
00:28:19.798 [2024-04-25 03:28:54.228874] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:19.798 [2024-04-25 03:28:54.229044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:19.798 [2024-04-25 03:28:54.229069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:19.798 [2024-04-25 03:28:54.229084] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:19.798 [2024-04-25 03:28:54.229096] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:19.798 [2024-04-25 03:28:54.229125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:19.798 qpair failed and we were unable to recover it.
00:28:19.798 [2024-04-25 03:28:54.238914] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:19.798 [2024-04-25 03:28:54.239085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:19.798 [2024-04-25 03:28:54.239129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:19.798 [2024-04-25 03:28:54.239144] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:19.798 [2024-04-25 03:28:54.239158] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:19.798 [2024-04-25 03:28:54.239201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:19.798 qpair failed and we were unable to recover it.
00:28:19.798 [2024-04-25 03:28:54.248905] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:19.798 [2024-04-25 03:28:54.249077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:19.798 [2024-04-25 03:28:54.249101] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:19.798 [2024-04-25 03:28:54.249116] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:19.798 [2024-04-25 03:28:54.249129] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:19.798 [2024-04-25 03:28:54.249157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:19.798 qpair failed and we were unable to recover it.
00:28:19.798 [2024-04-25 03:28:54.258917] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:19.798 [2024-04-25 03:28:54.259091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:19.798 [2024-04-25 03:28:54.259115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:19.798 [2024-04-25 03:28:54.259129] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:19.798 [2024-04-25 03:28:54.259142] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:19.798 [2024-04-25 03:28:54.259171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:19.798 qpair failed and we were unable to recover it.
00:28:19.798 [2024-04-25 03:28:54.269011] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:19.798 [2024-04-25 03:28:54.269192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:19.798 [2024-04-25 03:28:54.269216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:19.798 [2024-04-25 03:28:54.269231] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:19.798 [2024-04-25 03:28:54.269244] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:19.798 [2024-04-25 03:28:54.269275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:19.798 qpair failed and we were unable to recover it.
00:28:19.798 [2024-04-25 03:28:54.278983] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:19.798 [2024-04-25 03:28:54.279157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:19.798 [2024-04-25 03:28:54.279182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:19.798 [2024-04-25 03:28:54.279197] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:19.798 [2024-04-25 03:28:54.279215] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:19.799 [2024-04-25 03:28:54.279245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:19.799 qpair failed and we were unable to recover it.
00:28:19.799 [2024-04-25 03:28:54.289047] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:19.799 [2024-04-25 03:28:54.289229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:19.799 [2024-04-25 03:28:54.289254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:19.799 [2024-04-25 03:28:54.289269] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:19.799 [2024-04-25 03:28:54.289282] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:19.799 [2024-04-25 03:28:54.289313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:19.799 qpair failed and we were unable to recover it.
00:28:20.058 [2024-04-25 03:28:54.299052] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.058 [2024-04-25 03:28:54.299221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.058 [2024-04-25 03:28:54.299245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.058 [2024-04-25 03:28:54.299260] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.058 [2024-04-25 03:28:54.299272] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:20.058 [2024-04-25 03:28:54.299301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.058 qpair failed and we were unable to recover it.
00:28:20.058 [2024-04-25 03:28:54.309082] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.058 [2024-04-25 03:28:54.309258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.058 [2024-04-25 03:28:54.309284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.058 [2024-04-25 03:28:54.309299] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.058 [2024-04-25 03:28:54.309311] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:20.058 [2024-04-25 03:28:54.309339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.058 qpair failed and we were unable to recover it.
00:28:20.058 [2024-04-25 03:28:54.319123] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.058 [2024-04-25 03:28:54.319297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.058 [2024-04-25 03:28:54.319322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.058 [2024-04-25 03:28:54.319350] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.058 [2024-04-25 03:28:54.319363] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:20.058 [2024-04-25 03:28:54.319392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.058 qpair failed and we were unable to recover it.
00:28:20.058 [2024-04-25 03:28:54.329116] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.058 [2024-04-25 03:28:54.329287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.058 [2024-04-25 03:28:54.329317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.058 [2024-04-25 03:28:54.329332] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.058 [2024-04-25 03:28:54.329345] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:20.058 [2024-04-25 03:28:54.329374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.058 qpair failed and we were unable to recover it.
00:28:20.058 [2024-04-25 03:28:54.339277] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.058 [2024-04-25 03:28:54.339448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.058 [2024-04-25 03:28:54.339473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.058 [2024-04-25 03:28:54.339488] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.058 [2024-04-25 03:28:54.339501] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:20.058 [2024-04-25 03:28:54.339529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.058 qpair failed and we were unable to recover it.
00:28:20.058 [2024-04-25 03:28:54.349186] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.059 [2024-04-25 03:28:54.349373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.059 [2024-04-25 03:28:54.349398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.059 [2024-04-25 03:28:54.349413] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.059 [2024-04-25 03:28:54.349427] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:20.059 [2024-04-25 03:28:54.349457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.059 qpair failed and we were unable to recover it.
00:28:20.059 [2024-04-25 03:28:54.359228] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.059 [2024-04-25 03:28:54.359404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.059 [2024-04-25 03:28:54.359429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.059 [2024-04-25 03:28:54.359444] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.059 [2024-04-25 03:28:54.359457] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:20.059 [2024-04-25 03:28:54.359485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.059 qpair failed and we were unable to recover it.
00:28:20.059 [2024-04-25 03:28:54.369223] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.059 [2024-04-25 03:28:54.369388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.059 [2024-04-25 03:28:54.369413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.059 [2024-04-25 03:28:54.369427] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.059 [2024-04-25 03:28:54.369444] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:20.059 [2024-04-25 03:28:54.369474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.059 qpair failed and we were unable to recover it.
00:28:20.059 [2024-04-25 03:28:54.379266] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.059 [2024-04-25 03:28:54.379450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.059 [2024-04-25 03:28:54.379475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.059 [2024-04-25 03:28:54.379489] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.059 [2024-04-25 03:28:54.379502] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:20.059 [2024-04-25 03:28:54.379531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.059 qpair failed and we were unable to recover it.
00:28:20.059 [2024-04-25 03:28:54.389341] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.059 [2024-04-25 03:28:54.389529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.059 [2024-04-25 03:28:54.389553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.059 [2024-04-25 03:28:54.389568] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.059 [2024-04-25 03:28:54.389582] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:20.059 [2024-04-25 03:28:54.389610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.059 qpair failed and we were unable to recover it.
00:28:20.059 [2024-04-25 03:28:54.399361] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.059 [2024-04-25 03:28:54.399535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.059 [2024-04-25 03:28:54.399559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.059 [2024-04-25 03:28:54.399590] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.059 [2024-04-25 03:28:54.399602] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:20.059 [2024-04-25 03:28:54.399652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.059 qpair failed and we were unable to recover it.
00:28:20.059 [2024-04-25 03:28:54.409367] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.059 [2024-04-25 03:28:54.409538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.059 [2024-04-25 03:28:54.409563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.059 [2024-04-25 03:28:54.409577] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.059 [2024-04-25 03:28:54.409590] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:20.059 [2024-04-25 03:28:54.409618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.059 qpair failed and we were unable to recover it.
00:28:20.059 [2024-04-25 03:28:54.419395] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.059 [2024-04-25 03:28:54.419565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.059 [2024-04-25 03:28:54.419590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.059 [2024-04-25 03:28:54.419605] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.059 [2024-04-25 03:28:54.419618] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:20.059 [2024-04-25 03:28:54.419655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.059 qpair failed and we were unable to recover it.
00:28:20.059 [2024-04-25 03:28:54.429452] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.059 [2024-04-25 03:28:54.429665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.059 [2024-04-25 03:28:54.429692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.059 [2024-04-25 03:28:54.429708] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.059 [2024-04-25 03:28:54.429725] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:20.059 [2024-04-25 03:28:54.429757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.059 qpair failed and we were unable to recover it. 
00:28:20.059 [2024-04-25 03:28:54.439486] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.059 [2024-04-25 03:28:54.439670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.059 [2024-04-25 03:28:54.439695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.059 [2024-04-25 03:28:54.439710] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.059 [2024-04-25 03:28:54.439723] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:20.059 [2024-04-25 03:28:54.439752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.059 qpair failed and we were unable to recover it. 
00:28:20.059 [2024-04-25 03:28:54.449471] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.059 [2024-04-25 03:28:54.449658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.059 [2024-04-25 03:28:54.449683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.059 [2024-04-25 03:28:54.449698] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.059 [2024-04-25 03:28:54.449710] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:20.059 [2024-04-25 03:28:54.449739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.059 qpair failed and we were unable to recover it. 
00:28:20.059 [2024-04-25 03:28:54.459478] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.059 [2024-04-25 03:28:54.459659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.059 [2024-04-25 03:28:54.459685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.059 [2024-04-25 03:28:54.459700] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.059 [2024-04-25 03:28:54.459718] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:20.059 [2024-04-25 03:28:54.459747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.059 qpair failed and we were unable to recover it. 
00:28:20.059 [2024-04-25 03:28:54.469558] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.059 [2024-04-25 03:28:54.469773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.059 [2024-04-25 03:28:54.469804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.059 [2024-04-25 03:28:54.469821] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.059 [2024-04-25 03:28:54.469833] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:20.059 [2024-04-25 03:28:54.469863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.059 qpair failed and we were unable to recover it. 
00:28:20.059 [2024-04-25 03:28:54.479565] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.059 [2024-04-25 03:28:54.479761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.060 [2024-04-25 03:28:54.479788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.060 [2024-04-25 03:28:54.479802] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.060 [2024-04-25 03:28:54.479815] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:20.060 [2024-04-25 03:28:54.479845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.060 qpair failed and we were unable to recover it. 
00:28:20.060 [2024-04-25 03:28:54.489709] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.060 [2024-04-25 03:28:54.489896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.060 [2024-04-25 03:28:54.489921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.060 [2024-04-25 03:28:54.489936] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.060 [2024-04-25 03:28:54.489948] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:20.060 [2024-04-25 03:28:54.489975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.060 qpair failed and we were unable to recover it. 
00:28:20.060 [2024-04-25 03:28:54.499609] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.060 [2024-04-25 03:28:54.499786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.060 [2024-04-25 03:28:54.499812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.060 [2024-04-25 03:28:54.499826] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.060 [2024-04-25 03:28:54.499838] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:20.060 [2024-04-25 03:28:54.499866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.060 qpair failed and we were unable to recover it. 
00:28:20.060 [2024-04-25 03:28:54.509651] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.060 [2024-04-25 03:28:54.509829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.060 [2024-04-25 03:28:54.509855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.060 [2024-04-25 03:28:54.509870] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.060 [2024-04-25 03:28:54.509882] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:20.060 [2024-04-25 03:28:54.509912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.060 qpair failed and we were unable to recover it. 
00:28:20.060 [2024-04-25 03:28:54.519672] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.060 [2024-04-25 03:28:54.519849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.060 [2024-04-25 03:28:54.519874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.060 [2024-04-25 03:28:54.519889] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.060 [2024-04-25 03:28:54.519901] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:20.060 [2024-04-25 03:28:54.519930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.060 qpair failed and we were unable to recover it. 
00:28:20.060 [2024-04-25 03:28:54.529719] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.060 [2024-04-25 03:28:54.529930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.060 [2024-04-25 03:28:54.529955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.060 [2024-04-25 03:28:54.529970] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.060 [2024-04-25 03:28:54.529982] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:20.060 [2024-04-25 03:28:54.530010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.060 qpair failed and we were unable to recover it. 
00:28:20.060 [2024-04-25 03:28:54.539718] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.060 [2024-04-25 03:28:54.539885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.060 [2024-04-25 03:28:54.539910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.060 [2024-04-25 03:28:54.539926] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.060 [2024-04-25 03:28:54.539938] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:20.060 [2024-04-25 03:28:54.539966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.060 qpair failed and we were unable to recover it. 
00:28:20.060 [2024-04-25 03:28:54.549855] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.060 [2024-04-25 03:28:54.550037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.060 [2024-04-25 03:28:54.550062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.060 [2024-04-25 03:28:54.550076] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.060 [2024-04-25 03:28:54.550094] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:20.060 [2024-04-25 03:28:54.550122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.060 qpair failed and we were unable to recover it. 
00:28:20.319 [2024-04-25 03:28:54.559780] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.319 [2024-04-25 03:28:54.559999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.319 [2024-04-25 03:28:54.560025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.319 [2024-04-25 03:28:54.560039] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.319 [2024-04-25 03:28:54.560052] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:20.319 [2024-04-25 03:28:54.560080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.319 qpair failed and we were unable to recover it. 
00:28:20.319 [2024-04-25 03:28:54.569873] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.319 [2024-04-25 03:28:54.570081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.319 [2024-04-25 03:28:54.570108] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.319 [2024-04-25 03:28:54.570123] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.319 [2024-04-25 03:28:54.570138] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:20.319 [2024-04-25 03:28:54.570168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.319 qpair failed and we were unable to recover it. 
00:28:20.319 [2024-04-25 03:28:54.579865] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.319 [2024-04-25 03:28:54.580080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.319 [2024-04-25 03:28:54.580107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.319 [2024-04-25 03:28:54.580121] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.319 [2024-04-25 03:28:54.580134] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:20.319 [2024-04-25 03:28:54.580162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.319 qpair failed and we were unable to recover it. 
00:28:20.319 [2024-04-25 03:28:54.589854] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.319 [2024-04-25 03:28:54.590040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.319 [2024-04-25 03:28:54.590066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.319 [2024-04-25 03:28:54.590081] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.319 [2024-04-25 03:28:54.590093] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:20.319 [2024-04-25 03:28:54.590121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.319 qpair failed and we were unable to recover it. 
00:28:20.319 [2024-04-25 03:28:54.599975] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.319 [2024-04-25 03:28:54.600159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.319 [2024-04-25 03:28:54.600185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.319 [2024-04-25 03:28:54.600199] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.319 [2024-04-25 03:28:54.600212] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:20.319 [2024-04-25 03:28:54.600240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.319 qpair failed and we were unable to recover it. 
00:28:20.319 [2024-04-25 03:28:54.609918] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.319 [2024-04-25 03:28:54.610133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.320 [2024-04-25 03:28:54.610159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.320 [2024-04-25 03:28:54.610173] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.320 [2024-04-25 03:28:54.610186] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:20.320 [2024-04-25 03:28:54.610213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.320 qpair failed and we were unable to recover it. 
00:28:20.320 [2024-04-25 03:28:54.619991] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.320 [2024-04-25 03:28:54.620160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.320 [2024-04-25 03:28:54.620185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.320 [2024-04-25 03:28:54.620199] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.320 [2024-04-25 03:28:54.620212] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:20.320 [2024-04-25 03:28:54.620240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.320 qpair failed and we were unable to recover it. 
00:28:20.320 [2024-04-25 03:28:54.630037] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.320 [2024-04-25 03:28:54.630250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.320 [2024-04-25 03:28:54.630276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.320 [2024-04-25 03:28:54.630290] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.320 [2024-04-25 03:28:54.630302] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:20.320 [2024-04-25 03:28:54.630330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.320 qpair failed and we were unable to recover it. 
00:28:20.320 [2024-04-25 03:28:54.640021] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.320 [2024-04-25 03:28:54.640241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.320 [2024-04-25 03:28:54.640270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.320 [2024-04-25 03:28:54.640291] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.320 [2024-04-25 03:28:54.640303] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:20.320 [2024-04-25 03:28:54.640332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.320 qpair failed and we were unable to recover it. 
00:28:20.320 [2024-04-25 03:28:54.650068] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.320 [2024-04-25 03:28:54.650255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.320 [2024-04-25 03:28:54.650282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.320 [2024-04-25 03:28:54.650297] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.320 [2024-04-25 03:28:54.650309] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:20.320 [2024-04-25 03:28:54.650336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.320 qpair failed and we were unable to recover it. 
00:28:20.320 [2024-04-25 03:28:54.660183] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.320 [2024-04-25 03:28:54.660352] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.320 [2024-04-25 03:28:54.660378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.320 [2024-04-25 03:28:54.660392] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.320 [2024-04-25 03:28:54.660405] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:20.320 [2024-04-25 03:28:54.660432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.320 qpair failed and we were unable to recover it. 
00:28:20.320 [2024-04-25 03:28:54.670125] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.320 [2024-04-25 03:28:54.670334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.320 [2024-04-25 03:28:54.670360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.320 [2024-04-25 03:28:54.670374] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.320 [2024-04-25 03:28:54.670387] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:20.320 [2024-04-25 03:28:54.670415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.320 qpair failed and we were unable to recover it. 
00:28:20.320 [2024-04-25 03:28:54.680128] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.320 [2024-04-25 03:28:54.680311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.320 [2024-04-25 03:28:54.680337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.320 [2024-04-25 03:28:54.680351] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.320 [2024-04-25 03:28:54.680363] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:20.320 [2024-04-25 03:28:54.680391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.320 qpair failed and we were unable to recover it. 
00:28:20.320 [2024-04-25 03:28:54.690156] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.320 [2024-04-25 03:28:54.690323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.320 [2024-04-25 03:28:54.690348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.320 [2024-04-25 03:28:54.690362] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.320 [2024-04-25 03:28:54.690375] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:20.320 [2024-04-25 03:28:54.690402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.320 qpair failed and we were unable to recover it. 
00:28:20.320 [2024-04-25 03:28:54.700172] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.320 [2024-04-25 03:28:54.700353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.320 [2024-04-25 03:28:54.700378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.320 [2024-04-25 03:28:54.700392] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.320 [2024-04-25 03:28:54.700404] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:20.320 [2024-04-25 03:28:54.700431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.320 qpair failed and we were unable to recover it. 
00:28:20.320 [2024-04-25 03:28:54.710233] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.320 [2024-04-25 03:28:54.710407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.320 [2024-04-25 03:28:54.710432] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.320 [2024-04-25 03:28:54.710446] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.320 [2024-04-25 03:28:54.710458] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:20.320 [2024-04-25 03:28:54.710486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.320 qpair failed and we were unable to recover it. 
00:28:20.320 [2024-04-25 03:28:54.720249] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.320 [2024-04-25 03:28:54.720434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.320 [2024-04-25 03:28:54.720460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.320 [2024-04-25 03:28:54.720475] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.320 [2024-04-25 03:28:54.720488] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:20.320 [2024-04-25 03:28:54.720516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.320 qpair failed and we were unable to recover it. 
00:28:20.320 [2024-04-25 03:28:54.730259] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.320 [2024-04-25 03:28:54.730429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.320 [2024-04-25 03:28:54.730455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.320 [2024-04-25 03:28:54.730476] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.320 [2024-04-25 03:28:54.730489] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:20.320 [2024-04-25 03:28:54.730518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.320 qpair failed and we were unable to recover it. 
00:28:20.320 [2024-04-25 03:28:54.740276] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.320 [2024-04-25 03:28:54.740446] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.320 [2024-04-25 03:28:54.740472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.320 [2024-04-25 03:28:54.740487] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.320 [2024-04-25 03:28:54.740499] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:20.320 [2024-04-25 03:28:54.740528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.321 qpair failed and we were unable to recover it. 
00:28:20.321 [2024-04-25 03:28:54.750327] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.321 [2024-04-25 03:28:54.750551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.321 [2024-04-25 03:28:54.750577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.321 [2024-04-25 03:28:54.750592] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.321 [2024-04-25 03:28:54.750604] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:20.321 [2024-04-25 03:28:54.750646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.321 qpair failed and we were unable to recover it. 
00:28:20.321 [2024-04-25 03:28:54.760348] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.321 [2024-04-25 03:28:54.760526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.321 [2024-04-25 03:28:54.760552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.321 [2024-04-25 03:28:54.760567] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.321 [2024-04-25 03:28:54.760579] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:20.321 [2024-04-25 03:28:54.760607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.321 qpair failed and we were unable to recover it. 
00:28:20.321 [2024-04-25 03:28:54.770420] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.321 [2024-04-25 03:28:54.770595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.321 [2024-04-25 03:28:54.770620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.321 [2024-04-25 03:28:54.770648] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.321 [2024-04-25 03:28:54.770661] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:20.321 [2024-04-25 03:28:54.770690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.321 qpair failed and we were unable to recover it. 
00:28:20.321 [2024-04-25 03:28:54.780510] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.321 [2024-04-25 03:28:54.780687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.321 [2024-04-25 03:28:54.780713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.321 [2024-04-25 03:28:54.780728] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.321 [2024-04-25 03:28:54.780740] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:20.321 [2024-04-25 03:28:54.780768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.321 qpair failed and we were unable to recover it. 
00:28:20.321 [2024-04-25 03:28:54.790543] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.321 [2024-04-25 03:28:54.790731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.321 [2024-04-25 03:28:54.790757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.321 [2024-04-25 03:28:54.790772] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.321 [2024-04-25 03:28:54.790785] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:20.321 [2024-04-25 03:28:54.790812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.321 qpair failed and we were unable to recover it. 
00:28:20.321 [2024-04-25 03:28:54.800512] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.321 [2024-04-25 03:28:54.800750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.321 [2024-04-25 03:28:54.800777] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.321 [2024-04-25 03:28:54.800792] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.321 [2024-04-25 03:28:54.800804] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:20.321 [2024-04-25 03:28:54.800833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.321 qpair failed and we were unable to recover it. 
00:28:20.321 [2024-04-25 03:28:54.810507] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.321 [2024-04-25 03:28:54.810681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.321 [2024-04-25 03:28:54.810708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.321 [2024-04-25 03:28:54.810723] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.321 [2024-04-25 03:28:54.810736] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:20.321 [2024-04-25 03:28:54.810765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.321 qpair failed and we were unable to recover it. 
00:28:20.580 [2024-04-25 03:28:54.820556] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.580 [2024-04-25 03:28:54.820741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.580 [2024-04-25 03:28:54.820767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.580 [2024-04-25 03:28:54.820788] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.580 [2024-04-25 03:28:54.820801] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:20.580 [2024-04-25 03:28:54.820829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.580 qpair failed and we were unable to recover it. 
00:28:20.580 [2024-04-25 03:28:54.830611] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.580 [2024-04-25 03:28:54.830829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.580 [2024-04-25 03:28:54.830855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.580 [2024-04-25 03:28:54.830870] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.580 [2024-04-25 03:28:54.830883] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:20.580 [2024-04-25 03:28:54.830911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.580 qpair failed and we were unable to recover it. 
00:28:20.580 [2024-04-25 03:28:54.840581] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.580 [2024-04-25 03:28:54.840770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.580 [2024-04-25 03:28:54.840797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.580 [2024-04-25 03:28:54.840811] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.580 [2024-04-25 03:28:54.840824] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:20.580 [2024-04-25 03:28:54.840852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.580 qpair failed and we were unable to recover it. 
00:28:20.580 [2024-04-25 03:28:54.850664] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.580 [2024-04-25 03:28:54.850897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.580 [2024-04-25 03:28:54.850923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.580 [2024-04-25 03:28:54.850938] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.580 [2024-04-25 03:28:54.850950] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:20.580 [2024-04-25 03:28:54.850978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.580 qpair failed and we were unable to recover it. 
00:28:20.580 [2024-04-25 03:28:54.860661] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.580 [2024-04-25 03:28:54.860834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.581 [2024-04-25 03:28:54.860860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.581 [2024-04-25 03:28:54.860874] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.581 [2024-04-25 03:28:54.860887] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:20.581 [2024-04-25 03:28:54.860916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.581 qpair failed and we were unable to recover it. 
00:28:20.581 [2024-04-25 03:28:54.870682] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.581 [2024-04-25 03:28:54.870894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.581 [2024-04-25 03:28:54.870919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.581 [2024-04-25 03:28:54.870935] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.581 [2024-04-25 03:28:54.870947] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:20.581 [2024-04-25 03:28:54.870975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.581 qpair failed and we were unable to recover it. 
00:28:20.581 [2024-04-25 03:28:54.880704] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.581 [2024-04-25 03:28:54.880882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.581 [2024-04-25 03:28:54.880908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.581 [2024-04-25 03:28:54.880923] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.581 [2024-04-25 03:28:54.880936] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:20.581 [2024-04-25 03:28:54.880964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.581 qpair failed and we were unable to recover it. 
00:28:20.581 [2024-04-25 03:28:54.890765] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.581 [2024-04-25 03:28:54.890938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.581 [2024-04-25 03:28:54.890964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.581 [2024-04-25 03:28:54.890979] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.581 [2024-04-25 03:28:54.890992] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:20.581 [2024-04-25 03:28:54.891021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.581 qpair failed and we were unable to recover it. 
00:28:20.581 [2024-04-25 03:28:54.900785] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.581 [2024-04-25 03:28:54.900965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.581 [2024-04-25 03:28:54.900991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.581 [2024-04-25 03:28:54.901006] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.581 [2024-04-25 03:28:54.901019] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:20.581 [2024-04-25 03:28:54.901047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.581 qpair failed and we were unable to recover it. 
00:28:20.581 [2024-04-25 03:28:54.910816] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.581 [2024-04-25 03:28:54.911039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.581 [2024-04-25 03:28:54.911070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.581 [2024-04-25 03:28:54.911086] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.581 [2024-04-25 03:28:54.911099] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:20.581 [2024-04-25 03:28:54.911127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.581 qpair failed and we were unable to recover it. 
00:28:20.581 [2024-04-25 03:28:54.920823] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.581 [2024-04-25 03:28:54.921053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.581 [2024-04-25 03:28:54.921078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.581 [2024-04-25 03:28:54.921093] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.581 [2024-04-25 03:28:54.921106] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:20.581 [2024-04-25 03:28:54.921134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.581 qpair failed and we were unable to recover it. 
00:28:20.581 [2024-04-25 03:28:54.930937] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.581 [2024-04-25 03:28:54.931114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.581 [2024-04-25 03:28:54.931140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.581 [2024-04-25 03:28:54.931155] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.581 [2024-04-25 03:28:54.931168] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:20.581 [2024-04-25 03:28:54.931198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.581 qpair failed and we were unable to recover it. 
00:28:20.581 [2024-04-25 03:28:54.940875] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.581 [2024-04-25 03:28:54.941047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.581 [2024-04-25 03:28:54.941073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.581 [2024-04-25 03:28:54.941088] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.581 [2024-04-25 03:28:54.941101] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:20.581 [2024-04-25 03:28:54.941129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.581 qpair failed and we were unable to recover it. 
00:28:20.581 [2024-04-25 03:28:54.951009] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.581 [2024-04-25 03:28:54.951196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.581 [2024-04-25 03:28:54.951222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.581 [2024-04-25 03:28:54.951237] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.581 [2024-04-25 03:28:54.951249] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:20.581 [2024-04-25 03:28:54.951277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.581 qpair failed and we were unable to recover it. 
00:28:20.581 [2024-04-25 03:28:54.960964] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.581 [2024-04-25 03:28:54.961140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.581 [2024-04-25 03:28:54.961166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.581 [2024-04-25 03:28:54.961181] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.581 [2024-04-25 03:28:54.961194] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:20.581 [2024-04-25 03:28:54.961225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.581 qpair failed and we were unable to recover it. 
00:28:20.581 [2024-04-25 03:28:54.970991] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.581 [2024-04-25 03:28:54.971165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.581 [2024-04-25 03:28:54.971191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.581 [2024-04-25 03:28:54.971207] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.581 [2024-04-25 03:28:54.971220] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:20.581 [2024-04-25 03:28:54.971248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.581 qpair failed and we were unable to recover it. 
00:28:20.581 [2024-04-25 03:28:54.981024] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.581 [2024-04-25 03:28:54.981234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.581 [2024-04-25 03:28:54.981262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.581 [2024-04-25 03:28:54.981277] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.581 [2024-04-25 03:28:54.981290] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:20.581 [2024-04-25 03:28:54.981319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.581 qpair failed and we were unable to recover it. 
00:28:20.581 [2024-04-25 03:28:54.991051] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.581 [2024-04-25 03:28:54.991255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.581 [2024-04-25 03:28:54.991281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.581 [2024-04-25 03:28:54.991296] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.581 [2024-04-25 03:28:54.991308] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:20.581 [2024-04-25 03:28:54.991337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.582 qpair failed and we were unable to recover it. 
00:28:20.582 [2024-04-25 03:28:55.001077] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.582 [2024-04-25 03:28:55.001305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.582 [2024-04-25 03:28:55.001338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.582 [2024-04-25 03:28:55.001354] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.582 [2024-04-25 03:28:55.001367] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:20.582 [2024-04-25 03:28:55.001410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.582 qpair failed and we were unable to recover it. 
00:28:20.582 [2024-04-25 03:28:55.011071] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.582 [2024-04-25 03:28:55.011297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.582 [2024-04-25 03:28:55.011324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.582 [2024-04-25 03:28:55.011340] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.582 [2024-04-25 03:28:55.011352] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:20.582 [2024-04-25 03:28:55.011380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.582 qpair failed and we were unable to recover it. 
00:28:20.582 [2024-04-25 03:28:55.021214] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.582 [2024-04-25 03:28:55.021388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.582 [2024-04-25 03:28:55.021414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.582 [2024-04-25 03:28:55.021428] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.582 [2024-04-25 03:28:55.021441] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:20.582 [2024-04-25 03:28:55.021470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.582 qpair failed and we were unable to recover it. 
00:28:20.582 [2024-04-25 03:28:55.031139] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.582 [2024-04-25 03:28:55.031359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.582 [2024-04-25 03:28:55.031385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.582 [2024-04-25 03:28:55.031401] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.582 [2024-04-25 03:28:55.031413] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:20.582 [2024-04-25 03:28:55.031441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.582 qpair failed and we were unable to recover it. 
00:28:20.582 [2024-04-25 03:28:55.041185] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.582 [2024-04-25 03:28:55.041366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.582 [2024-04-25 03:28:55.041393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.582 [2024-04-25 03:28:55.041424] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.582 [2024-04-25 03:28:55.041437] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:20.582 [2024-04-25 03:28:55.041485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.582 qpair failed and we were unable to recover it. 
00:28:20.582 [2024-04-25 03:28:55.051195] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.582 [2024-04-25 03:28:55.051415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.582 [2024-04-25 03:28:55.051442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.582 [2024-04-25 03:28:55.051457] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.582 [2024-04-25 03:28:55.051470] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:20.582 [2024-04-25 03:28:55.051498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.582 qpair failed and we were unable to recover it.
00:28:20.582 [2024-04-25 03:28:55.061234] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.582 [2024-04-25 03:28:55.061406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.582 [2024-04-25 03:28:55.061432] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.582 [2024-04-25 03:28:55.061447] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.582 [2024-04-25 03:28:55.061460] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:20.582 [2024-04-25 03:28:55.061488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.582 qpair failed and we were unable to recover it.
00:28:20.582 [2024-04-25 03:28:55.071308] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.582 [2024-04-25 03:28:55.071486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.582 [2024-04-25 03:28:55.071512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.582 [2024-04-25 03:28:55.071527] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.582 [2024-04-25 03:28:55.071540] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:20.582 [2024-04-25 03:28:55.071567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.582 qpair failed and we were unable to recover it.
00:28:20.841 [2024-04-25 03:28:55.081276] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.841 [2024-04-25 03:28:55.081459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.841 [2024-04-25 03:28:55.081485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.841 [2024-04-25 03:28:55.081500] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.841 [2024-04-25 03:28:55.081513] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:20.841 [2024-04-25 03:28:55.081541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.841 qpair failed and we were unable to recover it.
00:28:20.841 [2024-04-25 03:28:55.091326] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.841 [2024-04-25 03:28:55.091522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.841 [2024-04-25 03:28:55.091553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.842 [2024-04-25 03:28:55.091569] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.842 [2024-04-25 03:28:55.091581] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:20.842 [2024-04-25 03:28:55.091610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.842 qpair failed and we were unable to recover it.
00:28:20.842 [2024-04-25 03:28:55.101355] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.842 [2024-04-25 03:28:55.101572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.842 [2024-04-25 03:28:55.101597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.842 [2024-04-25 03:28:55.101612] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.842 [2024-04-25 03:28:55.101625] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:20.842 [2024-04-25 03:28:55.101663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.842 qpair failed and we were unable to recover it.
00:28:20.842 [2024-04-25 03:28:55.111365] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.842 [2024-04-25 03:28:55.111548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.842 [2024-04-25 03:28:55.111573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.842 [2024-04-25 03:28:55.111588] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.842 [2024-04-25 03:28:55.111600] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:20.842 [2024-04-25 03:28:55.111634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.842 qpair failed and we were unable to recover it.
00:28:20.842 [2024-04-25 03:28:55.121387] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.842 [2024-04-25 03:28:55.121572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.842 [2024-04-25 03:28:55.121597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.842 [2024-04-25 03:28:55.121612] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.842 [2024-04-25 03:28:55.121625] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:20.842 [2024-04-25 03:28:55.121663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.842 qpair failed and we were unable to recover it.
00:28:20.842 [2024-04-25 03:28:55.131465] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.842 [2024-04-25 03:28:55.131643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.842 [2024-04-25 03:28:55.131670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.842 [2024-04-25 03:28:55.131685] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.842 [2024-04-25 03:28:55.131698] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:20.842 [2024-04-25 03:28:55.131732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.842 qpair failed and we were unable to recover it.
00:28:20.842 [2024-04-25 03:28:55.141466] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.842 [2024-04-25 03:28:55.141659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.842 [2024-04-25 03:28:55.141685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.842 [2024-04-25 03:28:55.141700] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.842 [2024-04-25 03:28:55.141712] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:20.842 [2024-04-25 03:28:55.141741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.842 qpair failed and we were unable to recover it.
00:28:20.842 [2024-04-25 03:28:55.151484] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.842 [2024-04-25 03:28:55.151668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.842 [2024-04-25 03:28:55.151693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.842 [2024-04-25 03:28:55.151708] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.842 [2024-04-25 03:28:55.151721] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:20.842 [2024-04-25 03:28:55.151749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.842 qpair failed and we were unable to recover it.
00:28:20.842 [2024-04-25 03:28:55.161566] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.842 [2024-04-25 03:28:55.161756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.842 [2024-04-25 03:28:55.161784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.842 [2024-04-25 03:28:55.161802] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.842 [2024-04-25 03:28:55.161816] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:20.842 [2024-04-25 03:28:55.161845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.842 qpair failed and we were unable to recover it.
00:28:20.842 [2024-04-25 03:28:55.171539] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.842 [2024-04-25 03:28:55.171721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.842 [2024-04-25 03:28:55.171748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.842 [2024-04-25 03:28:55.171763] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.842 [2024-04-25 03:28:55.171775] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:20.842 [2024-04-25 03:28:55.171804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.842 qpair failed and we were unable to recover it.
00:28:20.842 [2024-04-25 03:28:55.181554] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.842 [2024-04-25 03:28:55.181724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.842 [2024-04-25 03:28:55.181755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.842 [2024-04-25 03:28:55.181771] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.842 [2024-04-25 03:28:55.181783] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:20.842 [2024-04-25 03:28:55.181812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.842 qpair failed and we were unable to recover it.
00:28:20.842 [2024-04-25 03:28:55.191593] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.842 [2024-04-25 03:28:55.191830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.842 [2024-04-25 03:28:55.191856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.842 [2024-04-25 03:28:55.191871] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.842 [2024-04-25 03:28:55.191884] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:20.842 [2024-04-25 03:28:55.191912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.842 qpair failed and we were unable to recover it.
00:28:20.842 [2024-04-25 03:28:55.201641] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.842 [2024-04-25 03:28:55.201825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.842 [2024-04-25 03:28:55.201852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.842 [2024-04-25 03:28:55.201867] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.842 [2024-04-25 03:28:55.201879] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:20.842 [2024-04-25 03:28:55.201908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.842 qpair failed and we were unable to recover it.
00:28:20.842 [2024-04-25 03:28:55.211688] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.842 [2024-04-25 03:28:55.211869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.842 [2024-04-25 03:28:55.211895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.842 [2024-04-25 03:28:55.211910] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.842 [2024-04-25 03:28:55.211923] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:20.842 [2024-04-25 03:28:55.211952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.842 qpair failed and we were unable to recover it.
00:28:20.842 [2024-04-25 03:28:55.221681] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.842 [2024-04-25 03:28:55.221861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.842 [2024-04-25 03:28:55.221888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.842 [2024-04-25 03:28:55.221903] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.842 [2024-04-25 03:28:55.221916] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:20.842 [2024-04-25 03:28:55.221950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.842 qpair failed and we were unable to recover it.
00:28:20.843 [2024-04-25 03:28:55.231731] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.843 [2024-04-25 03:28:55.231914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.843 [2024-04-25 03:28:55.231949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.843 [2024-04-25 03:28:55.231964] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.843 [2024-04-25 03:28:55.231976] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:20.843 [2024-04-25 03:28:55.232005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.843 qpair failed and we were unable to recover it.
00:28:20.843 [2024-04-25 03:28:55.241770] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.843 [2024-04-25 03:28:55.241955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.843 [2024-04-25 03:28:55.241990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.843 [2024-04-25 03:28:55.242006] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.843 [2024-04-25 03:28:55.242018] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:20.843 [2024-04-25 03:28:55.242047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.843 qpair failed and we were unable to recover it.
00:28:20.843 [2024-04-25 03:28:55.251765] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.843 [2024-04-25 03:28:55.251943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.843 [2024-04-25 03:28:55.251968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.843 [2024-04-25 03:28:55.251983] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.843 [2024-04-25 03:28:55.251996] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:20.843 [2024-04-25 03:28:55.252024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.843 qpair failed and we were unable to recover it.
00:28:20.843 [2024-04-25 03:28:55.261806] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.843 [2024-04-25 03:28:55.262013] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.843 [2024-04-25 03:28:55.262038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.843 [2024-04-25 03:28:55.262052] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.843 [2024-04-25 03:28:55.262065] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:20.843 [2024-04-25 03:28:55.262093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.843 qpair failed and we were unable to recover it.
00:28:20.843 [2024-04-25 03:28:55.271850] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.843 [2024-04-25 03:28:55.272041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.843 [2024-04-25 03:28:55.272074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.843 [2024-04-25 03:28:55.272090] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.843 [2024-04-25 03:28:55.272103] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:20.843 [2024-04-25 03:28:55.272131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.843 qpair failed and we were unable to recover it.
00:28:20.843 [2024-04-25 03:28:55.281858] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.843 [2024-04-25 03:28:55.282073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.843 [2024-04-25 03:28:55.282099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.843 [2024-04-25 03:28:55.282114] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.843 [2024-04-25 03:28:55.282127] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:20.843 [2024-04-25 03:28:55.282156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.843 qpair failed and we were unable to recover it.
00:28:20.843 [2024-04-25 03:28:55.291871] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.843 [2024-04-25 03:28:55.292048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.843 [2024-04-25 03:28:55.292073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.843 [2024-04-25 03:28:55.292088] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.843 [2024-04-25 03:28:55.292100] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:20.843 [2024-04-25 03:28:55.292129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.843 qpair failed and we were unable to recover it.
00:28:20.843 [2024-04-25 03:28:55.301921] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.843 [2024-04-25 03:28:55.302099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.843 [2024-04-25 03:28:55.302124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.843 [2024-04-25 03:28:55.302139] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.843 [2024-04-25 03:28:55.302152] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:20.843 [2024-04-25 03:28:55.302179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.843 qpair failed and we were unable to recover it.
00:28:20.843 [2024-04-25 03:28:55.311962] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.843 [2024-04-25 03:28:55.312148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.843 [2024-04-25 03:28:55.312174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.843 [2024-04-25 03:28:55.312190] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.843 [2024-04-25 03:28:55.312207] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:20.843 [2024-04-25 03:28:55.312236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.843 qpair failed and we were unable to recover it.
00:28:20.843 [2024-04-25 03:28:55.321967] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.843 [2024-04-25 03:28:55.322148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.843 [2024-04-25 03:28:55.322175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.843 [2024-04-25 03:28:55.322190] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.843 [2024-04-25 03:28:55.322202] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:20.843 [2024-04-25 03:28:55.322230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.843 qpair failed and we were unable to recover it.
00:28:20.843 [2024-04-25 03:28:55.332010] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.843 [2024-04-25 03:28:55.332190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.843 [2024-04-25 03:28:55.332216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.843 [2024-04-25 03:28:55.332232] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.843 [2024-04-25 03:28:55.332244] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:20.843 [2024-04-25 03:28:55.332272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.843 qpair failed and we were unable to recover it.
00:28:21.102 [2024-04-25 03:28:55.342015] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.102 [2024-04-25 03:28:55.342192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.102 [2024-04-25 03:28:55.342218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.102 [2024-04-25 03:28:55.342233] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.102 [2024-04-25 03:28:55.342245] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:21.102 [2024-04-25 03:28:55.342273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.102 qpair failed and we were unable to recover it.
00:28:21.102 [2024-04-25 03:28:55.352071] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.102 [2024-04-25 03:28:55.352297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.102 [2024-04-25 03:28:55.352323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.102 [2024-04-25 03:28:55.352338] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.102 [2024-04-25 03:28:55.352351] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:21.102 [2024-04-25 03:28:55.352379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.102 qpair failed and we were unable to recover it.
00:28:21.102 [2024-04-25 03:28:55.362068] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.102 [2024-04-25 03:28:55.362299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.102 [2024-04-25 03:28:55.362325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.102 [2024-04-25 03:28:55.362340] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.102 [2024-04-25 03:28:55.362353] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:21.102 [2024-04-25 03:28:55.362381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.102 qpair failed and we were unable to recover it.
00:28:21.102 [2024-04-25 03:28:55.372154] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.102 [2024-04-25 03:28:55.372334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.102 [2024-04-25 03:28:55.372361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.102 [2024-04-25 03:28:55.372376] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.102 [2024-04-25 03:28:55.372389] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:21.102 [2024-04-25 03:28:55.372417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.102 qpair failed and we were unable to recover it.
00:28:21.102 [2024-04-25 03:28:55.382170] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.102 [2024-04-25 03:28:55.382362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.102 [2024-04-25 03:28:55.382386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.103 [2024-04-25 03:28:55.382401] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.103 [2024-04-25 03:28:55.382414] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:21.103 [2024-04-25 03:28:55.382441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.103 qpair failed and we were unable to recover it.
00:28:21.103 [2024-04-25 03:28:55.392217] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.103 [2024-04-25 03:28:55.392402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.103 [2024-04-25 03:28:55.392428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.103 [2024-04-25 03:28:55.392443] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.103 [2024-04-25 03:28:55.392455] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:21.103 [2024-04-25 03:28:55.392484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.103 qpair failed and we were unable to recover it.
00:28:21.103 [2024-04-25 03:28:55.402248] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.103 [2024-04-25 03:28:55.402426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.103 [2024-04-25 03:28:55.402452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.103 [2024-04-25 03:28:55.402467] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.103 [2024-04-25 03:28:55.402485] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:21.103 [2024-04-25 03:28:55.402514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.103 qpair failed and we were unable to recover it.
00:28:21.103 [2024-04-25 03:28:55.412250] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.103 [2024-04-25 03:28:55.412424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.103 [2024-04-25 03:28:55.412449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.103 [2024-04-25 03:28:55.412463] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.103 [2024-04-25 03:28:55.412476] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:21.103 [2024-04-25 03:28:55.412503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.103 qpair failed and we were unable to recover it. 
00:28:21.103 [2024-04-25 03:28:55.422279] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.103 [2024-04-25 03:28:55.422480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.103 [2024-04-25 03:28:55.422504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.103 [2024-04-25 03:28:55.422519] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.103 [2024-04-25 03:28:55.422533] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:21.103 [2024-04-25 03:28:55.422561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.103 qpair failed and we were unable to recover it. 
00:28:21.103 [2024-04-25 03:28:55.432283] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.103 [2024-04-25 03:28:55.432453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.103 [2024-04-25 03:28:55.432477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.103 [2024-04-25 03:28:55.432492] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.103 [2024-04-25 03:28:55.432504] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:21.103 [2024-04-25 03:28:55.432532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.103 qpair failed and we were unable to recover it. 
00:28:21.103 [2024-04-25 03:28:55.442406] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.103 [2024-04-25 03:28:55.442580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.103 [2024-04-25 03:28:55.442604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.103 [2024-04-25 03:28:55.442620] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.103 [2024-04-25 03:28:55.442640] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:21.103 [2024-04-25 03:28:55.442670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.103 qpair failed and we were unable to recover it. 
00:28:21.103 [2024-04-25 03:28:55.452359] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.103 [2024-04-25 03:28:55.452538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.103 [2024-04-25 03:28:55.452562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.103 [2024-04-25 03:28:55.452577] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.103 [2024-04-25 03:28:55.452590] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:21.103 [2024-04-25 03:28:55.452618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.103 qpair failed and we were unable to recover it. 
00:28:21.103 [2024-04-25 03:28:55.462369] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.103 [2024-04-25 03:28:55.462546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.103 [2024-04-25 03:28:55.462570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.103 [2024-04-25 03:28:55.462585] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.103 [2024-04-25 03:28:55.462598] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:21.103 [2024-04-25 03:28:55.462635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.103 qpair failed and we were unable to recover it. 
00:28:21.103 [2024-04-25 03:28:55.472400] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.103 [2024-04-25 03:28:55.472571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.103 [2024-04-25 03:28:55.472595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.103 [2024-04-25 03:28:55.472610] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.103 [2024-04-25 03:28:55.472622] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:21.103 [2024-04-25 03:28:55.472658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.103 qpair failed and we were unable to recover it. 
00:28:21.103 [2024-04-25 03:28:55.482453] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.103 [2024-04-25 03:28:55.482633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.103 [2024-04-25 03:28:55.482658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.103 [2024-04-25 03:28:55.482673] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.103 [2024-04-25 03:28:55.482686] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:21.103 [2024-04-25 03:28:55.482715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.103 qpair failed and we were unable to recover it. 
00:28:21.103 [2024-04-25 03:28:55.492481] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.103 [2024-04-25 03:28:55.492659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.103 [2024-04-25 03:28:55.492684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.103 [2024-04-25 03:28:55.492698] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.104 [2024-04-25 03:28:55.492716] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:21.104 [2024-04-25 03:28:55.492746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.104 qpair failed and we were unable to recover it.
00:28:21.104 [2024-04-25 03:28:55.502473] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.104 [2024-04-25 03:28:55.502656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.104 [2024-04-25 03:28:55.502680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.104 [2024-04-25 03:28:55.502695] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.104 [2024-04-25 03:28:55.502708] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:21.104 [2024-04-25 03:28:55.502736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.104 qpair failed and we were unable to recover it.
00:28:21.104 [2024-04-25 03:28:55.512568] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.104 [2024-04-25 03:28:55.512746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.104 [2024-04-25 03:28:55.512771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.104 [2024-04-25 03:28:55.512786] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.104 [2024-04-25 03:28:55.512798] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:21.104 [2024-04-25 03:28:55.512827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.104 qpair failed and we were unable to recover it.
00:28:21.104 [2024-04-25 03:28:55.522546] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.104 [2024-04-25 03:28:55.522729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.104 [2024-04-25 03:28:55.522755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.104 [2024-04-25 03:28:55.522770] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.104 [2024-04-25 03:28:55.522783] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:21.104 [2024-04-25 03:28:55.522811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.104 qpair failed and we were unable to recover it.
00:28:21.104 [2024-04-25 03:28:55.532594] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.104 [2024-04-25 03:28:55.532791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.104 [2024-04-25 03:28:55.532816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.104 [2024-04-25 03:28:55.532830] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.104 [2024-04-25 03:28:55.532844] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:21.104 [2024-04-25 03:28:55.532872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.104 qpair failed and we were unable to recover it.
00:28:21.104 [2024-04-25 03:28:55.542586] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.104 [2024-04-25 03:28:55.542766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.104 [2024-04-25 03:28:55.542792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.104 [2024-04-25 03:28:55.542807] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.104 [2024-04-25 03:28:55.542819] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:21.104 [2024-04-25 03:28:55.542848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.104 qpair failed and we were unable to recover it.
00:28:21.104 [2024-04-25 03:28:55.552689] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.104 [2024-04-25 03:28:55.552893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.104 [2024-04-25 03:28:55.552917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.104 [2024-04-25 03:28:55.552932] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.104 [2024-04-25 03:28:55.552946] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:21.104 [2024-04-25 03:28:55.552974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.104 qpair failed and we were unable to recover it.
00:28:21.104 [2024-04-25 03:28:55.562658] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.104 [2024-04-25 03:28:55.562869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.104 [2024-04-25 03:28:55.562896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.104 [2024-04-25 03:28:55.562911] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.104 [2024-04-25 03:28:55.562924] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:21.104 [2024-04-25 03:28:55.562968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.104 qpair failed and we were unable to recover it.
00:28:21.104 [2024-04-25 03:28:55.572709] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.104 [2024-04-25 03:28:55.572955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.104 [2024-04-25 03:28:55.572982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.104 [2024-04-25 03:28:55.572997] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.104 [2024-04-25 03:28:55.573011] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:21.104 [2024-04-25 03:28:55.573054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.104 qpair failed and we were unable to recover it.
00:28:21.104 [2024-04-25 03:28:55.582733] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.104 [2024-04-25 03:28:55.582965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.104 [2024-04-25 03:28:55.582991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.104 [2024-04-25 03:28:55.583012] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.104 [2024-04-25 03:28:55.583027] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:21.104 [2024-04-25 03:28:55.583055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.104 qpair failed and we were unable to recover it.
00:28:21.104 [2024-04-25 03:28:55.592782] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.104 [2024-04-25 03:28:55.592996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.104 [2024-04-25 03:28:55.593021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.104 [2024-04-25 03:28:55.593041] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.104 [2024-04-25 03:28:55.593054] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:21.104 [2024-04-25 03:28:55.593083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.104 qpair failed and we were unable to recover it.
00:28:21.364 [2024-04-25 03:28:55.602771] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.364 [2024-04-25 03:28:55.602942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.364 [2024-04-25 03:28:55.602967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.364 [2024-04-25 03:28:55.602982] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.364 [2024-04-25 03:28:55.602995] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:21.364 [2024-04-25 03:28:55.603024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.364 qpair failed and we were unable to recover it.
00:28:21.364 [2024-04-25 03:28:55.612823] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.364 [2024-04-25 03:28:55.613015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.364 [2024-04-25 03:28:55.613040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.364 [2024-04-25 03:28:55.613055] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.364 [2024-04-25 03:28:55.613069] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:21.364 [2024-04-25 03:28:55.613097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.364 qpair failed and we were unable to recover it.
00:28:21.364 [2024-04-25 03:28:55.622830] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.364 [2024-04-25 03:28:55.622998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.364 [2024-04-25 03:28:55.623022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.364 [2024-04-25 03:28:55.623037] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.364 [2024-04-25 03:28:55.623049] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:21.364 [2024-04-25 03:28:55.623078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.364 qpair failed and we were unable to recover it.
00:28:21.364 [2024-04-25 03:28:55.632866] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.364 [2024-04-25 03:28:55.633088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.364 [2024-04-25 03:28:55.633115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.364 [2024-04-25 03:28:55.633131] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.364 [2024-04-25 03:28:55.633145] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:21.364 [2024-04-25 03:28:55.633173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.364 qpair failed and we were unable to recover it.
00:28:21.364 [2024-04-25 03:28:55.642933] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.364 [2024-04-25 03:28:55.643109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.364 [2024-04-25 03:28:55.643134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.364 [2024-04-25 03:28:55.643164] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.364 [2024-04-25 03:28:55.643178] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:21.364 [2024-04-25 03:28:55.643207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.364 qpair failed and we were unable to recover it.
00:28:21.364 [2024-04-25 03:28:55.652975] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.364 [2024-04-25 03:28:55.653146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.364 [2024-04-25 03:28:55.653172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.364 [2024-04-25 03:28:55.653186] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.364 [2024-04-25 03:28:55.653199] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:21.364 [2024-04-25 03:28:55.653227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.364 qpair failed and we were unable to recover it.
00:28:21.364 [2024-04-25 03:28:55.663017] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.364 [2024-04-25 03:28:55.663210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.364 [2024-04-25 03:28:55.663234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.364 [2024-04-25 03:28:55.663249] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.364 [2024-04-25 03:28:55.663263] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:21.364 [2024-04-25 03:28:55.663291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.364 qpair failed and we were unable to recover it.
00:28:21.364 [2024-04-25 03:28:55.673030] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.364 [2024-04-25 03:28:55.673211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.364 [2024-04-25 03:28:55.673238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.364 [2024-04-25 03:28:55.673262] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.364 [2024-04-25 03:28:55.673276] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:21.364 [2024-04-25 03:28:55.673305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.364 qpair failed and we were unable to recover it.
00:28:21.364 [2024-04-25 03:28:55.683030] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.364 [2024-04-25 03:28:55.683233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.364 [2024-04-25 03:28:55.683258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.364 [2024-04-25 03:28:55.683273] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.364 [2024-04-25 03:28:55.683287] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:21.364 [2024-04-25 03:28:55.683316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.364 qpair failed and we were unable to recover it.
00:28:21.364 [2024-04-25 03:28:55.693073] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.364 [2024-04-25 03:28:55.693282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.365 [2024-04-25 03:28:55.693307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.365 [2024-04-25 03:28:55.693322] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.365 [2024-04-25 03:28:55.693335] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:21.365 [2024-04-25 03:28:55.693364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.365 qpair failed and we were unable to recover it.
00:28:21.365 [2024-04-25 03:28:55.703056] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.365 [2024-04-25 03:28:55.703231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.365 [2024-04-25 03:28:55.703255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.365 [2024-04-25 03:28:55.703270] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.365 [2024-04-25 03:28:55.703283] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:21.365 [2024-04-25 03:28:55.703311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.365 qpair failed and we were unable to recover it.
00:28:21.365 [2024-04-25 03:28:55.713108] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.365 [2024-04-25 03:28:55.713284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.365 [2024-04-25 03:28:55.713309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.365 [2024-04-25 03:28:55.713325] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.365 [2024-04-25 03:28:55.713338] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:21.365 [2024-04-25 03:28:55.713366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.365 qpair failed and we were unable to recover it.
00:28:21.365 [2024-04-25 03:28:55.723113] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.365 [2024-04-25 03:28:55.723330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.365 [2024-04-25 03:28:55.723356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.365 [2024-04-25 03:28:55.723371] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.365 [2024-04-25 03:28:55.723384] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:21.365 [2024-04-25 03:28:55.723413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.365 qpair failed and we were unable to recover it.
00:28:21.365 [2024-04-25 03:28:55.733184] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.365 [2024-04-25 03:28:55.733400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.365 [2024-04-25 03:28:55.733427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.365 [2024-04-25 03:28:55.733442] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.365 [2024-04-25 03:28:55.733456] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:21.365 [2024-04-25 03:28:55.733485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.365 qpair failed and we were unable to recover it. 
00:28:21.365 [2024-04-25 03:28:55.743165] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.365 [2024-04-25 03:28:55.743339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.365 [2024-04-25 03:28:55.743366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.365 [2024-04-25 03:28:55.743381] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.365 [2024-04-25 03:28:55.743394] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:21.365 [2024-04-25 03:28:55.743422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.365 qpair failed and we were unable to recover it. 
00:28:21.365 [2024-04-25 03:28:55.753276] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.365 [2024-04-25 03:28:55.753489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.365 [2024-04-25 03:28:55.753515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.365 [2024-04-25 03:28:55.753530] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.365 [2024-04-25 03:28:55.753542] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:21.365 [2024-04-25 03:28:55.753570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.365 qpair failed and we were unable to recover it. 
00:28:21.365 [2024-04-25 03:28:55.763261] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.365 [2024-04-25 03:28:55.763470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.365 [2024-04-25 03:28:55.763496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.365 [2024-04-25 03:28:55.763516] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.365 [2024-04-25 03:28:55.763530] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:21.365 [2024-04-25 03:28:55.763558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.365 qpair failed and we were unable to recover it. 
00:28:21.365 [2024-04-25 03:28:55.773283] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.365 [2024-04-25 03:28:55.773461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.365 [2024-04-25 03:28:55.773486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.365 [2024-04-25 03:28:55.773501] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.365 [2024-04-25 03:28:55.773514] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:21.365 [2024-04-25 03:28:55.773542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.365 qpair failed and we were unable to recover it. 
00:28:21.365 [2024-04-25 03:28:55.783344] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.365 [2024-04-25 03:28:55.783535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.365 [2024-04-25 03:28:55.783562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.365 [2024-04-25 03:28:55.783577] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.365 [2024-04-25 03:28:55.783590] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:21.365 [2024-04-25 03:28:55.783618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.365 qpair failed and we were unable to recover it. 
00:28:21.365 [2024-04-25 03:28:55.793310] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.365 [2024-04-25 03:28:55.793484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.365 [2024-04-25 03:28:55.793510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.365 [2024-04-25 03:28:55.793525] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.365 [2024-04-25 03:28:55.793538] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:21.365 [2024-04-25 03:28:55.793566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.365 qpair failed and we were unable to recover it. 
00:28:21.365 [2024-04-25 03:28:55.803358] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.365 [2024-04-25 03:28:55.803554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.365 [2024-04-25 03:28:55.803580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.365 [2024-04-25 03:28:55.803595] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.365 [2024-04-25 03:28:55.803608] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:21.365 [2024-04-25 03:28:55.803645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.365 qpair failed and we were unable to recover it. 
00:28:21.365 [2024-04-25 03:28:55.813351] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.365 [2024-04-25 03:28:55.813519] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.365 [2024-04-25 03:28:55.813546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.365 [2024-04-25 03:28:55.813561] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.365 [2024-04-25 03:28:55.813574] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:21.365 [2024-04-25 03:28:55.813602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.365 qpair failed and we were unable to recover it. 
00:28:21.365 [2024-04-25 03:28:55.823402] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.365 [2024-04-25 03:28:55.823655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.365 [2024-04-25 03:28:55.823682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.365 [2024-04-25 03:28:55.823698] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.366 [2024-04-25 03:28:55.823711] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:21.366 [2024-04-25 03:28:55.823741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.366 qpair failed and we were unable to recover it. 
00:28:21.366 [2024-04-25 03:28:55.833491] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.366 [2024-04-25 03:28:55.833686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.366 [2024-04-25 03:28:55.833713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.366 [2024-04-25 03:28:55.833738] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.366 [2024-04-25 03:28:55.833751] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:21.366 [2024-04-25 03:28:55.833779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.366 qpair failed and we were unable to recover it. 
00:28:21.366 [2024-04-25 03:28:55.843475] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.366 [2024-04-25 03:28:55.843671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.366 [2024-04-25 03:28:55.843700] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.366 [2024-04-25 03:28:55.843715] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.366 [2024-04-25 03:28:55.843728] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:21.366 [2024-04-25 03:28:55.843758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.366 qpair failed and we were unable to recover it. 
00:28:21.366 [2024-04-25 03:28:55.853479] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.366 [2024-04-25 03:28:55.853651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.366 [2024-04-25 03:28:55.853682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.366 [2024-04-25 03:28:55.853699] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.366 [2024-04-25 03:28:55.853712] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:21.366 [2024-04-25 03:28:55.853740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.366 qpair failed and we were unable to recover it. 
00:28:21.625 [2024-04-25 03:28:55.863501] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.625 [2024-04-25 03:28:55.863673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.625 [2024-04-25 03:28:55.863697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.625 [2024-04-25 03:28:55.863712] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.625 [2024-04-25 03:28:55.863724] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:21.625 [2024-04-25 03:28:55.863753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.625 qpair failed and we were unable to recover it. 
00:28:21.625 [2024-04-25 03:28:55.873553] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.625 [2024-04-25 03:28:55.873738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.625 [2024-04-25 03:28:55.873763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.625 [2024-04-25 03:28:55.873781] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.625 [2024-04-25 03:28:55.873794] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:21.625 [2024-04-25 03:28:55.873823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.625 qpair failed and we were unable to recover it. 
00:28:21.625 [2024-04-25 03:28:55.883573] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.625 [2024-04-25 03:28:55.883750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.625 [2024-04-25 03:28:55.883777] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.625 [2024-04-25 03:28:55.883792] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.625 [2024-04-25 03:28:55.883804] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:21.625 [2024-04-25 03:28:55.883833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.625 qpair failed and we were unable to recover it. 
00:28:21.625 [2024-04-25 03:28:55.893617] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.625 [2024-04-25 03:28:55.893809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.625 [2024-04-25 03:28:55.893834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.625 [2024-04-25 03:28:55.893849] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.625 [2024-04-25 03:28:55.893862] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:21.625 [2024-04-25 03:28:55.893891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.625 qpair failed and we were unable to recover it. 
00:28:21.625 [2024-04-25 03:28:55.903619] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.625 [2024-04-25 03:28:55.903809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.625 [2024-04-25 03:28:55.903833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.625 [2024-04-25 03:28:55.903847] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.625 [2024-04-25 03:28:55.903860] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:21.625 [2024-04-25 03:28:55.903888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.625 qpair failed and we were unable to recover it. 
00:28:21.625 [2024-04-25 03:28:55.913663] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.625 [2024-04-25 03:28:55.913849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.625 [2024-04-25 03:28:55.913874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.625 [2024-04-25 03:28:55.913890] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.625 [2024-04-25 03:28:55.913903] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:21.625 [2024-04-25 03:28:55.913931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.625 qpair failed and we were unable to recover it. 
00:28:21.625 [2024-04-25 03:28:55.923685] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.625 [2024-04-25 03:28:55.923890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.625 [2024-04-25 03:28:55.923917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.625 [2024-04-25 03:28:55.923946] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.625 [2024-04-25 03:28:55.923959] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:21.625 [2024-04-25 03:28:55.923987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.625 qpair failed and we were unable to recover it. 
00:28:21.625 [2024-04-25 03:28:55.933747] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.625 [2024-04-25 03:28:55.933914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.625 [2024-04-25 03:28:55.933941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.625 [2024-04-25 03:28:55.933957] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.625 [2024-04-25 03:28:55.933971] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:21.625 [2024-04-25 03:28:55.933999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.625 qpair failed and we were unable to recover it. 
00:28:21.625 [2024-04-25 03:28:55.943761] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.625 [2024-04-25 03:28:55.943931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.625 [2024-04-25 03:28:55.943963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.625 [2024-04-25 03:28:55.943980] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.625 [2024-04-25 03:28:55.943993] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:21.625 [2024-04-25 03:28:55.944022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.625 qpair failed and we were unable to recover it. 
00:28:21.625 [2024-04-25 03:28:55.953763] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.625 [2024-04-25 03:28:55.953945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.625 [2024-04-25 03:28:55.953970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.625 [2024-04-25 03:28:55.953985] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.625 [2024-04-25 03:28:55.953998] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:21.625 [2024-04-25 03:28:55.954026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.625 qpair failed and we were unable to recover it. 
00:28:21.625 [2024-04-25 03:28:55.963780] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.625 [2024-04-25 03:28:55.963956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.625 [2024-04-25 03:28:55.963981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.625 [2024-04-25 03:28:55.963997] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.625 [2024-04-25 03:28:55.964009] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:21.625 [2024-04-25 03:28:55.964038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.625 qpair failed and we were unable to recover it. 
00:28:21.625 [2024-04-25 03:28:55.973869] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.626 [2024-04-25 03:28:55.974070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.626 [2024-04-25 03:28:55.974095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.626 [2024-04-25 03:28:55.974110] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.626 [2024-04-25 03:28:55.974123] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:21.626 [2024-04-25 03:28:55.974152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.626 qpair failed and we were unable to recover it. 
00:28:21.626 [2024-04-25 03:28:55.983855] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.626 [2024-04-25 03:28:55.984026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.626 [2024-04-25 03:28:55.984050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.626 [2024-04-25 03:28:55.984064] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.626 [2024-04-25 03:28:55.984078] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:21.626 [2024-04-25 03:28:55.984112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.626 qpair failed and we were unable to recover it. 
00:28:21.626 [2024-04-25 03:28:55.993922] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.626 [2024-04-25 03:28:55.994118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.626 [2024-04-25 03:28:55.994142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.626 [2024-04-25 03:28:55.994156] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.626 [2024-04-25 03:28:55.994170] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:21.626 [2024-04-25 03:28:55.994199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.626 qpair failed and we were unable to recover it. 
00:28:21.626 [2024-04-25 03:28:56.003928] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.626 [2024-04-25 03:28:56.004110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.626 [2024-04-25 03:28:56.004135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.626 [2024-04-25 03:28:56.004150] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.626 [2024-04-25 03:28:56.004163] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:21.626 [2024-04-25 03:28:56.004192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.626 qpair failed and we were unable to recover it. 
00:28:21.626 [2024-04-25 03:28:56.013996] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.626 [2024-04-25 03:28:56.014166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.626 [2024-04-25 03:28:56.014191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.626 [2024-04-25 03:28:56.014205] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.626 [2024-04-25 03:28:56.014218] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:21.626 [2024-04-25 03:28:56.014246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.626 qpair failed and we were unable to recover it. 
00:28:21.626 [2024-04-25 03:28:56.023986] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.626 [2024-04-25 03:28:56.024153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.626 [2024-04-25 03:28:56.024177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.626 [2024-04-25 03:28:56.024192] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.626 [2024-04-25 03:28:56.024205] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:21.626 [2024-04-25 03:28:56.024233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.626 qpair failed and we were unable to recover it. 
00:28:21.626 [2024-04-25 03:28:56.033981] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.626 [2024-04-25 03:28:56.034153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.626 [2024-04-25 03:28:56.034185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.626 [2024-04-25 03:28:56.034201] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.626 [2024-04-25 03:28:56.034214] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:21.626 [2024-04-25 03:28:56.034243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.626 qpair failed and we were unable to recover it. 
00:28:21.626 [2024-04-25 03:28:56.044037] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.626 [2024-04-25 03:28:56.044218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.626 [2024-04-25 03:28:56.044242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.626 [2024-04-25 03:28:56.044272] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.626 [2024-04-25 03:28:56.044286] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:21.626 [2024-04-25 03:28:56.044329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.626 qpair failed and we were unable to recover it. 
00:28:21.626 [2024-04-25 03:28:56.054076] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.626 [2024-04-25 03:28:56.054253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.626 [2024-04-25 03:28:56.054278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.626 [2024-04-25 03:28:56.054293] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.626 [2024-04-25 03:28:56.054306] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:21.626 [2024-04-25 03:28:56.054335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.626 qpair failed and we were unable to recover it. 
00:28:21.626 [2024-04-25 03:28:56.064093] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.626 [2024-04-25 03:28:56.064258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.626 [2024-04-25 03:28:56.064281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.626 [2024-04-25 03:28:56.064297] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.626 [2024-04-25 03:28:56.064310] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:21.626 [2024-04-25 03:28:56.064338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.626 qpair failed and we were unable to recover it. 
00:28:21.626 [2024-04-25 03:28:56.074142] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.626 [2024-04-25 03:28:56.074381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.626 [2024-04-25 03:28:56.074408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.626 [2024-04-25 03:28:56.074423] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.626 [2024-04-25 03:28:56.074437] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:21.626 [2024-04-25 03:28:56.074485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.626 qpair failed and we were unable to recover it. 
00:28:21.626 [2024-04-25 03:28:56.084136] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.626 [2024-04-25 03:28:56.084314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.626 [2024-04-25 03:28:56.084339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.626 [2024-04-25 03:28:56.084353] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.626 [2024-04-25 03:28:56.084366] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:21.626 [2024-04-25 03:28:56.084395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.626 qpair failed and we were unable to recover it. 
00:28:21.626 [2024-04-25 03:28:56.094144] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.626 [2024-04-25 03:28:56.094315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.626 [2024-04-25 03:28:56.094340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.626 [2024-04-25 03:28:56.094354] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.626 [2024-04-25 03:28:56.094367] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:21.626 [2024-04-25 03:28:56.094395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.626 qpair failed and we were unable to recover it. 
00:28:21.626 [2024-04-25 03:28:56.104176] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.626 [2024-04-25 03:28:56.104352] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.627 [2024-04-25 03:28:56.104376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.627 [2024-04-25 03:28:56.104391] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.627 [2024-04-25 03:28:56.104404] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:21.627 [2024-04-25 03:28:56.104432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.627 qpair failed and we were unable to recover it. 
00:28:21.627 [2024-04-25 03:28:56.114239] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.627 [2024-04-25 03:28:56.114415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.627 [2024-04-25 03:28:56.114439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.627 [2024-04-25 03:28:56.114454] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.627 [2024-04-25 03:28:56.114466] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:21.627 [2024-04-25 03:28:56.114495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.627 qpair failed and we were unable to recover it. 
00:28:21.886 [2024-04-25 03:28:56.124260] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.886 [2024-04-25 03:28:56.124459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.886 [2024-04-25 03:28:56.124489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.886 [2024-04-25 03:28:56.124505] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.886 [2024-04-25 03:28:56.124519] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:21.886 [2024-04-25 03:28:56.124547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.886 qpair failed and we were unable to recover it. 
00:28:21.886 [2024-04-25 03:28:56.134317] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.886 [2024-04-25 03:28:56.134511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.886 [2024-04-25 03:28:56.134536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.886 [2024-04-25 03:28:56.134550] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.886 [2024-04-25 03:28:56.134563] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:21.886 [2024-04-25 03:28:56.134592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.886 qpair failed and we were unable to recover it. 
00:28:21.886 [2024-04-25 03:28:56.144316] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.886 [2024-04-25 03:28:56.144489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.886 [2024-04-25 03:28:56.144513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.886 [2024-04-25 03:28:56.144527] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.886 [2024-04-25 03:28:56.144540] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:21.886 [2024-04-25 03:28:56.144569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.886 qpair failed and we were unable to recover it. 
00:28:21.886 [2024-04-25 03:28:56.154386] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.886 [2024-04-25 03:28:56.154562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.886 [2024-04-25 03:28:56.154587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.886 [2024-04-25 03:28:56.154602] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.886 [2024-04-25 03:28:56.154614] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:21.886 [2024-04-25 03:28:56.154648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.886 qpair failed and we were unable to recover it. 
00:28:21.886 [2024-04-25 03:28:56.164382] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.886 [2024-04-25 03:28:56.164558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.886 [2024-04-25 03:28:56.164582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.886 [2024-04-25 03:28:56.164612] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.886 [2024-04-25 03:28:56.164626] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:21.886 [2024-04-25 03:28:56.164683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.886 qpair failed and we were unable to recover it. 
00:28:21.886 [2024-04-25 03:28:56.174407] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.886 [2024-04-25 03:28:56.174589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.886 [2024-04-25 03:28:56.174614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.886 [2024-04-25 03:28:56.174634] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.886 [2024-04-25 03:28:56.174649] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:21.886 [2024-04-25 03:28:56.174678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.886 qpair failed and we were unable to recover it. 
00:28:21.886 [2024-04-25 03:28:56.184437] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.886 [2024-04-25 03:28:56.184662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.886 [2024-04-25 03:28:56.184686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.886 [2024-04-25 03:28:56.184701] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.886 [2024-04-25 03:28:56.184714] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:21.886 [2024-04-25 03:28:56.184743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.886 qpair failed and we were unable to recover it. 
00:28:21.886 [2024-04-25 03:28:56.194471] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.886 [2024-04-25 03:28:56.194657] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.886 [2024-04-25 03:28:56.194682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.886 [2024-04-25 03:28:56.194697] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.886 [2024-04-25 03:28:56.194709] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:21.886 [2024-04-25 03:28:56.194737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.886 qpair failed and we were unable to recover it. 
00:28:21.886 [2024-04-25 03:28:56.204499] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.886 [2024-04-25 03:28:56.204699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.886 [2024-04-25 03:28:56.204724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.886 [2024-04-25 03:28:56.204739] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.886 [2024-04-25 03:28:56.204752] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:21.886 [2024-04-25 03:28:56.204781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.886 qpair failed and we were unable to recover it. 
00:28:21.886 [2024-04-25 03:28:56.214550] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.886 [2024-04-25 03:28:56.214733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.886 [2024-04-25 03:28:56.214763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.886 [2024-04-25 03:28:56.214778] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.886 [2024-04-25 03:28:56.214792] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:21.886 [2024-04-25 03:28:56.214821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.886 qpair failed and we were unable to recover it. 
00:28:21.886 [2024-04-25 03:28:56.224581] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.887 [2024-04-25 03:28:56.224762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.887 [2024-04-25 03:28:56.224786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.887 [2024-04-25 03:28:56.224801] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.887 [2024-04-25 03:28:56.224813] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:21.887 [2024-04-25 03:28:56.224843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.887 qpair failed and we were unable to recover it. 
00:28:21.887 [2024-04-25 03:28:56.234597] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.887 [2024-04-25 03:28:56.234781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.887 [2024-04-25 03:28:56.234807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.887 [2024-04-25 03:28:56.234822] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.887 [2024-04-25 03:28:56.234834] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:21.887 [2024-04-25 03:28:56.234863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.887 qpair failed and we were unable to recover it. 
00:28:21.887 [2024-04-25 03:28:56.244622] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.887 [2024-04-25 03:28:56.244803] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.887 [2024-04-25 03:28:56.244828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.887 [2024-04-25 03:28:56.244842] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.887 [2024-04-25 03:28:56.244855] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:21.887 [2024-04-25 03:28:56.244883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.887 qpair failed and we were unable to recover it. 
00:28:21.887 [2024-04-25 03:28:56.254642] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.887 [2024-04-25 03:28:56.254857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.887 [2024-04-25 03:28:56.254881] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.887 [2024-04-25 03:28:56.254896] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.887 [2024-04-25 03:28:56.254915] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:21.887 [2024-04-25 03:28:56.254944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.887 qpair failed and we were unable to recover it. 
00:28:21.887 [2024-04-25 03:28:56.264708] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.887 [2024-04-25 03:28:56.264897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.887 [2024-04-25 03:28:56.264922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.887 [2024-04-25 03:28:56.264937] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.887 [2024-04-25 03:28:56.264950] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:21.887 [2024-04-25 03:28:56.264979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.887 qpair failed and we were unable to recover it. 
00:28:21.887 [2024-04-25 03:28:56.274711] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.887 [2024-04-25 03:28:56.274924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.887 [2024-04-25 03:28:56.274949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.887 [2024-04-25 03:28:56.274964] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.887 [2024-04-25 03:28:56.274977] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:21.887 [2024-04-25 03:28:56.275005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.887 qpair failed and we were unable to recover it. 
00:28:21.887 [2024-04-25 03:28:56.284745] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.887 [2024-04-25 03:28:56.284921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.887 [2024-04-25 03:28:56.284947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.887 [2024-04-25 03:28:56.284961] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.887 [2024-04-25 03:28:56.284974] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:21.887 [2024-04-25 03:28:56.285003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.887 qpair failed and we were unable to recover it. 
00:28:21.887 [2024-04-25 03:28:56.294789] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.887 [2024-04-25 03:28:56.295001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.887 [2024-04-25 03:28:56.295025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.887 [2024-04-25 03:28:56.295040] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.887 [2024-04-25 03:28:56.295053] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:21.887 [2024-04-25 03:28:56.295080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.887 qpair failed and we were unable to recover it. 
00:28:21.887 [2024-04-25 03:28:56.304767] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.887 [2024-04-25 03:28:56.304941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.887 [2024-04-25 03:28:56.304966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.887 [2024-04-25 03:28:56.304981] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.887 [2024-04-25 03:28:56.304995] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:21.887 [2024-04-25 03:28:56.305025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.887 qpair failed and we were unable to recover it. 
00:28:21.887 [2024-04-25 03:28:56.314818] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.887 [2024-04-25 03:28:56.315001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.887 [2024-04-25 03:28:56.315025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.887 [2024-04-25 03:28:56.315040] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.887 [2024-04-25 03:28:56.315052] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:21.887 [2024-04-25 03:28:56.315080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.887 qpair failed and we were unable to recover it. 
00:28:21.887 [2024-04-25 03:28:56.324883] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.887 [2024-04-25 03:28:56.325072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.887 [2024-04-25 03:28:56.325111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.887 [2024-04-25 03:28:56.325125] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.887 [2024-04-25 03:28:56.325138] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:21.887 [2024-04-25 03:28:56.325181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.887 qpair failed and we were unable to recover it. 
00:28:21.887 [2024-04-25 03:28:56.334901] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.887 [2024-04-25 03:28:56.335074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.887 [2024-04-25 03:28:56.335099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.887 [2024-04-25 03:28:56.335113] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.887 [2024-04-25 03:28:56.335126] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:21.887 [2024-04-25 03:28:56.335154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.887 qpair failed and we were unable to recover it.
00:28:21.887 [2024-04-25 03:28:56.344925] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.887 [2024-04-25 03:28:56.345104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.887 [2024-04-25 03:28:56.345128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.887 [2024-04-25 03:28:56.345143] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.887 [2024-04-25 03:28:56.345161] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:21.887 [2024-04-25 03:28:56.345190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.887 qpair failed and we were unable to recover it.
00:28:21.887 [2024-04-25 03:28:56.354971] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.887 [2024-04-25 03:28:56.355147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.887 [2024-04-25 03:28:56.355172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.887 [2024-04-25 03:28:56.355186] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.887 [2024-04-25 03:28:56.355199] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:21.888 [2024-04-25 03:28:56.355229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.888 qpair failed and we were unable to recover it.
00:28:21.888 [2024-04-25 03:28:56.364963] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.888 [2024-04-25 03:28:56.365133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.888 [2024-04-25 03:28:56.365157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.888 [2024-04-25 03:28:56.365172] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.888 [2024-04-25 03:28:56.365185] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:21.888 [2024-04-25 03:28:56.365213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.888 qpair failed and we were unable to recover it.
00:28:21.888 [2024-04-25 03:28:56.374997] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.888 [2024-04-25 03:28:56.375160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.888 [2024-04-25 03:28:56.375184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.888 [2024-04-25 03:28:56.375199] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.888 [2024-04-25 03:28:56.375212] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:21.888 [2024-04-25 03:28:56.375240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.888 qpair failed and we were unable to recover it.
00:28:22.147 [2024-04-25 03:28:56.385076] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:22.147 [2024-04-25 03:28:56.385249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:22.147 [2024-04-25 03:28:56.385274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:22.147 [2024-04-25 03:28:56.385288] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:22.147 [2024-04-25 03:28:56.385301] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:22.147 [2024-04-25 03:28:56.385329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:22.147 qpair failed and we were unable to recover it.
00:28:22.147 [2024-04-25 03:28:56.395054] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:22.147 [2024-04-25 03:28:56.395237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:22.147 [2024-04-25 03:28:56.395262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:22.147 [2024-04-25 03:28:56.395276] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:22.147 [2024-04-25 03:28:56.395289] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:22.147 [2024-04-25 03:28:56.395317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:22.147 qpair failed and we were unable to recover it.
00:28:22.147 [2024-04-25 03:28:56.405060] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:22.147 [2024-04-25 03:28:56.405234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:22.147 [2024-04-25 03:28:56.405258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:22.147 [2024-04-25 03:28:56.405273] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:22.147 [2024-04-25 03:28:56.405286] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:22.147 [2024-04-25 03:28:56.405314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:22.147 qpair failed and we were unable to recover it.
00:28:22.147 [2024-04-25 03:28:56.415139] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:22.147 [2024-04-25 03:28:56.415309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:22.147 [2024-04-25 03:28:56.415334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:22.147 [2024-04-25 03:28:56.415349] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:22.147 [2024-04-25 03:28:56.415362] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:22.147 [2024-04-25 03:28:56.415390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:22.147 qpair failed and we were unable to recover it.
00:28:22.147 [2024-04-25 03:28:56.425157] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:22.147 [2024-04-25 03:28:56.425325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:22.147 [2024-04-25 03:28:56.425351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:22.147 [2024-04-25 03:28:56.425366] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:22.147 [2024-04-25 03:28:56.425379] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:22.147 [2024-04-25 03:28:56.425407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:22.147 qpair failed and we were unable to recover it.
00:28:22.147 [2024-04-25 03:28:56.435158] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:22.147 [2024-04-25 03:28:56.435377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:22.147 [2024-04-25 03:28:56.435403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:22.147 [2024-04-25 03:28:56.435418] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:22.147 [2024-04-25 03:28:56.435435] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:22.147 [2024-04-25 03:28:56.435464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:22.147 qpair failed and we were unable to recover it.
00:28:22.147 [2024-04-25 03:28:56.445189] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:22.147 [2024-04-25 03:28:56.445363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:22.147 [2024-04-25 03:28:56.445388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:22.147 [2024-04-25 03:28:56.445403] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:22.147 [2024-04-25 03:28:56.445416] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:22.147 [2024-04-25 03:28:56.445445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:22.147 qpair failed and we were unable to recover it.
00:28:22.147 [2024-04-25 03:28:56.455220] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:22.147 [2024-04-25 03:28:56.455391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:22.147 [2024-04-25 03:28:56.455417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:22.147 [2024-04-25 03:28:56.455433] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:22.147 [2024-04-25 03:28:56.455446] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:22.147 [2024-04-25 03:28:56.455476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:22.147 qpair failed and we were unable to recover it.
00:28:22.147 [2024-04-25 03:28:56.465282] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:22.147 [2024-04-25 03:28:56.465444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:22.147 [2024-04-25 03:28:56.465471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:22.147 [2024-04-25 03:28:56.465486] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:22.147 [2024-04-25 03:28:56.465498] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:22.147 [2024-04-25 03:28:56.465527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:22.147 qpair failed and we were unable to recover it.
00:28:22.147 [2024-04-25 03:28:56.475400] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:22.147 [2024-04-25 03:28:56.475580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:22.147 [2024-04-25 03:28:56.475605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:22.147 [2024-04-25 03:28:56.475620] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:22.147 [2024-04-25 03:28:56.475640] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30
00:28:22.147 [2024-04-25 03:28:56.475669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:22.147 qpair failed and we were unable to recover it.
00:28:22.147 [2024-04-25 03:28:56.485359] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:22.147 [2024-04-25 03:28:56.485571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:22.148 [2024-04-25 03:28:56.485606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:22.148 [2024-04-25 03:28:56.485623] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:22.148 [2024-04-25 03:28:56.485644] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90
00:28:22.148 [2024-04-25 03:28:56.485677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:22.148 qpair failed and we were unable to recover it.
00:28:22.148 [2024-04-25 03:28:56.495361] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:22.148 [2024-04-25 03:28:56.495534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:22.148 [2024-04-25 03:28:56.495563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:22.148 [2024-04-25 03:28:56.495579] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:22.148 [2024-04-25 03:28:56.495591] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90
00:28:22.148 [2024-04-25 03:28:56.495644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:22.148 qpair failed and we were unable to recover it.
00:28:22.148 [2024-04-25 03:28:56.505392] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:22.148 [2024-04-25 03:28:56.505567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:22.148 [2024-04-25 03:28:56.505595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:22.148 [2024-04-25 03:28:56.505610] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:22.148 [2024-04-25 03:28:56.505623] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90
00:28:22.148 [2024-04-25 03:28:56.505665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:22.148 qpair failed and we were unable to recover it.
00:28:22.148 [2024-04-25 03:28:56.515420] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:22.148 [2024-04-25 03:28:56.515616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:22.148 [2024-04-25 03:28:56.515652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:22.148 [2024-04-25 03:28:56.515668] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:22.148 [2024-04-25 03:28:56.515681] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90
00:28:22.148 [2024-04-25 03:28:56.515711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:22.148 qpair failed and we were unable to recover it.
00:28:22.148 [2024-04-25 03:28:56.525444] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:22.148 [2024-04-25 03:28:56.525616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:22.148 [2024-04-25 03:28:56.525651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:22.148 [2024-04-25 03:28:56.525673] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:22.148 [2024-04-25 03:28:56.525687] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90
00:28:22.148 [2024-04-25 03:28:56.525717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:22.148 qpair failed and we were unable to recover it.
00:28:22.148 [2024-04-25 03:28:56.535482] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:22.148 [2024-04-25 03:28:56.535667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:22.148 [2024-04-25 03:28:56.535695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:22.148 [2024-04-25 03:28:56.535711] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:22.148 [2024-04-25 03:28:56.535727] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90
00:28:22.148 [2024-04-25 03:28:56.535757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:22.148 qpair failed and we were unable to recover it.
00:28:22.148 [2024-04-25 03:28:56.545495] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:22.148 [2024-04-25 03:28:56.545675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:22.148 [2024-04-25 03:28:56.545704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:22.148 [2024-04-25 03:28:56.545719] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:22.148 [2024-04-25 03:28:56.545732] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90
00:28:22.148 [2024-04-25 03:28:56.545761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:22.148 qpair failed and we were unable to recover it.
00:28:22.148 [2024-04-25 03:28:56.555506] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:22.148 [2024-04-25 03:28:56.555678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:22.148 [2024-04-25 03:28:56.555715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:22.148 [2024-04-25 03:28:56.555731] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:22.148 [2024-04-25 03:28:56.555743] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90
00:28:22.148 [2024-04-25 03:28:56.555774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:22.148 qpair failed and we were unable to recover it.
00:28:22.148 [2024-04-25 03:28:56.565523] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:22.148 [2024-04-25 03:28:56.565696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:22.148 [2024-04-25 03:28:56.565722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:22.148 [2024-04-25 03:28:56.565738] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:22.148 [2024-04-25 03:28:56.565750] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90
00:28:22.148 [2024-04-25 03:28:56.565780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:22.148 qpair failed and we were unable to recover it.
00:28:22.148 [2024-04-25 03:28:56.575680] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:22.148 [2024-04-25 03:28:56.575862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:22.148 [2024-04-25 03:28:56.575889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:22.148 [2024-04-25 03:28:56.575903] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:22.148 [2024-04-25 03:28:56.575917] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90
00:28:22.148 [2024-04-25 03:28:56.575947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:22.148 qpair failed and we were unable to recover it.
00:28:22.148 [2024-04-25 03:28:56.585647] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:22.148 [2024-04-25 03:28:56.585851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:22.148 [2024-04-25 03:28:56.585878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:22.148 [2024-04-25 03:28:56.585893] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:22.148 [2024-04-25 03:28:56.585905] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90
00:28:22.148 [2024-04-25 03:28:56.585947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:22.148 qpair failed and we were unable to recover it.
00:28:22.148 [2024-04-25 03:28:56.595674] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:22.148 [2024-04-25 03:28:56.595843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:22.148 [2024-04-25 03:28:56.595871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:22.148 [2024-04-25 03:28:56.595886] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:22.148 [2024-04-25 03:28:56.595899] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90
00:28:22.148 [2024-04-25 03:28:56.595929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:22.148 qpair failed and we were unable to recover it.
00:28:22.148 [2024-04-25 03:28:56.605682] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:22.148 [2024-04-25 03:28:56.605867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:22.148 [2024-04-25 03:28:56.605896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:22.148 [2024-04-25 03:28:56.605915] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:22.148 [2024-04-25 03:28:56.605928] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90
00:28:22.148 [2024-04-25 03:28:56.605959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:22.148 qpair failed and we were unable to recover it.
00:28:22.148 [2024-04-25 03:28:56.615682] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:22.148 [2024-04-25 03:28:56.615851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:22.148 [2024-04-25 03:28:56.615883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:22.148 [2024-04-25 03:28:56.615899] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:22.148 [2024-04-25 03:28:56.615912] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90
00:28:22.149 [2024-04-25 03:28:56.615942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:22.149 qpair failed and we were unable to recover it.
00:28:22.149 [2024-04-25 03:28:56.625711] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:22.149 [2024-04-25 03:28:56.625881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:22.149 [2024-04-25 03:28:56.625909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:22.149 [2024-04-25 03:28:56.625925] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:22.149 [2024-04-25 03:28:56.625937] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90
00:28:22.149 [2024-04-25 03:28:56.625969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:22.149 qpair failed and we were unable to recover it.
00:28:22.149 [2024-04-25 03:28:56.635747] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:22.149 [2024-04-25 03:28:56.635922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:22.149 [2024-04-25 03:28:56.635949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:22.149 [2024-04-25 03:28:56.635964] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:22.149 [2024-04-25 03:28:56.635977] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90
00:28:22.149 [2024-04-25 03:28:56.636007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:22.149 qpair failed and we were unable to recover it.
00:28:22.408 [2024-04-25 03:28:56.645880] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:22.408 [2024-04-25 03:28:56.646060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:22.408 [2024-04-25 03:28:56.646086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:22.408 [2024-04-25 03:28:56.646113] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:22.408 [2024-04-25 03:28:56.646127] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90
00:28:22.408 [2024-04-25 03:28:56.646156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:22.408 qpair failed and we were unable to recover it.
00:28:22.408 [2024-04-25 03:28:56.655848] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:22.408 [2024-04-25 03:28:56.656021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:22.408 [2024-04-25 03:28:56.656048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:22.408 [2024-04-25 03:28:56.656064] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:22.408 [2024-04-25 03:28:56.656077] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90
00:28:22.408 [2024-04-25 03:28:56.656112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:22.408 qpair failed and we were unable to recover it.
00:28:22.408 [2024-04-25 03:28:56.665910] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:22.408 [2024-04-25 03:28:56.666079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:22.408 [2024-04-25 03:28:56.666105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:22.408 [2024-04-25 03:28:56.666121] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:22.408 [2024-04-25 03:28:56.666133] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90
00:28:22.408 [2024-04-25 03:28:56.666163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:22.408 qpair failed and we were unable to recover it.
00:28:22.408 [2024-04-25 03:28:56.675883] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:22.408 [2024-04-25 03:28:56.676062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:22.408 [2024-04-25 03:28:56.676088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:22.408 [2024-04-25 03:28:56.676103] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:22.408 [2024-04-25 03:28:56.676116] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90
00:28:22.408 [2024-04-25 03:28:56.676146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:22.408 qpair failed and we were unable to recover it.
00:28:22.408 [2024-04-25 03:28:56.685890] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:22.408 [2024-04-25 03:28:56.686058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:22.408 [2024-04-25 03:28:56.686084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:22.408 [2024-04-25 03:28:56.686099] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:22.408 [2024-04-25 03:28:56.686112] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90
00:28:22.408 [2024-04-25 03:28:56.686141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:22.408 qpair failed and we were unable to recover it.
00:28:22.408 [2024-04-25 03:28:56.695912] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.408 [2024-04-25 03:28:56.696076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.408 [2024-04-25 03:28:56.696103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.408 [2024-04-25 03:28:56.696118] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.408 [2024-04-25 03:28:56.696130] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:22.408 [2024-04-25 03:28:56.696160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:22.408 qpair failed and we were unable to recover it. 
00:28:22.408 [2024-04-25 03:28:56.705975] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.408 [2024-04-25 03:28:56.706150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.408 [2024-04-25 03:28:56.706182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.408 [2024-04-25 03:28:56.706198] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.408 [2024-04-25 03:28:56.706211] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:22.408 [2024-04-25 03:28:56.706240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:22.408 qpair failed and we were unable to recover it. 
00:28:22.408 [2024-04-25 03:28:56.715994] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.408 [2024-04-25 03:28:56.716174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.408 [2024-04-25 03:28:56.716200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.408 [2024-04-25 03:28:56.716215] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.408 [2024-04-25 03:28:56.716231] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:22.408 [2024-04-25 03:28:56.716262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:22.408 qpair failed and we were unable to recover it. 
00:28:22.408 [2024-04-25 03:28:56.725998] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.408 [2024-04-25 03:28:56.726173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.409 [2024-04-25 03:28:56.726200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.409 [2024-04-25 03:28:56.726214] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.409 [2024-04-25 03:28:56.726227] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:22.409 [2024-04-25 03:28:56.726256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:22.409 qpair failed and we were unable to recover it. 
00:28:22.409 [2024-04-25 03:28:56.736038] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.409 [2024-04-25 03:28:56.736229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.409 [2024-04-25 03:28:56.736255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.409 [2024-04-25 03:28:56.736271] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.409 [2024-04-25 03:28:56.736283] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:22.409 [2024-04-25 03:28:56.736313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:22.409 qpair failed and we were unable to recover it. 
00:28:22.409 [2024-04-25 03:28:56.746124] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.409 [2024-04-25 03:28:56.746294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.409 [2024-04-25 03:28:56.746319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.409 [2024-04-25 03:28:56.746335] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.409 [2024-04-25 03:28:56.746347] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:22.409 [2024-04-25 03:28:56.746383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:22.409 qpair failed and we were unable to recover it. 
00:28:22.409 [2024-04-25 03:28:56.756111] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.409 [2024-04-25 03:28:56.756331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.409 [2024-04-25 03:28:56.756357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.409 [2024-04-25 03:28:56.756373] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.409 [2024-04-25 03:28:56.756386] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:22.409 [2024-04-25 03:28:56.756415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:22.409 qpair failed and we were unable to recover it. 
00:28:22.409 [2024-04-25 03:28:56.766139] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.409 [2024-04-25 03:28:56.766307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.409 [2024-04-25 03:28:56.766333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.409 [2024-04-25 03:28:56.766349] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.409 [2024-04-25 03:28:56.766361] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:22.409 [2024-04-25 03:28:56.766391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:22.409 qpair failed and we were unable to recover it. 
00:28:22.409 [2024-04-25 03:28:56.776176] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.409 [2024-04-25 03:28:56.776353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.409 [2024-04-25 03:28:56.776377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.409 [2024-04-25 03:28:56.776392] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.409 [2024-04-25 03:28:56.776405] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:22.409 [2024-04-25 03:28:56.776435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:22.409 qpair failed and we were unable to recover it. 
00:28:22.409 [2024-04-25 03:28:56.786176] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.409 [2024-04-25 03:28:56.786346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.409 [2024-04-25 03:28:56.786374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.409 [2024-04-25 03:28:56.786389] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.409 [2024-04-25 03:28:56.786402] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:22.409 [2024-04-25 03:28:56.786432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:22.409 qpair failed and we were unable to recover it. 
00:28:22.409 [2024-04-25 03:28:56.796308] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.409 [2024-04-25 03:28:56.796516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.409 [2024-04-25 03:28:56.796550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.409 [2024-04-25 03:28:56.796569] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.409 [2024-04-25 03:28:56.796584] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:22.409 [2024-04-25 03:28:56.796615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:22.409 qpair failed and we were unable to recover it. 
00:28:22.409 [2024-04-25 03:28:56.806228] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.409 [2024-04-25 03:28:56.806400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.409 [2024-04-25 03:28:56.806427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.409 [2024-04-25 03:28:56.806442] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.409 [2024-04-25 03:28:56.806455] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:22.409 [2024-04-25 03:28:56.806485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:22.409 qpair failed and we were unable to recover it. 
00:28:22.409 [2024-04-25 03:28:56.816259] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.409 [2024-04-25 03:28:56.816422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.409 [2024-04-25 03:28:56.816449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.409 [2024-04-25 03:28:56.816465] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.409 [2024-04-25 03:28:56.816478] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:22.409 [2024-04-25 03:28:56.816508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:22.409 qpair failed and we were unable to recover it. 
00:28:22.409 [2024-04-25 03:28:56.826287] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.409 [2024-04-25 03:28:56.826463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.409 [2024-04-25 03:28:56.826490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.409 [2024-04-25 03:28:56.826505] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.409 [2024-04-25 03:28:56.826518] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:22.409 [2024-04-25 03:28:56.826547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:22.409 qpair failed and we were unable to recover it. 
00:28:22.409 [2024-04-25 03:28:56.836339] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.409 [2024-04-25 03:28:56.836508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.409 [2024-04-25 03:28:56.836535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.409 [2024-04-25 03:28:56.836551] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.409 [2024-04-25 03:28:56.836569] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:22.409 [2024-04-25 03:28:56.836601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:22.409 qpair failed and we were unable to recover it. 
00:28:22.409 [2024-04-25 03:28:56.846370] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.409 [2024-04-25 03:28:56.846580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.409 [2024-04-25 03:28:56.846607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.409 [2024-04-25 03:28:56.846622] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.409 [2024-04-25 03:28:56.846643] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:22.409 [2024-04-25 03:28:56.846675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:22.409 qpair failed and we were unable to recover it. 
00:28:22.409 [2024-04-25 03:28:56.856445] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.409 [2024-04-25 03:28:56.856623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.409 [2024-04-25 03:28:56.856658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.409 [2024-04-25 03:28:56.856674] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.409 [2024-04-25 03:28:56.856686] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:22.410 [2024-04-25 03:28:56.856718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:22.410 qpair failed and we were unable to recover it. 
00:28:22.410 [2024-04-25 03:28:56.866444] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.410 [2024-04-25 03:28:56.866617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.410 [2024-04-25 03:28:56.866651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.410 [2024-04-25 03:28:56.866667] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.410 [2024-04-25 03:28:56.866680] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:22.410 [2024-04-25 03:28:56.866710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:22.410 qpair failed and we were unable to recover it. 
00:28:22.410 [2024-04-25 03:28:56.876466] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.410 [2024-04-25 03:28:56.876651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.410 [2024-04-25 03:28:56.876681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.410 [2024-04-25 03:28:56.876697] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.410 [2024-04-25 03:28:56.876710] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:22.410 [2024-04-25 03:28:56.876740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:22.410 qpair failed and we were unable to recover it. 
00:28:22.410 [2024-04-25 03:28:56.886496] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.410 [2024-04-25 03:28:56.886687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.410 [2024-04-25 03:28:56.886715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.410 [2024-04-25 03:28:56.886730] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.410 [2024-04-25 03:28:56.886743] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:22.410 [2024-04-25 03:28:56.886772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:22.410 qpair failed and we were unable to recover it. 
00:28:22.410 [2024-04-25 03:28:56.896502] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.410 [2024-04-25 03:28:56.896730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.410 [2024-04-25 03:28:56.896757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.410 [2024-04-25 03:28:56.896773] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.410 [2024-04-25 03:28:56.896785] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:22.410 [2024-04-25 03:28:56.896815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:22.410 qpair failed and we were unable to recover it. 
00:28:22.410 [2024-04-25 03:28:56.906533] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.410 [2024-04-25 03:28:56.906721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.410 [2024-04-25 03:28:56.906747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.410 [2024-04-25 03:28:56.906762] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.410 [2024-04-25 03:28:56.906774] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:22.410 [2024-04-25 03:28:56.906808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:22.410 qpair failed and we were unable to recover it. 
00:28:22.669 [2024-04-25 03:28:56.916565] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.669 [2024-04-25 03:28:56.916757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.669 [2024-04-25 03:28:56.916783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.669 [2024-04-25 03:28:56.916798] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.670 [2024-04-25 03:28:56.916811] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:22.670 [2024-04-25 03:28:56.916841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:22.670 qpair failed and we were unable to recover it. 
00:28:22.670 [2024-04-25 03:28:56.926581] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.670 [2024-04-25 03:28:56.926770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.670 [2024-04-25 03:28:56.926797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.670 [2024-04-25 03:28:56.926817] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.670 [2024-04-25 03:28:56.926830] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:22.670 [2024-04-25 03:28:56.926872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:22.670 qpair failed and we were unable to recover it. 
00:28:22.670 [2024-04-25 03:28:56.936617] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.670 [2024-04-25 03:28:56.936808] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.670 [2024-04-25 03:28:56.936834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.670 [2024-04-25 03:28:56.936850] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.670 [2024-04-25 03:28:56.936863] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:22.670 [2024-04-25 03:28:56.936905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:22.670 qpair failed and we were unable to recover it. 
00:28:22.670 [2024-04-25 03:28:56.946695] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.670 [2024-04-25 03:28:56.946862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.670 [2024-04-25 03:28:56.946888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.670 [2024-04-25 03:28:56.946903] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.670 [2024-04-25 03:28:56.946916] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:22.670 [2024-04-25 03:28:56.946946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:22.670 qpair failed and we were unable to recover it. 
00:28:22.670 [2024-04-25 03:28:56.956804] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.670 [2024-04-25 03:28:56.957025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.670 [2024-04-25 03:28:56.957052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.670 [2024-04-25 03:28:56.957067] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.670 [2024-04-25 03:28:56.957085] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:22.670 [2024-04-25 03:28:56.957115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:22.670 qpair failed and we were unable to recover it. 
00:28:22.670 [2024-04-25 03:28:56.966754] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.670 [2024-04-25 03:28:56.966979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.670 [2024-04-25 03:28:56.967008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.670 [2024-04-25 03:28:56.967031] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.670 [2024-04-25 03:28:56.967043] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:22.670 [2024-04-25 03:28:56.967075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:22.670 qpair failed and we were unable to recover it. 
00:28:22.670 [2024-04-25 03:28:56.976729] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.670 [2024-04-25 03:28:56.976908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.670 [2024-04-25 03:28:56.976934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.670 [2024-04-25 03:28:56.976950] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.670 [2024-04-25 03:28:56.976963] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:22.670 [2024-04-25 03:28:56.976995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:22.670 qpair failed and we were unable to recover it. 
00:28:22.670 [2024-04-25 03:28:56.986744] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.670 [2024-04-25 03:28:56.986927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.670 [2024-04-25 03:28:56.986953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.670 [2024-04-25 03:28:56.986968] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.670 [2024-04-25 03:28:56.986981] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:22.670 [2024-04-25 03:28:56.987020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:22.670 qpair failed and we were unable to recover it. 
00:28:22.670 [2024-04-25 03:28:56.996813] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.670 [2024-04-25 03:28:56.997020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.670 [2024-04-25 03:28:56.997046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.670 [2024-04-25 03:28:56.997061] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.670 [2024-04-25 03:28:56.997074] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:22.670 [2024-04-25 03:28:56.997103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:22.670 qpair failed and we were unable to recover it. 
00:28:22.670 [2024-04-25 03:28:57.006810] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.670 [2024-04-25 03:28:57.007002] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.670 [2024-04-25 03:28:57.007029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.670 [2024-04-25 03:28:57.007047] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.670 [2024-04-25 03:28:57.007060] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:22.670 [2024-04-25 03:28:57.007091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:22.670 qpair failed and we were unable to recover it. 
00:28:22.670 [2024-04-25 03:28:57.016865] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.670 [2024-04-25 03:28:57.017044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.670 [2024-04-25 03:28:57.017071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.670 [2024-04-25 03:28:57.017092] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.670 [2024-04-25 03:28:57.017106] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:22.670 [2024-04-25 03:28:57.017135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:22.670 qpair failed and we were unable to recover it. 
00:28:22.670 [2024-04-25 03:28:57.026948] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.670 [2024-04-25 03:28:57.027122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.670 [2024-04-25 03:28:57.027148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.670 [2024-04-25 03:28:57.027163] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.670 [2024-04-25 03:28:57.027176] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:22.670 [2024-04-25 03:28:57.027205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:22.670 qpair failed and we were unable to recover it. 
00:28:22.670 [2024-04-25 03:28:57.036979] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.670 [2024-04-25 03:28:57.037169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.670 [2024-04-25 03:28:57.037195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.670 [2024-04-25 03:28:57.037210] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.670 [2024-04-25 03:28:57.037222] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:22.670 [2024-04-25 03:28:57.037251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:22.670 qpair failed and we were unable to recover it. 
00:28:22.670 [2024-04-25 03:28:57.046924] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.670 [2024-04-25 03:28:57.047092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.670 [2024-04-25 03:28:57.047118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.670 [2024-04-25 03:28:57.047133] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.671 [2024-04-25 03:28:57.047145] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:22.671 [2024-04-25 03:28:57.047175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:22.671 qpair failed and we were unable to recover it. 
00:28:22.671 [2024-04-25 03:28:57.056949] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.671 [2024-04-25 03:28:57.057126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.671 [2024-04-25 03:28:57.057153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.671 [2024-04-25 03:28:57.057169] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.671 [2024-04-25 03:28:57.057181] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:22.671 [2024-04-25 03:28:57.057212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:22.671 qpair failed and we were unable to recover it. 
00:28:22.671 [2024-04-25 03:28:57.066960] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.671 [2024-04-25 03:28:57.067132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.671 [2024-04-25 03:28:57.067159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.671 [2024-04-25 03:28:57.067174] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.671 [2024-04-25 03:28:57.067186] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:22.671 [2024-04-25 03:28:57.067216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:22.671 qpair failed and we were unable to recover it. 
00:28:22.671 [2024-04-25 03:28:57.077041] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.671 [2024-04-25 03:28:57.077222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.671 [2024-04-25 03:28:57.077248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.671 [2024-04-25 03:28:57.077264] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.671 [2024-04-25 03:28:57.077277] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:22.671 [2024-04-25 03:28:57.077307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:22.671 qpair failed and we were unable to recover it. 
00:28:22.671 [2024-04-25 03:28:57.087052] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.671 [2024-04-25 03:28:57.087232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.671 [2024-04-25 03:28:57.087258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.671 [2024-04-25 03:28:57.087274] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.671 [2024-04-25 03:28:57.087286] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:22.671 [2024-04-25 03:28:57.087316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:22.671 qpair failed and we were unable to recover it. 
00:28:22.671 [2024-04-25 03:28:57.097037] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.671 [2024-04-25 03:28:57.097214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.671 [2024-04-25 03:28:57.097240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.671 [2024-04-25 03:28:57.097256] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.671 [2024-04-25 03:28:57.097268] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:22.671 [2024-04-25 03:28:57.097297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:22.671 qpair failed and we were unable to recover it. 
00:28:22.671 [2024-04-25 03:28:57.107107] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.671 [2024-04-25 03:28:57.107289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.671 [2024-04-25 03:28:57.107322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.671 [2024-04-25 03:28:57.107342] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.671 [2024-04-25 03:28:57.107355] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:22.671 [2024-04-25 03:28:57.107387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:22.671 qpair failed and we were unable to recover it. 
00:28:22.671 [2024-04-25 03:28:57.117136] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.671 [2024-04-25 03:28:57.117313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.671 [2024-04-25 03:28:57.117340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.671 [2024-04-25 03:28:57.117355] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.671 [2024-04-25 03:28:57.117368] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:22.671 [2024-04-25 03:28:57.117398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:22.671 qpair failed and we were unable to recover it. 
00:28:22.671 [2024-04-25 03:28:57.127117] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.671 [2024-04-25 03:28:57.127302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.671 [2024-04-25 03:28:57.127328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.671 [2024-04-25 03:28:57.127343] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.671 [2024-04-25 03:28:57.127356] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:22.671 [2024-04-25 03:28:57.127385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:22.671 qpair failed and we were unable to recover it. 
00:28:22.671 [2024-04-25 03:28:57.137153] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.671 [2024-04-25 03:28:57.137325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.671 [2024-04-25 03:28:57.137351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.671 [2024-04-25 03:28:57.137367] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.671 [2024-04-25 03:28:57.137379] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:22.671 [2024-04-25 03:28:57.137408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:22.671 qpair failed and we were unable to recover it. 
00:28:22.671 [2024-04-25 03:28:57.147184] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.671 [2024-04-25 03:28:57.147353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.671 [2024-04-25 03:28:57.147380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.671 [2024-04-25 03:28:57.147395] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.671 [2024-04-25 03:28:57.147408] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:22.671 [2024-04-25 03:28:57.147445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:22.671 qpair failed and we were unable to recover it. 
00:28:22.671 [2024-04-25 03:28:57.157214] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.671 [2024-04-25 03:28:57.157403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.671 [2024-04-25 03:28:57.157429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.671 [2024-04-25 03:28:57.157445] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.671 [2024-04-25 03:28:57.157457] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:22.671 [2024-04-25 03:28:57.157486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:22.671 qpair failed and we were unable to recover it. 
00:28:22.671 [2024-04-25 03:28:57.167289] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.671 [2024-04-25 03:28:57.167469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.671 [2024-04-25 03:28:57.167496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.671 [2024-04-25 03:28:57.167511] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.671 [2024-04-25 03:28:57.167524] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:22.671 [2024-04-25 03:28:57.167556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:22.671 qpair failed and we were unable to recover it. 
00:28:22.931 [2024-04-25 03:28:57.177295] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.931 [2024-04-25 03:28:57.177473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.931 [2024-04-25 03:28:57.177499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.931 [2024-04-25 03:28:57.177529] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.931 [2024-04-25 03:28:57.177542] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:22.931 [2024-04-25 03:28:57.177598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:22.931 qpair failed and we were unable to recover it. 
00:28:22.931 [2024-04-25 03:28:57.187309] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.931 [2024-04-25 03:28:57.187525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.931 [2024-04-25 03:28:57.187552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.931 [2024-04-25 03:28:57.187567] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.931 [2024-04-25 03:28:57.187580] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:22.931 [2024-04-25 03:28:57.187610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:22.931 qpair failed and we were unable to recover it. 
00:28:22.931 [2024-04-25 03:28:57.197343] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.931 [2024-04-25 03:28:57.197525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.931 [2024-04-25 03:28:57.197557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.931 [2024-04-25 03:28:57.197572] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.931 [2024-04-25 03:28:57.197585] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:22.931 [2024-04-25 03:28:57.197614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:22.931 qpair failed and we were unable to recover it. 
00:28:22.931 [2024-04-25 03:28:57.207362] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.931 [2024-04-25 03:28:57.207547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.931 [2024-04-25 03:28:57.207573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.931 [2024-04-25 03:28:57.207588] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.931 [2024-04-25 03:28:57.207601] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:22.932 [2024-04-25 03:28:57.207636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:22.932 qpair failed and we were unable to recover it. 
00:28:22.932 [2024-04-25 03:28:57.217402] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.932 [2024-04-25 03:28:57.217589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.932 [2024-04-25 03:28:57.217616] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.932 [2024-04-25 03:28:57.217640] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.932 [2024-04-25 03:28:57.217655] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:22.932 [2024-04-25 03:28:57.217685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:22.932 qpair failed and we were unable to recover it. 
00:28:22.932 [2024-04-25 03:28:57.227438] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.932 [2024-04-25 03:28:57.227621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.932 [2024-04-25 03:28:57.227659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.932 [2024-04-25 03:28:57.227674] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.932 [2024-04-25 03:28:57.227687] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:22.932 [2024-04-25 03:28:57.227716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:22.932 qpair failed and we were unable to recover it. 
00:28:22.932 [2024-04-25 03:28:57.237456] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.932 [2024-04-25 03:28:57.237650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.932 [2024-04-25 03:28:57.237676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.932 [2024-04-25 03:28:57.237692] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.932 [2024-04-25 03:28:57.237710] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:22.932 [2024-04-25 03:28:57.237741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:22.932 qpair failed and we were unable to recover it. 
00:28:22.932 [2024-04-25 03:28:57.247474] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.932 [2024-04-25 03:28:57.247653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.932 [2024-04-25 03:28:57.247679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.932 [2024-04-25 03:28:57.247696] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.932 [2024-04-25 03:28:57.247709] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:22.932 [2024-04-25 03:28:57.247738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:22.932 qpair failed and we were unable to recover it. 
00:28:22.932 [2024-04-25 03:28:57.257489] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.932 [2024-04-25 03:28:57.257670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.932 [2024-04-25 03:28:57.257696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.932 [2024-04-25 03:28:57.257712] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.932 [2024-04-25 03:28:57.257724] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:22.932 [2024-04-25 03:28:57.257754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:22.932 qpair failed and we were unable to recover it. 
00:28:22.932 [2024-04-25 03:28:57.267592] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.932 [2024-04-25 03:28:57.267794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.932 [2024-04-25 03:28:57.267821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.932 [2024-04-25 03:28:57.267836] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.932 [2024-04-25 03:28:57.267849] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:22.932 [2024-04-25 03:28:57.267878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:22.932 qpair failed and we were unable to recover it. 
00:28:22.932 [2024-04-25 03:28:57.277551] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.932 [2024-04-25 03:28:57.277745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.932 [2024-04-25 03:28:57.277771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.932 [2024-04-25 03:28:57.277787] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.932 [2024-04-25 03:28:57.277799] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:22.932 [2024-04-25 03:28:57.277829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:22.932 qpair failed and we were unable to recover it. 
00:28:22.932 [2024-04-25 03:28:57.287583] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.932 [2024-04-25 03:28:57.287782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.932 [2024-04-25 03:28:57.287809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.932 [2024-04-25 03:28:57.287824] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.932 [2024-04-25 03:28:57.287836] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:22.932 [2024-04-25 03:28:57.287866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:22.932 qpair failed and we were unable to recover it. 
00:28:22.932 [2024-04-25 03:28:57.297605] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.932 [2024-04-25 03:28:57.297791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.932 [2024-04-25 03:28:57.297818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.932 [2024-04-25 03:28:57.297833] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.932 [2024-04-25 03:28:57.297846] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:22.932 [2024-04-25 03:28:57.297876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:22.932 qpair failed and we were unable to recover it. 
00:28:22.932 [2024-04-25 03:28:57.307641] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.932 [2024-04-25 03:28:57.307827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.932 [2024-04-25 03:28:57.307852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.932 [2024-04-25 03:28:57.307868] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.932 [2024-04-25 03:28:57.307881] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:22.932 [2024-04-25 03:28:57.307910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:22.932 qpair failed and we were unable to recover it. 
00:28:22.932 [2024-04-25 03:28:57.317663] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.932 [2024-04-25 03:28:57.317841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.932 [2024-04-25 03:28:57.317867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.932 [2024-04-25 03:28:57.317882] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.932 [2024-04-25 03:28:57.317895] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:22.932 [2024-04-25 03:28:57.317924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:22.932 qpair failed and we were unable to recover it. 
00:28:22.932 [2024-04-25 03:28:57.327694] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.932 [2024-04-25 03:28:57.327869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.932 [2024-04-25 03:28:57.327895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.932 [2024-04-25 03:28:57.327917] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.932 [2024-04-25 03:28:57.327931] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:22.932 [2024-04-25 03:28:57.327960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:22.932 qpair failed and we were unable to recover it. 
00:28:22.932 [2024-04-25 03:28:57.337737] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.932 [2024-04-25 03:28:57.337906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.932 [2024-04-25 03:28:57.337931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.932 [2024-04-25 03:28:57.337947] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.932 [2024-04-25 03:28:57.337960] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:22.932 [2024-04-25 03:28:57.337988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:22.932 qpair failed and we were unable to recover it. 
00:28:22.932 [2024-04-25 03:28:57.347754] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.933 [2024-04-25 03:28:57.347931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.933 [2024-04-25 03:28:57.347957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.933 [2024-04-25 03:28:57.347973] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.933 [2024-04-25 03:28:57.347985] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:22.933 [2024-04-25 03:28:57.348017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:22.933 qpair failed and we were unable to recover it. 
00:28:22.933 [2024-04-25 03:28:57.357772] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.933 [2024-04-25 03:28:57.358000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.933 [2024-04-25 03:28:57.358026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.933 [2024-04-25 03:28:57.358041] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.933 [2024-04-25 03:28:57.358054] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:22.933 [2024-04-25 03:28:57.358084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:22.933 qpair failed and we were unable to recover it. 
00:28:22.933 [2024-04-25 03:28:57.367806] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.933 [2024-04-25 03:28:57.368030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.933 [2024-04-25 03:28:57.368056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.933 [2024-04-25 03:28:57.368072] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.933 [2024-04-25 03:28:57.368084] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:22.933 [2024-04-25 03:28:57.368113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:22.933 qpair failed and we were unable to recover it. 
00:28:22.933 [2024-04-25 03:28:57.377813] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.933 [2024-04-25 03:28:57.377980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.933 [2024-04-25 03:28:57.378006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.933 [2024-04-25 03:28:57.378021] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.933 [2024-04-25 03:28:57.378033] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:22.933 [2024-04-25 03:28:57.378063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:22.933 qpair failed and we were unable to recover it. 
00:28:22.933 [2024-04-25 03:28:57.387872] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.933 [2024-04-25 03:28:57.388049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.933 [2024-04-25 03:28:57.388075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.933 [2024-04-25 03:28:57.388090] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.933 [2024-04-25 03:28:57.388103] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:22.933 [2024-04-25 03:28:57.388132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:22.933 qpair failed and we were unable to recover it. 
00:28:22.933 [2024-04-25 03:28:57.397890] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.933 [2024-04-25 03:28:57.398067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.933 [2024-04-25 03:28:57.398093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.933 [2024-04-25 03:28:57.398109] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.933 [2024-04-25 03:28:57.398122] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:22.933 [2024-04-25 03:28:57.398151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:22.933 qpair failed and we were unable to recover it. 
00:28:22.933 [2024-04-25 03:28:57.407946] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.933 [2024-04-25 03:28:57.408128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.933 [2024-04-25 03:28:57.408154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.933 [2024-04-25 03:28:57.408169] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.933 [2024-04-25 03:28:57.408183] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:22.933 [2024-04-25 03:28:57.408212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:22.933 qpair failed and we were unable to recover it. 
00:28:22.933 [2024-04-25 03:28:57.417940] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.933 [2024-04-25 03:28:57.418117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.933 [2024-04-25 03:28:57.418143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.933 [2024-04-25 03:28:57.418164] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.933 [2024-04-25 03:28:57.418177] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:22.933 [2024-04-25 03:28:57.418208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:22.933 qpair failed and we were unable to recover it. 
00:28:22.933 [2024-04-25 03:28:57.428006] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.933 [2024-04-25 03:28:57.428182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.933 [2024-04-25 03:28:57.428207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.933 [2024-04-25 03:28:57.428222] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.933 [2024-04-25 03:28:57.428234] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:22.933 [2024-04-25 03:28:57.428264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:22.933 qpair failed and we were unable to recover it. 
00:28:23.193 [2024-04-25 03:28:57.438014] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.193 [2024-04-25 03:28:57.438189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.193 [2024-04-25 03:28:57.438214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.193 [2024-04-25 03:28:57.438228] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.193 [2024-04-25 03:28:57.438241] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:23.193 [2024-04-25 03:28:57.438271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:23.193 qpair failed and we were unable to recover it. 
00:28:23.193 [2024-04-25 03:28:57.448054] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.193 [2024-04-25 03:28:57.448223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.193 [2024-04-25 03:28:57.448248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.193 [2024-04-25 03:28:57.448262] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.193 [2024-04-25 03:28:57.448275] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:23.193 [2024-04-25 03:28:57.448305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:23.193 qpair failed and we were unable to recover it. 
00:28:23.193 [2024-04-25 03:28:57.458116] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.193 [2024-04-25 03:28:57.458293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.193 [2024-04-25 03:28:57.458319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.193 [2024-04-25 03:28:57.458333] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.193 [2024-04-25 03:28:57.458361] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:23.193 [2024-04-25 03:28:57.458393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:23.193 qpair failed and we were unable to recover it. 
00:28:23.193 [2024-04-25 03:28:57.468087] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.193 [2024-04-25 03:28:57.468258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.193 [2024-04-25 03:28:57.468283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.193 [2024-04-25 03:28:57.468298] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.193 [2024-04-25 03:28:57.468311] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:23.193 [2024-04-25 03:28:57.468341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:23.193 qpair failed and we were unable to recover it. 
00:28:23.193 [2024-04-25 03:28:57.478124] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.193 [2024-04-25 03:28:57.478312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.193 [2024-04-25 03:28:57.478337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.193 [2024-04-25 03:28:57.478352] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.193 [2024-04-25 03:28:57.478365] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:23.193 [2024-04-25 03:28:57.478396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:23.193 qpair failed and we were unable to recover it. 
00:28:23.193 [2024-04-25 03:28:57.488152] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.193 [2024-04-25 03:28:57.488330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.193 [2024-04-25 03:28:57.488356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.193 [2024-04-25 03:28:57.488370] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.193 [2024-04-25 03:28:57.488383] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:23.193 [2024-04-25 03:28:57.488413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:23.193 qpair failed and we were unable to recover it. 
00:28:23.193 [2024-04-25 03:28:57.498200] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.193 [2024-04-25 03:28:57.498420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.193 [2024-04-25 03:28:57.498447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.193 [2024-04-25 03:28:57.498464] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.193 [2024-04-25 03:28:57.498477] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:23.193 [2024-04-25 03:28:57.498522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:23.193 qpair failed and we were unable to recover it. 
00:28:23.193 [2024-04-25 03:28:57.508208] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.193 [2024-04-25 03:28:57.508374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.193 [2024-04-25 03:28:57.508405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.193 [2024-04-25 03:28:57.508420] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.193 [2024-04-25 03:28:57.508433] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:23.193 [2024-04-25 03:28:57.508462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:23.193 qpair failed and we were unable to recover it. 
00:28:23.193 [2024-04-25 03:28:57.518262] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.193 [2024-04-25 03:28:57.518434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.193 [2024-04-25 03:28:57.518459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.193 [2024-04-25 03:28:57.518473] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.193 [2024-04-25 03:28:57.518486] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:23.193 [2024-04-25 03:28:57.518514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:23.193 qpair failed and we were unable to recover it. 
00:28:23.193 [2024-04-25 03:28:57.528280] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.193 [2024-04-25 03:28:57.528461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.193 [2024-04-25 03:28:57.528486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.193 [2024-04-25 03:28:57.528501] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.193 [2024-04-25 03:28:57.528514] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:23.193 [2024-04-25 03:28:57.528544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:23.193 qpair failed and we were unable to recover it. 
00:28:23.193 [2024-04-25 03:28:57.538319] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.194 [2024-04-25 03:28:57.538497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.194 [2024-04-25 03:28:57.538523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.194 [2024-04-25 03:28:57.538537] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.194 [2024-04-25 03:28:57.538566] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:23.194 [2024-04-25 03:28:57.538596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:23.194 qpair failed and we were unable to recover it. 
00:28:23.194 [2024-04-25 03:28:57.548328] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.194 [2024-04-25 03:28:57.548496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.194 [2024-04-25 03:28:57.548522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.194 [2024-04-25 03:28:57.548537] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.194 [2024-04-25 03:28:57.548550] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:23.194 [2024-04-25 03:28:57.548598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:23.194 qpair failed and we were unable to recover it. 
00:28:23.194 [2024-04-25 03:28:57.558394] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.194 [2024-04-25 03:28:57.558566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.194 [2024-04-25 03:28:57.558591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.194 [2024-04-25 03:28:57.558606] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.194 [2024-04-25 03:28:57.558619] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:23.194 [2024-04-25 03:28:57.558656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:23.194 qpair failed and we were unable to recover it. 
00:28:23.194 [2024-04-25 03:28:57.568420] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.194 [2024-04-25 03:28:57.568622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.194 [2024-04-25 03:28:57.568655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.194 [2024-04-25 03:28:57.568671] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.194 [2024-04-25 03:28:57.568684] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:23.194 [2024-04-25 03:28:57.568715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:23.194 qpair failed and we were unable to recover it. 
00:28:23.194 [2024-04-25 03:28:57.578405] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.194 [2024-04-25 03:28:57.578588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.194 [2024-04-25 03:28:57.578614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.194 [2024-04-25 03:28:57.578635] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.194 [2024-04-25 03:28:57.578650] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:23.194 [2024-04-25 03:28:57.578682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:23.194 qpair failed and we were unable to recover it. 
00:28:23.194 [2024-04-25 03:28:57.588432] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.194 [2024-04-25 03:28:57.588601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.194 [2024-04-25 03:28:57.588635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.194 [2024-04-25 03:28:57.588652] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.194 [2024-04-25 03:28:57.588665] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:23.194 [2024-04-25 03:28:57.588697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:23.194 qpair failed and we were unable to recover it. 
00:28:23.194 [2024-04-25 03:28:57.598468] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.194 [2024-04-25 03:28:57.598643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.194 [2024-04-25 03:28:57.598674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.194 [2024-04-25 03:28:57.598691] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.194 [2024-04-25 03:28:57.598704] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:23.194 [2024-04-25 03:28:57.598733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:23.194 qpair failed and we were unable to recover it. 
00:28:23.194 [2024-04-25 03:28:57.608524] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.194 [2024-04-25 03:28:57.608706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.194 [2024-04-25 03:28:57.608732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.194 [2024-04-25 03:28:57.608747] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.194 [2024-04-25 03:28:57.608759] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:23.194 [2024-04-25 03:28:57.608791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:23.194 qpair failed and we were unable to recover it. 
00:28:23.194 [2024-04-25 03:28:57.618534] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.194 [2024-04-25 03:28:57.618708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.194 [2024-04-25 03:28:57.618734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.194 [2024-04-25 03:28:57.618749] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.194 [2024-04-25 03:28:57.618762] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90
00:28:23.194 [2024-04-25 03:28:57.618792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:23.194 qpair failed and we were unable to recover it.
00:28:23.194 [2024-04-25 03:28:57.628537] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.194 [2024-04-25 03:28:57.628716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.194 [2024-04-25 03:28:57.628741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.194 [2024-04-25 03:28:57.628756] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.194 [2024-04-25 03:28:57.628769] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90
00:28:23.194 [2024-04-25 03:28:57.628799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:23.194 qpair failed and we were unable to recover it.
00:28:23.194 [2024-04-25 03:28:57.638587] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.194 [2024-04-25 03:28:57.638771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.194 [2024-04-25 03:28:57.638797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.194 [2024-04-25 03:28:57.638812] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.194 [2024-04-25 03:28:57.638834] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90
00:28:23.194 [2024-04-25 03:28:57.638866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:23.194 qpair failed and we were unable to recover it.
00:28:23.194 [2024-04-25 03:28:57.648614] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.194 [2024-04-25 03:28:57.648794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.194 [2024-04-25 03:28:57.648819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.194 [2024-04-25 03:28:57.648833] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.194 [2024-04-25 03:28:57.648846] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90
00:28:23.194 [2024-04-25 03:28:57.648876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:23.194 qpair failed and we were unable to recover it.
00:28:23.194 [2024-04-25 03:28:57.658647] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.194 [2024-04-25 03:28:57.658815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.194 [2024-04-25 03:28:57.658840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.194 [2024-04-25 03:28:57.658855] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.194 [2024-04-25 03:28:57.658867] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90
00:28:23.194 [2024-04-25 03:28:57.658897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:23.194 qpair failed and we were unable to recover it.
00:28:23.194 [2024-04-25 03:28:57.668670] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.194 [2024-04-25 03:28:57.668832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.195 [2024-04-25 03:28:57.668857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.195 [2024-04-25 03:28:57.668872] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.195 [2024-04-25 03:28:57.668885] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90
00:28:23.195 [2024-04-25 03:28:57.668914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:23.195 qpair failed and we were unable to recover it.
00:28:23.195 [2024-04-25 03:28:57.678716] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.195 [2024-04-25 03:28:57.678891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.195 [2024-04-25 03:28:57.678915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.195 [2024-04-25 03:28:57.678930] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.195 [2024-04-25 03:28:57.678943] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90
00:28:23.195 [2024-04-25 03:28:57.678973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:23.195 qpair failed and we were unable to recover it.
00:28:23.195 [2024-04-25 03:28:57.688761] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.195 [2024-04-25 03:28:57.688938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.195 [2024-04-25 03:28:57.688964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.195 [2024-04-25 03:28:57.688978] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.195 [2024-04-25 03:28:57.688991] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90
00:28:23.195 [2024-04-25 03:28:57.689021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:23.195 qpair failed and we were unable to recover it.
00:28:23.454 [2024-04-25 03:28:57.698752] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.454 [2024-04-25 03:28:57.698925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.454 [2024-04-25 03:28:57.698951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.454 [2024-04-25 03:28:57.698965] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.454 [2024-04-25 03:28:57.698978] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90
00:28:23.454 [2024-04-25 03:28:57.699008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:23.454 qpair failed and we were unable to recover it.
00:28:23.454 [2024-04-25 03:28:57.708852] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.454 [2024-04-25 03:28:57.709021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.454 [2024-04-25 03:28:57.709047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.454 [2024-04-25 03:28:57.709062] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.454 [2024-04-25 03:28:57.709075] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90
00:28:23.454 [2024-04-25 03:28:57.709104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:23.454 qpair failed and we were unable to recover it.
00:28:23.454 [2024-04-25 03:28:57.718815] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.454 [2024-04-25 03:28:57.718994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.454 [2024-04-25 03:28:57.719019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.454 [2024-04-25 03:28:57.719034] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.454 [2024-04-25 03:28:57.719048] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90
00:28:23.454 [2024-04-25 03:28:57.719079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:23.454 qpair failed and we were unable to recover it.
00:28:23.454 [2024-04-25 03:28:57.728817] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.454 [2024-04-25 03:28:57.728983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.454 [2024-04-25 03:28:57.729008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.454 [2024-04-25 03:28:57.729024] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.454 [2024-04-25 03:28:57.729043] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90
00:28:23.454 [2024-04-25 03:28:57.729087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:23.454 qpair failed and we were unable to recover it.
00:28:23.454 [2024-04-25 03:28:57.738845] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.454 [2024-04-25 03:28:57.739017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.454 [2024-04-25 03:28:57.739043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.454 [2024-04-25 03:28:57.739058] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.454 [2024-04-25 03:28:57.739071] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90
00:28:23.454 [2024-04-25 03:28:57.739101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:23.454 qpair failed and we were unable to recover it.
00:28:23.454 [2024-04-25 03:28:57.748904] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.454 [2024-04-25 03:28:57.749097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.454 [2024-04-25 03:28:57.749122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.454 [2024-04-25 03:28:57.749137] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.454 [2024-04-25 03:28:57.749151] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90
00:28:23.454 [2024-04-25 03:28:57.749181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:23.454 qpair failed and we were unable to recover it.
00:28:23.454 [2024-04-25 03:28:57.758931] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.454 [2024-04-25 03:28:57.759110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.454 [2024-04-25 03:28:57.759136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.455 [2024-04-25 03:28:57.759150] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.455 [2024-04-25 03:28:57.759163] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90
00:28:23.455 [2024-04-25 03:28:57.759194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:23.455 qpair failed and we were unable to recover it.
00:28:23.455 [2024-04-25 03:28:57.768931] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.455 [2024-04-25 03:28:57.769106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.455 [2024-04-25 03:28:57.769131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.455 [2024-04-25 03:28:57.769146] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.455 [2024-04-25 03:28:57.769159] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90
00:28:23.455 [2024-04-25 03:28:57.769189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:23.455 qpair failed and we were unable to recover it.
00:28:23.455 [2024-04-25 03:28:57.779067] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.455 [2024-04-25 03:28:57.779253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.455 [2024-04-25 03:28:57.779279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.455 [2024-04-25 03:28:57.779310] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.455 [2024-04-25 03:28:57.779322] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90
00:28:23.455 [2024-04-25 03:28:57.779367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:23.455 qpair failed and we were unable to recover it.
00:28:23.455 [2024-04-25 03:28:57.789026] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.455 [2024-04-25 03:28:57.789222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.455 [2024-04-25 03:28:57.789248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.455 [2024-04-25 03:28:57.789263] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.455 [2024-04-25 03:28:57.789277] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90
00:28:23.455 [2024-04-25 03:28:57.789306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:23.455 qpair failed and we were unable to recover it.
00:28:23.455 [2024-04-25 03:28:57.799065] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.455 [2024-04-25 03:28:57.799241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.455 [2024-04-25 03:28:57.799265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.455 [2024-04-25 03:28:57.799280] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.455 [2024-04-25 03:28:57.799293] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90
00:28:23.455 [2024-04-25 03:28:57.799322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:23.455 qpair failed and we were unable to recover it.
00:28:23.455 [2024-04-25 03:28:57.809058] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.455 [2024-04-25 03:28:57.809226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.455 [2024-04-25 03:28:57.809251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.455 [2024-04-25 03:28:57.809266] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.455 [2024-04-25 03:28:57.809279] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90
00:28:23.455 [2024-04-25 03:28:57.809311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:23.455 qpair failed and we were unable to recover it.
00:28:23.455 [2024-04-25 03:28:57.819111] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.455 [2024-04-25 03:28:57.819280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.455 [2024-04-25 03:28:57.819306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.455 [2024-04-25 03:28:57.819326] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.455 [2024-04-25 03:28:57.819355] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90
00:28:23.455 [2024-04-25 03:28:57.819385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:23.455 qpair failed and we were unable to recover it.
00:28:23.455 [2024-04-25 03:28:57.829145] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.455 [2024-04-25 03:28:57.829313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.455 [2024-04-25 03:28:57.829338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.455 [2024-04-25 03:28:57.829353] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.455 [2024-04-25 03:28:57.829366] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90
00:28:23.455 [2024-04-25 03:28:57.829396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:23.455 qpair failed and we were unable to recover it.
00:28:23.455 [2024-04-25 03:28:57.839146] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.455 [2024-04-25 03:28:57.839321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.455 [2024-04-25 03:28:57.839346] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.455 [2024-04-25 03:28:57.839361] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.455 [2024-04-25 03:28:57.839373] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90
00:28:23.455 [2024-04-25 03:28:57.839405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:23.455 qpair failed and we were unable to recover it.
00:28:23.455 [2024-04-25 03:28:57.849149] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.455 [2024-04-25 03:28:57.849319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.455 [2024-04-25 03:28:57.849349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.455 [2024-04-25 03:28:57.849365] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.455 [2024-04-25 03:28:57.849378] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90
00:28:23.455 [2024-04-25 03:28:57.849407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:23.455 qpair failed and we were unable to recover it.
00:28:23.455 [2024-04-25 03:28:57.859214] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.455 [2024-04-25 03:28:57.859381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.455 [2024-04-25 03:28:57.859407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.455 [2024-04-25 03:28:57.859422] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.455 [2024-04-25 03:28:57.859435] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90
00:28:23.455 [2024-04-25 03:28:57.859465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:23.455 qpair failed and we were unable to recover it.
00:28:23.455 [2024-04-25 03:28:57.869204] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.455 [2024-04-25 03:28:57.869367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.455 [2024-04-25 03:28:57.869392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.455 [2024-04-25 03:28:57.869407] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.455 [2024-04-25 03:28:57.869419] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90
00:28:23.455 [2024-04-25 03:28:57.869449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:23.455 qpair failed and we were unable to recover it.
00:28:23.455 [2024-04-25 03:28:57.879294] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.455 [2024-04-25 03:28:57.879474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.455 [2024-04-25 03:28:57.879499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.455 [2024-04-25 03:28:57.879513] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.455 [2024-04-25 03:28:57.879527] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90
00:28:23.455 [2024-04-25 03:28:57.879556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:23.455 qpair failed and we were unable to recover it.
00:28:23.455 [2024-04-25 03:28:57.889263] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.455 [2024-04-25 03:28:57.889433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.455 [2024-04-25 03:28:57.889458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.455 [2024-04-25 03:28:57.889473] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.455 [2024-04-25 03:28:57.889486] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90
00:28:23.455 [2024-04-25 03:28:57.889515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:23.456 qpair failed and we were unable to recover it.
00:28:23.456 [2024-04-25 03:28:57.899300] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.456 [2024-04-25 03:28:57.899517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.456 [2024-04-25 03:28:57.899544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.456 [2024-04-25 03:28:57.899559] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.456 [2024-04-25 03:28:57.899573] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90
00:28:23.456 [2024-04-25 03:28:57.899602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:23.456 qpair failed and we were unable to recover it.
00:28:23.456 [2024-04-25 03:28:57.909359] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.456 [2024-04-25 03:28:57.909529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.456 [2024-04-25 03:28:57.909559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.456 [2024-04-25 03:28:57.909575] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.456 [2024-04-25 03:28:57.909588] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90
00:28:23.456 [2024-04-25 03:28:57.909617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:23.456 qpair failed and we were unable to recover it.
00:28:23.456 [2024-04-25 03:28:57.919368] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.456 [2024-04-25 03:28:57.919588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.456 [2024-04-25 03:28:57.919625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.456 [2024-04-25 03:28:57.919650] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.456 [2024-04-25 03:28:57.919664] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90
00:28:23.456 [2024-04-25 03:28:57.919695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:23.456 qpair failed and we were unable to recover it.
00:28:23.456 [2024-04-25 03:28:57.929410] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.456 [2024-04-25 03:28:57.929582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.456 [2024-04-25 03:28:57.929607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.456 [2024-04-25 03:28:57.929622] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.456 [2024-04-25 03:28:57.929644] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90
00:28:23.456 [2024-04-25 03:28:57.929674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:23.456 qpair failed and we were unable to recover it.
00:28:23.456 [2024-04-25 03:28:57.939468] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.456 [2024-04-25 03:28:57.939670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.456 [2024-04-25 03:28:57.939697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.456 [2024-04-25 03:28:57.939716] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.456 [2024-04-25 03:28:57.939729] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90
00:28:23.456 [2024-04-25 03:28:57.939759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:23.456 qpair failed and we were unable to recover it.
00:28:23.456 [2024-04-25 03:28:57.949480] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.456 [2024-04-25 03:28:57.949669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.456 [2024-04-25 03:28:57.949697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.456 [2024-04-25 03:28:57.949715] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.456 [2024-04-25 03:28:57.949729] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90
00:28:23.456 [2024-04-25 03:28:57.949778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:23.456 qpair failed and we were unable to recover it.
00:28:23.715 [2024-04-25 03:28:57.959668] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.715 [2024-04-25 03:28:57.959853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.715 [2024-04-25 03:28:57.959879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.715 [2024-04-25 03:28:57.959895] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.715 [2024-04-25 03:28:57.959908] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90
00:28:23.715 [2024-04-25 03:28:57.959938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:23.715 qpair failed and we were unable to recover it.
00:28:23.715 [2024-04-25 03:28:57.969484] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.715 [2024-04-25 03:28:57.969710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.715 [2024-04-25 03:28:57.969737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.715 [2024-04-25 03:28:57.969752] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.715 [2024-04-25 03:28:57.969765] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90
00:28:23.715 [2024-04-25 03:28:57.969795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:23.715 qpair failed and we were unable to recover it.
00:28:23.715 [2024-04-25 03:28:57.979521] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.715 [2024-04-25 03:28:57.979699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.715 [2024-04-25 03:28:57.979727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.715 [2024-04-25 03:28:57.979743] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.715 [2024-04-25 03:28:57.979756] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:23.715 [2024-04-25 03:28:57.979787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:23.715 qpair failed and we were unable to recover it. 
00:28:23.715 [2024-04-25 03:28:57.989655] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.716 [2024-04-25 03:28:57.989832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.716 [2024-04-25 03:28:57.989859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.716 [2024-04-25 03:28:57.989874] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.716 [2024-04-25 03:28:57.989887] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:23.716 [2024-04-25 03:28:57.989918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:23.716 qpair failed and we were unable to recover it. 
00:28:23.716 [2024-04-25 03:28:57.999690] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.716 [2024-04-25 03:28:57.999896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.716 [2024-04-25 03:28:57.999927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.716 [2024-04-25 03:28:57.999943] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.716 [2024-04-25 03:28:57.999957] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:23.716 [2024-04-25 03:28:57.999987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:23.716 qpair failed and we were unable to recover it. 
00:28:23.716 [2024-04-25 03:28:58.009648] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.716 [2024-04-25 03:28:58.009826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.716 [2024-04-25 03:28:58.009854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.716 [2024-04-25 03:28:58.009871] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.716 [2024-04-25 03:28:58.009892] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:23.716 [2024-04-25 03:28:58.009924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:23.716 qpair failed and we were unable to recover it. 
00:28:23.716 [2024-04-25 03:28:58.019702] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.716 [2024-04-25 03:28:58.019882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.716 [2024-04-25 03:28:58.019909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.716 [2024-04-25 03:28:58.019924] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.716 [2024-04-25 03:28:58.019937] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:23.716 [2024-04-25 03:28:58.019983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:23.716 qpair failed and we were unable to recover it. 
00:28:23.716 [2024-04-25 03:28:58.029740] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.716 [2024-04-25 03:28:58.029929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.716 [2024-04-25 03:28:58.029956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.716 [2024-04-25 03:28:58.029971] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.716 [2024-04-25 03:28:58.029984] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:23.716 [2024-04-25 03:28:58.030014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:23.716 qpair failed and we were unable to recover it. 
00:28:23.716 [2024-04-25 03:28:58.039713] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.716 [2024-04-25 03:28:58.039901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.716 [2024-04-25 03:28:58.039928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.716 [2024-04-25 03:28:58.039944] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.716 [2024-04-25 03:28:58.039957] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:23.716 [2024-04-25 03:28:58.039996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:23.716 qpair failed and we were unable to recover it. 
00:28:23.716 [2024-04-25 03:28:58.049729] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.716 [2024-04-25 03:28:58.049897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.716 [2024-04-25 03:28:58.049924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.716 [2024-04-25 03:28:58.049939] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.716 [2024-04-25 03:28:58.049952] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:23.716 [2024-04-25 03:28:58.049982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:23.716 qpair failed and we were unable to recover it. 
00:28:23.716 [2024-04-25 03:28:58.059791] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.716 [2024-04-25 03:28:58.059961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.716 [2024-04-25 03:28:58.059987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.716 [2024-04-25 03:28:58.060003] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.716 [2024-04-25 03:28:58.060016] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:23.716 [2024-04-25 03:28:58.060046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:23.716 qpair failed and we were unable to recover it. 
00:28:23.716 [2024-04-25 03:28:58.069821] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.716 [2024-04-25 03:28:58.069989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.716 [2024-04-25 03:28:58.070015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.716 [2024-04-25 03:28:58.070030] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.716 [2024-04-25 03:28:58.070043] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:23.716 [2024-04-25 03:28:58.070072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:23.716 qpair failed and we were unable to recover it. 
00:28:23.716 [2024-04-25 03:28:58.079850] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.716 [2024-04-25 03:28:58.080029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.716 [2024-04-25 03:28:58.080055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.716 [2024-04-25 03:28:58.080070] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.716 [2024-04-25 03:28:58.080083] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:23.716 [2024-04-25 03:28:58.080112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:23.716 qpair failed and we were unable to recover it. 
00:28:23.716 [2024-04-25 03:28:58.089862] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.716 [2024-04-25 03:28:58.090081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.716 [2024-04-25 03:28:58.090109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.716 [2024-04-25 03:28:58.090124] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.716 [2024-04-25 03:28:58.090137] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:23.716 [2024-04-25 03:28:58.090166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:23.716 qpair failed and we were unable to recover it. 
00:28:23.716 [2024-04-25 03:28:58.099877] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.716 [2024-04-25 03:28:58.100045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.716 [2024-04-25 03:28:58.100072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.716 [2024-04-25 03:28:58.100088] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.716 [2024-04-25 03:28:58.100100] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:23.716 [2024-04-25 03:28:58.100130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:23.716 qpair failed and we were unable to recover it. 
00:28:23.716 [2024-04-25 03:28:58.109895] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.716 [2024-04-25 03:28:58.110067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.716 [2024-04-25 03:28:58.110093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.716 [2024-04-25 03:28:58.110109] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.716 [2024-04-25 03:28:58.110122] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:23.716 [2024-04-25 03:28:58.110152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:23.716 qpair failed and we were unable to recover it. 
00:28:23.716 [2024-04-25 03:28:58.119959] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.716 [2024-04-25 03:28:58.120135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.716 [2024-04-25 03:28:58.120160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.716 [2024-04-25 03:28:58.120176] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.717 [2024-04-25 03:28:58.120188] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:23.717 [2024-04-25 03:28:58.120218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:23.717 qpair failed and we were unable to recover it. 
00:28:23.717 [2024-04-25 03:28:58.129976] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.717 [2024-04-25 03:28:58.130145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.717 [2024-04-25 03:28:58.130171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.717 [2024-04-25 03:28:58.130187] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.717 [2024-04-25 03:28:58.130206] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:23.717 [2024-04-25 03:28:58.130236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:23.717 qpair failed and we were unable to recover it. 
00:28:23.717 [2024-04-25 03:28:58.139992] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.717 [2024-04-25 03:28:58.140178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.717 [2024-04-25 03:28:58.140205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.717 [2024-04-25 03:28:58.140221] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.717 [2024-04-25 03:28:58.140235] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:23.717 [2024-04-25 03:28:58.140264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:23.717 qpair failed and we were unable to recover it. 
00:28:23.717 [2024-04-25 03:28:58.149992] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.717 [2024-04-25 03:28:58.150202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.717 [2024-04-25 03:28:58.150231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.717 [2024-04-25 03:28:58.150247] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.717 [2024-04-25 03:28:58.150261] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:23.717 [2024-04-25 03:28:58.150291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:23.717 qpair failed and we were unable to recover it. 
00:28:23.717 [2024-04-25 03:28:58.160083] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.717 [2024-04-25 03:28:58.160282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.717 [2024-04-25 03:28:58.160309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.717 [2024-04-25 03:28:58.160328] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.717 [2024-04-25 03:28:58.160342] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:23.717 [2024-04-25 03:28:58.160373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:23.717 qpair failed and we were unable to recover it. 
00:28:23.717 [2024-04-25 03:28:58.170051] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.717 [2024-04-25 03:28:58.170220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.717 [2024-04-25 03:28:58.170246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.717 [2024-04-25 03:28:58.170260] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.717 [2024-04-25 03:28:58.170272] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:23.717 [2024-04-25 03:28:58.170303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:23.717 qpair failed and we were unable to recover it. 
00:28:23.717 [2024-04-25 03:28:58.180087] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.717 [2024-04-25 03:28:58.180249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.717 [2024-04-25 03:28:58.180274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.717 [2024-04-25 03:28:58.180288] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.717 [2024-04-25 03:28:58.180300] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:23.717 [2024-04-25 03:28:58.180330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:23.717 qpair failed and we were unable to recover it. 
00:28:23.717 [2024-04-25 03:28:58.190142] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.717 [2024-04-25 03:28:58.190338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.717 [2024-04-25 03:28:58.190364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.717 [2024-04-25 03:28:58.190379] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.717 [2024-04-25 03:28:58.190392] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:23.717 [2024-04-25 03:28:58.190421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:23.717 qpair failed and we were unable to recover it. 
00:28:23.717 [2024-04-25 03:28:58.200144] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.717 [2024-04-25 03:28:58.200312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.717 [2024-04-25 03:28:58.200336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.717 [2024-04-25 03:28:58.200351] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.717 [2024-04-25 03:28:58.200364] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:23.717 [2024-04-25 03:28:58.200393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:23.717 qpair failed and we were unable to recover it. 
00:28:23.717 [2024-04-25 03:28:58.210193] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.717 [2024-04-25 03:28:58.210443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.717 [2024-04-25 03:28:58.210475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.717 [2024-04-25 03:28:58.210491] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.717 [2024-04-25 03:28:58.210505] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:23.717 [2024-04-25 03:28:58.210535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:23.717 qpair failed and we were unable to recover it. 
00:28:23.976 [2024-04-25 03:28:58.220214] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.976 [2024-04-25 03:28:58.220454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.976 [2024-04-25 03:28:58.220479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.977 [2024-04-25 03:28:58.220503] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.977 [2024-04-25 03:28:58.220517] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:23.977 [2024-04-25 03:28:58.220563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:23.977 qpair failed and we were unable to recover it. 
00:28:23.977 [2024-04-25 03:28:58.230358] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.977 [2024-04-25 03:28:58.230542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.977 [2024-04-25 03:28:58.230567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.977 [2024-04-25 03:28:58.230582] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.977 [2024-04-25 03:28:58.230596] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:23.977 [2024-04-25 03:28:58.230626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:23.977 qpair failed and we were unable to recover it. 
00:28:23.977 [2024-04-25 03:28:58.240355] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.977 [2024-04-25 03:28:58.240583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.977 [2024-04-25 03:28:58.240608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.977 [2024-04-25 03:28:58.240623] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.977 [2024-04-25 03:28:58.240644] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:23.977 [2024-04-25 03:28:58.240676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:23.977 qpair failed and we were unable to recover it. 
00:28:23.977 [2024-04-25 03:28:58.250336] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.977 [2024-04-25 03:28:58.250516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.977 [2024-04-25 03:28:58.250542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.977 [2024-04-25 03:28:58.250557] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.977 [2024-04-25 03:28:58.250569] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:23.977 [2024-04-25 03:28:58.250600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:23.977 qpair failed and we were unable to recover it. 
00:28:23.977 [2024-04-25 03:28:58.260373] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.977 [2024-04-25 03:28:58.260550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.977 [2024-04-25 03:28:58.260576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.977 [2024-04-25 03:28:58.260590] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.977 [2024-04-25 03:28:58.260603] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90
00:28:23.977 [2024-04-25 03:28:58.260645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:23.977 qpair failed and we were unable to recover it.
00:28:23.977 [2024-04-25 03:28:58.270366] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.977 [2024-04-25 03:28:58.270534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.977 [2024-04-25 03:28:58.270559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.977 [2024-04-25 03:28:58.270574] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.977 [2024-04-25 03:28:58.270587] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90
00:28:23.977 [2024-04-25 03:28:58.270617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:23.977 qpair failed and we were unable to recover it.
00:28:23.977 [2024-04-25 03:28:58.280407] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.977 [2024-04-25 03:28:58.280592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.977 [2024-04-25 03:28:58.280617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.977 [2024-04-25 03:28:58.280637] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.977 [2024-04-25 03:28:58.280652] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90
00:28:23.977 [2024-04-25 03:28:58.280682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:23.977 qpair failed and we were unable to recover it.
00:28:23.977 [2024-04-25 03:28:58.290441] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.977 [2024-04-25 03:28:58.290618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.977 [2024-04-25 03:28:58.290650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.977 [2024-04-25 03:28:58.290669] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.977 [2024-04-25 03:28:58.290682] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90
00:28:23.977 [2024-04-25 03:28:58.290713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:23.977 qpair failed and we were unable to recover it.
00:28:23.977 [2024-04-25 03:28:58.300457] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.977 [2024-04-25 03:28:58.300641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.977 [2024-04-25 03:28:58.300666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.977 [2024-04-25 03:28:58.300681] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.977 [2024-04-25 03:28:58.300694] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90
00:28:23.977 [2024-04-25 03:28:58.300725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:23.977 qpair failed and we were unable to recover it.
00:28:23.977 [2024-04-25 03:28:58.310523] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.977 [2024-04-25 03:28:58.310704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.977 [2024-04-25 03:28:58.310734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.977 [2024-04-25 03:28:58.310752] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.977 [2024-04-25 03:28:58.310766] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90
00:28:23.977 [2024-04-25 03:28:58.310798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:23.977 qpair failed and we were unable to recover it.
00:28:23.977 [2024-04-25 03:28:58.320552] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.977 [2024-04-25 03:28:58.320740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.977 [2024-04-25 03:28:58.320765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.977 [2024-04-25 03:28:58.320780] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.977 [2024-04-25 03:28:58.320793] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90
00:28:23.977 [2024-04-25 03:28:58.320823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:23.977 qpair failed and we were unable to recover it.
00:28:23.977 [2024-04-25 03:28:58.330539] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.977 [2024-04-25 03:28:58.330711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.977 [2024-04-25 03:28:58.330736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.977 [2024-04-25 03:28:58.330751] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.977 [2024-04-25 03:28:58.330764] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90
00:28:23.977 [2024-04-25 03:28:58.330793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:23.977 qpair failed and we were unable to recover it.
00:28:23.977 [2024-04-25 03:28:58.340602] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.977 [2024-04-25 03:28:58.340792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.977 [2024-04-25 03:28:58.340818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.977 [2024-04-25 03:28:58.340833] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.977 [2024-04-25 03:28:58.340847] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90
00:28:23.977 [2024-04-25 03:28:58.340877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:23.977 qpair failed and we were unable to recover it.
00:28:23.977 [2024-04-25 03:28:58.350636] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.977 [2024-04-25 03:28:58.350802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.977 [2024-04-25 03:28:58.350827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.977 [2024-04-25 03:28:58.350842] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.977 [2024-04-25 03:28:58.350855] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90
00:28:23.977 [2024-04-25 03:28:58.350904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:23.977 qpair failed and we were unable to recover it.
00:28:23.977 [2024-04-25 03:28:58.360672] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.978 [2024-04-25 03:28:58.360853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.978 [2024-04-25 03:28:58.360882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.978 [2024-04-25 03:28:58.360898] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.978 [2024-04-25 03:28:58.360911] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90
00:28:23.978 [2024-04-25 03:28:58.360941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:23.978 qpair failed and we were unable to recover it.
00:28:23.978 [2024-04-25 03:28:58.370747] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.978 [2024-04-25 03:28:58.370922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.978 [2024-04-25 03:28:58.370947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.978 [2024-04-25 03:28:58.370962] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.978 [2024-04-25 03:28:58.370975] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90
00:28:23.978 [2024-04-25 03:28:58.371005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:23.978 qpair failed and we were unable to recover it.
00:28:23.978 [2024-04-25 03:28:58.380776] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.978 [2024-04-25 03:28:58.380962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.978 [2024-04-25 03:28:58.380987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.978 [2024-04-25 03:28:58.381002] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.978 [2024-04-25 03:28:58.381015] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90
00:28:23.978 [2024-04-25 03:28:58.381045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:23.978 qpair failed and we were unable to recover it.
00:28:23.978 [2024-04-25 03:28:58.390736] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.978 [2024-04-25 03:28:58.390909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.978 [2024-04-25 03:28:58.390933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.978 [2024-04-25 03:28:58.390947] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.978 [2024-04-25 03:28:58.390960] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90
00:28:23.978 [2024-04-25 03:28:58.390989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:23.978 qpair failed and we were unable to recover it.
00:28:23.978 [2024-04-25 03:28:58.400768] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.978 [2024-04-25 03:28:58.400944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.978 [2024-04-25 03:28:58.400974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.978 [2024-04-25 03:28:58.400989] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.978 [2024-04-25 03:28:58.401002] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90
00:28:23.978 [2024-04-25 03:28:58.401032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:23.978 qpair failed and we were unable to recover it.
00:28:23.978 [2024-04-25 03:28:58.410784] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.978 [2024-04-25 03:28:58.411008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.978 [2024-04-25 03:28:58.411037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.978 [2024-04-25 03:28:58.411053] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.978 [2024-04-25 03:28:58.411066] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90
00:28:23.978 [2024-04-25 03:28:58.411096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:23.978 qpair failed and we were unable to recover it.
00:28:23.978 [2024-04-25 03:28:58.411235] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5a860 is same with the state(5) to be set
00:28:23.978 [2024-04-25 03:28:58.420861] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.978 [2024-04-25 03:28:58.421030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.978 [2024-04-25 03:28:58.421062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.978 [2024-04-25 03:28:58.421077] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.978 [2024-04-25 03:28:58.421090] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a70000b90
00:28:23.978 [2024-04-25 03:28:58.421134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:23.978 qpair failed and we were unable to recover it.
00:28:23.978 [2024-04-25 03:28:58.430902] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.978 [2024-04-25 03:28:58.431077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.978 [2024-04-25 03:28:58.431106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.978 [2024-04-25 03:28:58.431121] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.978 [2024-04-25 03:28:58.431134] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a70000b90
00:28:23.978 [2024-04-25 03:28:58.431163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:23.978 qpair failed and we were unable to recover it.
00:28:23.978 [2024-04-25 03:28:58.440909] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.978 [2024-04-25 03:28:58.441081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.978 [2024-04-25 03:28:58.441109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.978 [2024-04-25 03:28:58.441124] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.978 [2024-04-25 03:28:58.441144] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a70000b90
00:28:23.978 [2024-04-25 03:28:58.441175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:23.978 qpair failed and we were unable to recover it.
00:28:23.978 [2024-04-25 03:28:58.450979] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.978 [2024-04-25 03:28:58.451160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.978 [2024-04-25 03:28:58.451187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.978 [2024-04-25 03:28:58.451203] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.978 [2024-04-25 03:28:58.451215] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a70000b90
00:28:23.978 [2024-04-25 03:28:58.451245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:23.978 qpair failed and we were unable to recover it.
00:28:23.978 [2024-04-25 03:28:58.461003] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.978 [2024-04-25 03:28:58.461204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.978 [2024-04-25 03:28:58.461232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.978 [2024-04-25 03:28:58.461248] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.978 [2024-04-25 03:28:58.461262] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a70000b90
00:28:23.978 [2024-04-25 03:28:58.461293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:23.978 qpair failed and we were unable to recover it.
00:28:23.978 [2024-04-25 03:28:58.471015] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.978 [2024-04-25 03:28:58.471184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.978 [2024-04-25 03:28:58.471210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.978 [2024-04-25 03:28:58.471225] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.978 [2024-04-25 03:28:58.471237] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a70000b90
00:28:23.978 [2024-04-25 03:28:58.471267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:23.978 qpair failed and we were unable to recover it.
00:28:24.238 [2024-04-25 03:28:58.481081] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.238 [2024-04-25 03:28:58.481265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.238 [2024-04-25 03:28:58.481292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.238 [2024-04-25 03:28:58.481308] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.238 [2024-04-25 03:28:58.481320] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a70000b90
00:28:24.238 [2024-04-25 03:28:58.481351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:24.238 qpair failed and we were unable to recover it.
00:28:24.238 [2024-04-25 03:28:58.491035] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.238 [2024-04-25 03:28:58.491214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.238 [2024-04-25 03:28:58.491241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.238 [2024-04-25 03:28:58.491256] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.238 [2024-04-25 03:28:58.491269] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a70000b90
00:28:24.238 [2024-04-25 03:28:58.491299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:24.238 qpair failed and we were unable to recover it.
00:28:24.238 [2024-04-25 03:28:58.501066] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.238 [2024-04-25 03:28:58.501287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.238 [2024-04-25 03:28:58.501314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.238 [2024-04-25 03:28:58.501330] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.238 [2024-04-25 03:28:58.501343] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a70000b90
00:28:24.238 [2024-04-25 03:28:58.501375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:24.238 qpair failed and we were unable to recover it.
00:28:24.238 [2024-04-25 03:28:58.511134] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.238 [2024-04-25 03:28:58.511303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.238 [2024-04-25 03:28:58.511330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.238 [2024-04-25 03:28:58.511346] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.238 [2024-04-25 03:28:58.511359] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a70000b90
00:28:24.238 [2024-04-25 03:28:58.511389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:24.238 qpair failed and we were unable to recover it.
00:28:24.238 [2024-04-25 03:28:58.521160] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.238 [2024-04-25 03:28:58.521339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.238 [2024-04-25 03:28:58.521366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.238 [2024-04-25 03:28:58.521381] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.238 [2024-04-25 03:28:58.521394] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a70000b90
00:28:24.238 [2024-04-25 03:28:58.521424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:24.238 qpair failed and we were unable to recover it.
00:28:24.238 [2024-04-25 03:28:58.531145] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.238 [2024-04-25 03:28:58.531316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.238 [2024-04-25 03:28:58.531343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.238 [2024-04-25 03:28:58.531364] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.238 [2024-04-25 03:28:58.531378] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a70000b90
00:28:24.238 [2024-04-25 03:28:58.531407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:24.238 qpair failed and we were unable to recover it.
00:28:24.238 [2024-04-25 03:28:58.541196] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.238 [2024-04-25 03:28:58.541409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.238 [2024-04-25 03:28:58.541435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.238 [2024-04-25 03:28:58.541450] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.238 [2024-04-25 03:28:58.541463] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a70000b90
00:28:24.238 [2024-04-25 03:28:58.541493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:24.238 qpair failed and we were unable to recover it.
00:28:24.238 [2024-04-25 03:28:58.551207] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.238 [2024-04-25 03:28:58.551388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.238 [2024-04-25 03:28:58.551414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.238 [2024-04-25 03:28:58.551429] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.238 [2024-04-25 03:28:58.551443] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a70000b90
00:28:24.238 [2024-04-25 03:28:58.551472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:24.238 qpair failed and we were unable to recover it.
00:28:24.238 [2024-04-25 03:28:58.561238] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.238 [2024-04-25 03:28:58.561454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.238 [2024-04-25 03:28:58.561481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.238 [2024-04-25 03:28:58.561497] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.239 [2024-04-25 03:28:58.561509] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a70000b90
00:28:24.239 [2024-04-25 03:28:58.561539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:24.239 qpair failed and we were unable to recover it.
00:28:24.239 [2024-04-25 03:28:58.571273] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.239 [2024-04-25 03:28:58.571447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.239 [2024-04-25 03:28:58.571473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.239 [2024-04-25 03:28:58.571488] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.239 [2024-04-25 03:28:58.571501] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a70000b90
00:28:24.239 [2024-04-25 03:28:58.571531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:24.239 qpair failed and we were unable to recover it.
00:28:24.239 [2024-04-25 03:28:58.581326] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.239 [2024-04-25 03:28:58.581497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.239 [2024-04-25 03:28:58.581524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.239 [2024-04-25 03:28:58.581539] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.239 [2024-04-25 03:28:58.581552] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a70000b90
00:28:24.239 [2024-04-25 03:28:58.581582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:24.239 qpair failed and we were unable to recover it.
00:28:24.239 [2024-04-25 03:28:58.591367] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.239 [2024-04-25 03:28:58.591560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.239 [2024-04-25 03:28:58.591587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.239 [2024-04-25 03:28:58.591602] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.239 [2024-04-25 03:28:58.591616] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a70000b90
00:28:24.239 [2024-04-25 03:28:58.591655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:24.239 qpair failed and we were unable to recover it.
00:28:24.239 [2024-04-25 03:28:58.601364] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.239 [2024-04-25 03:28:58.601537] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.239 [2024-04-25 03:28:58.601562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.239 [2024-04-25 03:28:58.601577] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.239 [2024-04-25 03:28:58.601590] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a70000b90
00:28:24.239 [2024-04-25 03:28:58.601620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:24.239 qpair failed and we were unable to recover it.
00:28:24.239 [2024-04-25 03:28:58.611371] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.239 [2024-04-25 03:28:58.611538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.239 [2024-04-25 03:28:58.611565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.239 [2024-04-25 03:28:58.611580] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.239 [2024-04-25 03:28:58.611593] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a70000b90
00:28:24.239 [2024-04-25 03:28:58.611623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:24.239 qpair failed and we were unable to recover it.
00:28:24.239 [2024-04-25 03:28:58.621401] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.239 [2024-04-25 03:28:58.621569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.239 [2024-04-25 03:28:58.621601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.239 [2024-04-25 03:28:58.621617] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.239 [2024-04-25 03:28:58.621638] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a70000b90 00:28:24.239 [2024-04-25 03:28:58.621670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:24.239 qpair failed and we were unable to recover it. 
00:28:24.239 [2024-04-25 03:28:58.631443] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.239 [2024-04-25 03:28:58.631692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.239 [2024-04-25 03:28:58.631718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.239 [2024-04-25 03:28:58.631733] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.239 [2024-04-25 03:28:58.631746] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a70000b90 00:28:24.239 [2024-04-25 03:28:58.631775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:24.239 qpair failed and we were unable to recover it. 
00:28:24.239 [2024-04-25 03:28:58.641554] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.239 [2024-04-25 03:28:58.641731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.239 [2024-04-25 03:28:58.641757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.239 [2024-04-25 03:28:58.641772] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.239 [2024-04-25 03:28:58.641785] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a70000b90 00:28:24.239 [2024-04-25 03:28:58.641827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:24.239 qpair failed and we were unable to recover it. 
00:28:24.239 [2024-04-25 03:28:58.651494] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.239 [2024-04-25 03:28:58.651717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.239 [2024-04-25 03:28:58.651744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.239 [2024-04-25 03:28:58.651760] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.239 [2024-04-25 03:28:58.651773] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a70000b90 00:28:24.239 [2024-04-25 03:28:58.651803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:24.239 qpair failed and we were unable to recover it. 
00:28:24.239 [2024-04-25 03:28:58.661524] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.239 [2024-04-25 03:28:58.661705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.239 [2024-04-25 03:28:58.661732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.239 [2024-04-25 03:28:58.661747] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.239 [2024-04-25 03:28:58.661764] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a70000b90 00:28:24.239 [2024-04-25 03:28:58.661799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:24.239 qpair failed and we were unable to recover it. 
00:28:24.239 [2024-04-25 03:28:58.671613] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.239 [2024-04-25 03:28:58.671790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.239 [2024-04-25 03:28:58.671817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.239 [2024-04-25 03:28:58.671832] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.239 [2024-04-25 03:28:58.671845] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a70000b90 00:28:24.239 [2024-04-25 03:28:58.671874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:24.239 qpair failed and we were unable to recover it. 
00:28:24.239 [2024-04-25 03:28:58.681578] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.239 [2024-04-25 03:28:58.681761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.239 [2024-04-25 03:28:58.681787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.239 [2024-04-25 03:28:58.681802] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.239 [2024-04-25 03:28:58.681815] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a70000b90 00:28:24.239 [2024-04-25 03:28:58.681845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:24.239 qpair failed and we were unable to recover it. 
00:28:24.239 [2024-04-25 03:28:58.691617] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.239 [2024-04-25 03:28:58.691795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.239 [2024-04-25 03:28:58.691822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.239 [2024-04-25 03:28:58.691838] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.239 [2024-04-25 03:28:58.691851] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a70000b90 00:28:24.239 [2024-04-25 03:28:58.691882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:24.239 qpair failed and we were unable to recover it. 
00:28:24.239 [2024-04-25 03:28:58.701664] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.240 [2024-04-25 03:28:58.701834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.240 [2024-04-25 03:28:58.701860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.240 [2024-04-25 03:28:58.701876] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.240 [2024-04-25 03:28:58.701888] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a70000b90 00:28:24.240 [2024-04-25 03:28:58.701918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:24.240 qpair failed and we were unable to recover it. 
00:28:24.240 [2024-04-25 03:28:58.711695] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.240 [2024-04-25 03:28:58.711911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.240 [2024-04-25 03:28:58.711943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.240 [2024-04-25 03:28:58.711959] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.240 [2024-04-25 03:28:58.711972] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a70000b90 00:28:24.240 [2024-04-25 03:28:58.712002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:24.240 qpair failed and we were unable to recover it. 
00:28:24.240 [2024-04-25 03:28:58.721792] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.240 [2024-04-25 03:28:58.721974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.240 [2024-04-25 03:28:58.722000] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.240 [2024-04-25 03:28:58.722015] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.240 [2024-04-25 03:28:58.722027] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a70000b90 00:28:24.240 [2024-04-25 03:28:58.722057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:24.240 qpair failed and we were unable to recover it. 
00:28:24.240 [2024-04-25 03:28:58.731716] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.240 [2024-04-25 03:28:58.731933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.240 [2024-04-25 03:28:58.731958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.240 [2024-04-25 03:28:58.731974] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.240 [2024-04-25 03:28:58.731987] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a70000b90 00:28:24.240 [2024-04-25 03:28:58.732017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:24.240 qpair failed and we were unable to recover it. 
00:28:24.499 [2024-04-25 03:28:58.741767] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.499 [2024-04-25 03:28:58.741983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.499 [2024-04-25 03:28:58.742011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.499 [2024-04-25 03:28:58.742026] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.499 [2024-04-25 03:28:58.742039] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a70000b90 00:28:24.499 [2024-04-25 03:28:58.742071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:24.499 qpair failed and we were unable to recover it. 
00:28:24.499 [2024-04-25 03:28:58.751793] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.499 [2024-04-25 03:28:58.751963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.499 [2024-04-25 03:28:58.751991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.499 [2024-04-25 03:28:58.752007] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.500 [2024-04-25 03:28:58.752025] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a70000b90 00:28:24.500 [2024-04-25 03:28:58.752058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:24.500 qpair failed and we were unable to recover it. 
00:28:24.500 [2024-04-25 03:28:58.761853] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.500 [2024-04-25 03:28:58.762069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.500 [2024-04-25 03:28:58.762096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.500 [2024-04-25 03:28:58.762112] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.500 [2024-04-25 03:28:58.762124] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a70000b90 00:28:24.500 [2024-04-25 03:28:58.762154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:24.500 qpair failed and we were unable to recover it. 
00:28:24.500 [2024-04-25 03:28:58.771938] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.500 [2024-04-25 03:28:58.772118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.500 [2024-04-25 03:28:58.772145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.500 [2024-04-25 03:28:58.772161] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.500 [2024-04-25 03:28:58.772174] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a70000b90 00:28:24.500 [2024-04-25 03:28:58.772216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:24.500 qpair failed and we were unable to recover it. 
00:28:24.500 [2024-04-25 03:28:58.781863] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.500 [2024-04-25 03:28:58.782036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.500 [2024-04-25 03:28:58.782062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.500 [2024-04-25 03:28:58.782077] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.500 [2024-04-25 03:28:58.782090] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a70000b90 00:28:24.500 [2024-04-25 03:28:58.782121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:24.500 qpair failed and we were unable to recover it. 
00:28:24.500 [2024-04-25 03:28:58.791898] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.500 [2024-04-25 03:28:58.792068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.500 [2024-04-25 03:28:58.792095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.500 [2024-04-25 03:28:58.792111] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.500 [2024-04-25 03:28:58.792123] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a70000b90 00:28:24.500 [2024-04-25 03:28:58.792153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:24.500 qpair failed and we were unable to recover it. 
00:28:24.500 [2024-04-25 03:28:58.801963] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.500 [2024-04-25 03:28:58.802147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.500 [2024-04-25 03:28:58.802174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.500 [2024-04-25 03:28:58.802189] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.500 [2024-04-25 03:28:58.802206] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a70000b90 00:28:24.500 [2024-04-25 03:28:58.802236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:24.500 qpair failed and we were unable to recover it. 
00:28:24.500 [2024-04-25 03:28:58.811984] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.500 [2024-04-25 03:28:58.812192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.500 [2024-04-25 03:28:58.812219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.500 [2024-04-25 03:28:58.812235] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.500 [2024-04-25 03:28:58.812248] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a70000b90 00:28:24.500 [2024-04-25 03:28:58.812277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:24.500 qpair failed and we were unable to recover it. 
00:28:24.500 [2024-04-25 03:28:58.822015] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.500 [2024-04-25 03:28:58.822228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.500 [2024-04-25 03:28:58.822253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.500 [2024-04-25 03:28:58.822268] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.500 [2024-04-25 03:28:58.822281] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a70000b90 00:28:24.500 [2024-04-25 03:28:58.822310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:24.500 qpair failed and we were unable to recover it. 
00:28:24.500 [2024-04-25 03:28:58.831989] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.500 [2024-04-25 03:28:58.832169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.500 [2024-04-25 03:28:58.832195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.500 [2024-04-25 03:28:58.832210] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.500 [2024-04-25 03:28:58.832223] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a70000b90 00:28:24.500 [2024-04-25 03:28:58.832253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:24.500 qpair failed and we were unable to recover it. 
00:28:24.500 [2024-04-25 03:28:58.842041] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.500 [2024-04-25 03:28:58.842217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.500 [2024-04-25 03:28:58.842245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.500 [2024-04-25 03:28:58.842260] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.500 [2024-04-25 03:28:58.842278] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a70000b90 00:28:24.500 [2024-04-25 03:28:58.842309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:24.500 qpair failed and we were unable to recover it. 
00:28:24.500 [2024-04-25 03:28:58.852061] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.500 [2024-04-25 03:28:58.852238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.500 [2024-04-25 03:28:58.852264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.500 [2024-04-25 03:28:58.852280] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.500 [2024-04-25 03:28:58.852293] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a70000b90 00:28:24.500 [2024-04-25 03:28:58.852323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:24.500 qpair failed and we were unable to recover it. 
00:28:24.500 [2024-04-25 03:28:58.862144] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.500 [2024-04-25 03:28:58.862321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.500 [2024-04-25 03:28:58.862349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.500 [2024-04-25 03:28:58.862368] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.500 [2024-04-25 03:28:58.862381] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a70000b90 00:28:24.500 [2024-04-25 03:28:58.862413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:24.500 qpair failed and we were unable to recover it. 
00:28:24.500 [2024-04-25 03:28:58.872096] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.500 [2024-04-25 03:28:58.872278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.500 [2024-04-25 03:28:58.872306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.500 [2024-04-25 03:28:58.872321] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.500 [2024-04-25 03:28:58.872334] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a70000b90 00:28:24.500 [2024-04-25 03:28:58.872364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:24.500 qpair failed and we were unable to recover it. 
00:28:24.500 [2024-04-25 03:28:58.882163] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.500 [2024-04-25 03:28:58.882341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.500 [2024-04-25 03:28:58.882368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.500 [2024-04-25 03:28:58.882383] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.500 [2024-04-25 03:28:58.882396] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a70000b90 00:28:24.500 [2024-04-25 03:28:58.882425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:24.500 qpair failed and we were unable to recover it. 
00:28:24.500 [2024-04-25 03:28:58.892152] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.501 [2024-04-25 03:28:58.892371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.501 [2024-04-25 03:28:58.892398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.501 [2024-04-25 03:28:58.892413] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.501 [2024-04-25 03:28:58.892426] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a70000b90 00:28:24.501 [2024-04-25 03:28:58.892456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:24.501 qpair failed and we were unable to recover it. 
00:28:24.501 [2024-04-25 03:28:58.902210] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.501 [2024-04-25 03:28:58.902382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.501 [2024-04-25 03:28:58.902408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.501 [2024-04-25 03:28:58.902423] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.501 [2024-04-25 03:28:58.902436] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a70000b90 00:28:24.501 [2024-04-25 03:28:58.902466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:24.501 qpair failed and we were unable to recover it. 
00:28:24.501 [2024-04-25 03:28:58.912208] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.501 [2024-04-25 03:28:58.912383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.501 [2024-04-25 03:28:58.912409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.501 [2024-04-25 03:28:58.912425] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.501 [2024-04-25 03:28:58.912437] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a70000b90 00:28:24.501 [2024-04-25 03:28:58.912466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:24.501 qpair failed and we were unable to recover it. 
00:28:24.501 [2024-04-25 03:28:58.922275] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.501 [2024-04-25 03:28:58.922495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.501 [2024-04-25 03:28:58.922522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.501 [2024-04-25 03:28:58.922537] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.501 [2024-04-25 03:28:58.922550] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a70000b90 00:28:24.501 [2024-04-25 03:28:58.922579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:24.501 qpair failed and we were unable to recover it. 
00:28:24.501 [2024-04-25 03:28:58.932276] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.501 [2024-04-25 03:28:58.932492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.501 [2024-04-25 03:28:58.932519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.501 [2024-04-25 03:28:58.932540] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.501 [2024-04-25 03:28:58.932553] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a70000b90 00:28:24.501 [2024-04-25 03:28:58.932584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:24.501 qpair failed and we were unable to recover it. 
00:28:24.501 [2024-04-25 03:28:58.942316] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.501 [2024-04-25 03:28:58.942488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.501 [2024-04-25 03:28:58.942514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.501 [2024-04-25 03:28:58.942529] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.501 [2024-04-25 03:28:58.942542] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a70000b90 00:28:24.501 [2024-04-25 03:28:58.942572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:24.501 qpair failed and we were unable to recover it. 
00:28:24.501 [2024-04-25 03:28:58.952355] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.501 [2024-04-25 03:28:58.952575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.501 [2024-04-25 03:28:58.952607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.501 [2024-04-25 03:28:58.952623] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.501 [2024-04-25 03:28:58.952644] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:24.501 [2024-04-25 03:28:58.952677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:24.501 qpair failed and we were unable to recover it. 
00:28:24.501 [2024-04-25 03:28:58.962503] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.501 [2024-04-25 03:28:58.962697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.501 [2024-04-25 03:28:58.962726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.501 [2024-04-25 03:28:58.962741] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.501 [2024-04-25 03:28:58.962754] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a80000b90 00:28:24.501 [2024-04-25 03:28:58.962785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:24.501 qpair failed and we were unable to recover it. 
00:28:24.501 [2024-04-25 03:28:58.972428] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.501 [2024-04-25 03:28:58.972604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.501 [2024-04-25 03:28:58.972644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.501 [2024-04-25 03:28:58.972662] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.501 [2024-04-25 03:28:58.972674] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a78000b90 00:28:24.501 [2024-04-25 03:28:58.972706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:24.501 qpair failed and we were unable to recover it. 
00:28:24.501 [2024-04-25 03:28:58.982445] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.501 [2024-04-25 03:28:58.982616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.501 [2024-04-25 03:28:58.982652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.501 [2024-04-25 03:28:58.982669] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.501 [2024-04-25 03:28:58.982682] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2a78000b90 00:28:24.501 [2024-04-25 03:28:58.982713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:24.501 qpair failed and we were unable to recover it. 
00:28:24.501 [2024-04-25 03:28:58.992470] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.501 [2024-04-25 03:28:58.992644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.501 [2024-04-25 03:28:58.992677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.501 [2024-04-25 03:28:58.992693] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.501 [2024-04-25 03:28:58.992706] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:24.501 [2024-04-25 03:28:58.992736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:24.501 qpair failed and we were unable to recover it. 
00:28:24.760 [2024-04-25 03:28:59.002509] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.760 [2024-04-25 03:28:59.002703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.760 [2024-04-25 03:28:59.002731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.760 [2024-04-25 03:28:59.002747] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.760 [2024-04-25 03:28:59.002760] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd4cf30 00:28:24.760 [2024-04-25 03:28:59.002789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:24.760 qpair failed and we were unable to recover it. 00:28:24.760 [2024-04-25 03:28:59.003052] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd5a860 (9): Bad file descriptor 00:28:24.760 Initializing NVMe Controllers 00:28:24.760 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:24.760 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:24.760 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:28:24.760 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:28:24.760 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:28:24.760 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:28:24.760 Initialization complete. Launching workers. 
00:28:24.760 Starting thread on core 1 00:28:24.760 Starting thread on core 2 00:28:24.760 Starting thread on core 3 00:28:24.760 Starting thread on core 0 00:28:24.760 03:28:59 -- host/target_disconnect.sh@59 -- # sync 00:28:24.760 00:28:24.760 real 0m10.752s 00:28:24.760 user 0m16.946s 00:28:24.760 sys 0m5.600s 00:28:24.760 03:28:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:24.760 03:28:59 -- common/autotest_common.sh@10 -- # set +x 00:28:24.760 ************************************ 00:28:24.760 END TEST nvmf_target_disconnect_tc2 00:28:24.760 ************************************ 00:28:24.760 03:28:59 -- host/target_disconnect.sh@80 -- # '[' -n '' ']' 00:28:24.760 03:28:59 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:28:24.760 03:28:59 -- host/target_disconnect.sh@85 -- # nvmftestfini 00:28:24.760 03:28:59 -- nvmf/common.sh@477 -- # nvmfcleanup 00:28:24.760 03:28:59 -- nvmf/common.sh@117 -- # sync 00:28:24.760 03:28:59 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:24.760 03:28:59 -- nvmf/common.sh@120 -- # set +e 00:28:24.760 03:28:59 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:24.760 03:28:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:24.760 rmmod nvme_tcp 00:28:24.760 rmmod nvme_fabrics 00:28:24.760 rmmod nvme_keyring 00:28:24.760 03:28:59 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:24.760 03:28:59 -- nvmf/common.sh@124 -- # set -e 00:28:24.760 03:28:59 -- nvmf/common.sh@125 -- # return 0 00:28:24.761 03:28:59 -- nvmf/common.sh@478 -- # '[' -n 1623821 ']' 00:28:24.761 03:28:59 -- nvmf/common.sh@479 -- # killprocess 1623821 00:28:24.761 03:28:59 -- common/autotest_common.sh@936 -- # '[' -z 1623821 ']' 00:28:24.761 03:28:59 -- common/autotest_common.sh@940 -- # kill -0 1623821 00:28:24.761 03:28:59 -- common/autotest_common.sh@941 -- # uname 00:28:24.761 03:28:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:24.761 03:28:59 -- common/autotest_common.sh@942 -- # 
ps --no-headers -o comm= 1623821 00:28:24.761 03:28:59 -- common/autotest_common.sh@942 -- # process_name=reactor_4 00:28:24.761 03:28:59 -- common/autotest_common.sh@946 -- # '[' reactor_4 = sudo ']' 00:28:24.761 03:28:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1623821' 00:28:24.761 killing process with pid 1623821 00:28:24.761 03:28:59 -- common/autotest_common.sh@955 -- # kill 1623821 00:28:24.761 03:28:59 -- common/autotest_common.sh@960 -- # wait 1623821 00:28:25.020 03:28:59 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:28:25.020 03:28:59 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:28:25.020 03:28:59 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:28:25.020 03:28:59 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:25.020 03:28:59 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:25.020 03:28:59 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:25.020 03:28:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:25.020 03:28:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:26.924 03:29:01 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:26.924 00:28:26.924 real 0m15.710s 00:28:26.924 user 0m42.841s 00:28:26.924 sys 0m7.736s 00:28:26.924 03:29:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:26.924 03:29:01 -- common/autotest_common.sh@10 -- # set +x 00:28:26.924 ************************************ 00:28:26.924 END TEST nvmf_target_disconnect 00:28:26.924 ************************************ 00:28:27.183 03:29:01 -- nvmf/nvmf.sh@123 -- # timing_exit host 00:28:27.183 03:29:01 -- common/autotest_common.sh@716 -- # xtrace_disable 00:28:27.183 03:29:01 -- common/autotest_common.sh@10 -- # set +x 00:28:27.183 03:29:01 -- nvmf/nvmf.sh@125 -- # trap - SIGINT SIGTERM EXIT 00:28:27.183 00:28:27.183 real 21m54.453s 00:28:27.183 user 59m46.129s 00:28:27.183 sys 5m13.515s 00:28:27.183 03:29:01 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:28:27.183 03:29:01 -- common/autotest_common.sh@10 -- # set +x 00:28:27.183 ************************************ 00:28:27.183 END TEST nvmf_tcp 00:28:27.183 ************************************ 00:28:27.183 03:29:01 -- spdk/autotest.sh@286 -- # [[ 0 -eq 0 ]] 00:28:27.183 03:29:01 -- spdk/autotest.sh@287 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:28:27.183 03:29:01 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:28:27.183 03:29:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:27.183 03:29:01 -- common/autotest_common.sh@10 -- # set +x 00:28:27.183 ************************************ 00:28:27.183 START TEST spdkcli_nvmf_tcp 00:28:27.183 ************************************ 00:28:27.183 03:29:01 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:28:27.183 * Looking for test storage... 
00:28:27.183 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:28:27.183 03:29:01 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:28:27.183 03:29:01 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:28:27.183 03:29:01 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:28:27.183 03:29:01 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:27.183 03:29:01 -- nvmf/common.sh@7 -- # uname -s 00:28:27.183 03:29:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:27.183 03:29:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:27.183 03:29:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:27.183 03:29:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:27.183 03:29:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:27.183 03:29:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:27.183 03:29:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:27.183 03:29:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:27.183 03:29:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:27.183 03:29:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:27.183 03:29:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:27.183 03:29:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:27.183 03:29:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:27.183 03:29:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:27.183 03:29:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:27.183 03:29:01 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:27.183 03:29:01 -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:27.183 03:29:01 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:27.183 03:29:01 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:27.183 03:29:01 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:27.183 03:29:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.183 03:29:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.183 03:29:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.183 03:29:01 -- paths/export.sh@5 -- # export PATH 00:28:27.183 03:29:01 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.183 03:29:01 -- nvmf/common.sh@47 -- # : 0 00:28:27.183 03:29:01 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:27.183 03:29:01 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:27.183 03:29:01 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:27.183 03:29:01 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:27.183 03:29:01 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:27.183 03:29:01 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:27.183 03:29:01 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:27.183 03:29:01 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:27.183 03:29:01 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:28:27.183 03:29:01 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:28:27.183 03:29:01 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:28:27.183 03:29:01 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:28:27.183 03:29:01 -- common/autotest_common.sh@710 -- # xtrace_disable 00:28:27.183 03:29:01 -- common/autotest_common.sh@10 -- # set +x 00:28:27.183 03:29:01 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:28:27.183 03:29:01 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1625020 00:28:27.183 03:29:01 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:28:27.183 03:29:01 -- spdkcli/common.sh@34 -- # waitforlisten 1625020 00:28:27.183 03:29:01 -- common/autotest_common.sh@817 -- # '[' -z 1625020 ']' 00:28:27.183 03:29:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:27.183 03:29:01 -- 
common/autotest_common.sh@822 -- # local max_retries=100 00:28:27.183 03:29:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:27.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:27.183 03:29:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:27.183 03:29:01 -- common/autotest_common.sh@10 -- # set +x 00:28:27.443 [2024-04-25 03:29:01.690098] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:28:27.443 [2024-04-25 03:29:01.690178] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1625020 ] 00:28:27.443 EAL: No free 2048 kB hugepages reported on node 1 00:28:27.443 [2024-04-25 03:29:01.753679] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:27.443 [2024-04-25 03:29:01.868775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:27.443 [2024-04-25 03:29:01.868780] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:28.377 03:29:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:28.378 03:29:02 -- common/autotest_common.sh@850 -- # return 0 00:28:28.378 03:29:02 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:28:28.378 03:29:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:28:28.378 03:29:02 -- common/autotest_common.sh@10 -- # set +x 00:28:28.378 03:29:02 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:28:28.378 03:29:02 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:28:28.378 03:29:02 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:28:28.378 03:29:02 -- common/autotest_common.sh@710 -- # xtrace_disable 00:28:28.378 03:29:02 -- common/autotest_common.sh@10 -- # set +x 00:28:28.378 03:29:02 -- spdkcli/nvmf.sh@65 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:28:28.378 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:28:28.378 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:28:28.378 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:28:28.378 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:28:28.378 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:28:28.378 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:28:28.378 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:28:28.378 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:28:28.378 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:28:28.378 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:28:28.378 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:28:28.378 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:28:28.378 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:28:28.378 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:28:28.378 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:28:28.378 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' 
True 00:28:28.378 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:28:28.378 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:28:28.378 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:28:28.378 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:28:28.378 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:28:28.378 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:28:28.378 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:28:28.378 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:28:28.378 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:28:28.378 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:28:28.378 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:28:28.378 ' 00:28:28.635 [2024-04-25 03:29:03.032441] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:28:31.163 [2024-04-25 03:29:05.186318] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:32.095 [2024-04-25 03:29:06.430843] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:28:34.624 [2024-04-25 03:29:08.697840] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 
00:28:36.525 [2024-04-25 03:29:10.640020] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:28:37.898 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:28:37.898 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:28:37.898 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:28:37.898 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:28:37.898 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:28:37.898 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:28:37.898 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:28:37.898 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:28:37.898 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:28:37.898 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:28:37.898 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:28:37.898 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:28:37.898 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:28:37.898 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:28:37.898 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:28:37.898 
Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:28:37.898 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:28:37.898 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:28:37.898 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:28:37.898 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:28:37.898 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:28:37.898 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:28:37.898 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:28:37.898 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:28:37.898 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:28:37.898 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:28:37.898 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:28:37.898 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:28:37.898 03:29:12 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:28:37.898 03:29:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:28:37.898 03:29:12 -- common/autotest_common.sh@10 -- # 
set +x 00:28:37.898 03:29:12 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:28:37.898 03:29:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:28:37.898 03:29:12 -- common/autotest_common.sh@10 -- # set +x 00:28:37.898 03:29:12 -- spdkcli/nvmf.sh@69 -- # check_match 00:28:37.898 03:29:12 -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:28:38.463 03:29:12 -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:28:38.463 03:29:12 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:28:38.463 03:29:12 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:28:38.463 03:29:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:28:38.463 03:29:12 -- common/autotest_common.sh@10 -- # set +x 00:28:38.463 03:29:12 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:28:38.463 03:29:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:28:38.463 03:29:12 -- common/autotest_common.sh@10 -- # set +x 00:28:38.463 03:29:12 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:28:38.463 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:28:38.463 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:28:38.463 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:28:38.463 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:28:38.463 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:28:38.463 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:28:38.463 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:28:38.463 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:28:38.463 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:28:38.463 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:28:38.463 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:28:38.463 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:28:38.463 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:28:38.463 ' 00:28:43.737 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:28:43.737 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:28:43.737 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:28:43.737 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:28:43.737 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:28:43.737 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:28:43.737 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:28:43.737 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:28:43.737 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:28:43.737 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:28:43.737 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 
00:28:43.737 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:28:43.737 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:28:43.737 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:28:43.737 03:29:18 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:28:43.737 03:29:18 -- common/autotest_common.sh@716 -- # xtrace_disable 00:28:43.737 03:29:18 -- common/autotest_common.sh@10 -- # set +x 00:28:43.737 03:29:18 -- spdkcli/nvmf.sh@90 -- # killprocess 1625020 00:28:43.737 03:29:18 -- common/autotest_common.sh@936 -- # '[' -z 1625020 ']' 00:28:43.737 03:29:18 -- common/autotest_common.sh@940 -- # kill -0 1625020 00:28:43.737 03:29:18 -- common/autotest_common.sh@941 -- # uname 00:28:43.737 03:29:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:43.737 03:29:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1625020 00:28:43.737 03:29:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:28:43.737 03:29:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:28:43.737 03:29:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1625020' 00:28:43.737 killing process with pid 1625020 00:28:43.737 03:29:18 -- common/autotest_common.sh@955 -- # kill 1625020 00:28:43.737 [2024-04-25 03:29:18.066636] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:28:43.737 03:29:18 -- common/autotest_common.sh@960 -- # wait 1625020 00:28:43.996 03:29:18 -- spdkcli/nvmf.sh@1 -- # cleanup 00:28:43.996 03:29:18 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:28:43.996 03:29:18 -- spdkcli/common.sh@13 -- # '[' -n 1625020 ']' 00:28:43.996 03:29:18 -- spdkcli/common.sh@14 -- # killprocess 1625020 00:28:43.996 03:29:18 -- common/autotest_common.sh@936 -- # '[' -z 1625020 ']' 00:28:43.996 03:29:18 -- 
common/autotest_common.sh@940 -- # kill -0 1625020 00:28:43.996 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (1625020) - No such process 00:28:43.996 03:29:18 -- common/autotest_common.sh@963 -- # echo 'Process with pid 1625020 is not found' 00:28:43.996 Process with pid 1625020 is not found 00:28:43.996 03:29:18 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:28:43.996 03:29:18 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:28:43.996 03:29:18 -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:28:43.996 00:28:43.996 real 0m16.770s 00:28:43.996 user 0m35.504s 00:28:43.996 sys 0m0.837s 00:28:43.996 03:29:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:43.996 03:29:18 -- common/autotest_common.sh@10 -- # set +x 00:28:43.996 ************************************ 00:28:43.996 END TEST spdkcli_nvmf_tcp 00:28:43.996 ************************************ 00:28:43.996 03:29:18 -- spdk/autotest.sh@288 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:28:43.996 03:29:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:28:43.996 03:29:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:43.996 03:29:18 -- common/autotest_common.sh@10 -- # set +x 00:28:43.996 ************************************ 00:28:43.996 START TEST nvmf_identify_passthru 00:28:43.996 ************************************ 00:28:43.996 03:29:18 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:28:44.255 * Looking for test storage... 
00:28:44.255 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:44.255 03:29:18 -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:44.255 03:29:18 -- nvmf/common.sh@7 -- # uname -s 00:28:44.255 03:29:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:44.255 03:29:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:44.255 03:29:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:44.255 03:29:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:44.255 03:29:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:44.255 03:29:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:44.255 03:29:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:44.255 03:29:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:44.255 03:29:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:44.255 03:29:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:44.255 03:29:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:44.255 03:29:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:44.255 03:29:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:44.255 03:29:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:44.255 03:29:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:44.255 03:29:18 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:44.255 03:29:18 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:44.255 03:29:18 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:44.255 03:29:18 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:44.255 03:29:18 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:44.256 03:29:18 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.256 03:29:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.256 03:29:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.256 03:29:18 -- paths/export.sh@5 -- # export PATH 00:28:44.256 03:29:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.256 03:29:18 -- nvmf/common.sh@47 -- # : 0 00:28:44.256 03:29:18 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:44.256 03:29:18 -- nvmf/common.sh@49 -- # 
build_nvmf_app_args 00:28:44.256 03:29:18 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:44.256 03:29:18 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:44.256 03:29:18 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:44.256 03:29:18 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:44.256 03:29:18 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:44.256 03:29:18 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:44.256 03:29:18 -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:44.256 03:29:18 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:44.256 03:29:18 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:44.256 03:29:18 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:44.256 03:29:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.256 03:29:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.256 03:29:18 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.256 03:29:18 -- paths/export.sh@5 -- # export PATH 00:28:44.256 03:29:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.256 03:29:18 -- target/identify_passthru.sh@12 -- # nvmftestinit 00:28:44.256 03:29:18 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:28:44.256 03:29:18 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:44.256 03:29:18 -- nvmf/common.sh@437 -- # prepare_net_devs 00:28:44.256 03:29:18 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:28:44.256 03:29:18 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:28:44.256 03:29:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:44.256 03:29:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:44.256 03:29:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:44.256 03:29:18 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:28:44.256 03:29:18 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:28:44.256 03:29:18 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:44.256 03:29:18 -- 
common/autotest_common.sh@10 -- # set +x 00:28:46.159 03:29:20 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:46.159 03:29:20 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:46.159 03:29:20 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:46.159 03:29:20 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:46.159 03:29:20 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:46.159 03:29:20 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:46.159 03:29:20 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:46.159 03:29:20 -- nvmf/common.sh@295 -- # net_devs=() 00:28:46.159 03:29:20 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:46.159 03:29:20 -- nvmf/common.sh@296 -- # e810=() 00:28:46.159 03:29:20 -- nvmf/common.sh@296 -- # local -ga e810 00:28:46.159 03:29:20 -- nvmf/common.sh@297 -- # x722=() 00:28:46.159 03:29:20 -- nvmf/common.sh@297 -- # local -ga x722 00:28:46.159 03:29:20 -- nvmf/common.sh@298 -- # mlx=() 00:28:46.159 03:29:20 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:46.159 03:29:20 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:46.159 03:29:20 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:46.159 03:29:20 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:46.159 03:29:20 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:46.159 03:29:20 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:46.159 03:29:20 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:46.159 03:29:20 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:46.159 03:29:20 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:46.159 03:29:20 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:46.159 03:29:20 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:46.159 03:29:20 -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:46.159 03:29:20 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:46.159 03:29:20 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:46.159 03:29:20 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:46.159 03:29:20 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:46.159 03:29:20 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:46.159 03:29:20 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:46.159 03:29:20 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:46.159 03:29:20 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:46.159 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:46.159 03:29:20 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:46.159 03:29:20 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:46.159 03:29:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:46.159 03:29:20 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:46.159 03:29:20 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:46.159 03:29:20 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:46.159 03:29:20 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:46.159 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:46.159 03:29:20 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:46.159 03:29:20 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:46.159 03:29:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:46.159 03:29:20 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:46.159 03:29:20 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:46.159 03:29:20 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:46.159 03:29:20 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:46.159 03:29:20 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:46.159 03:29:20 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:46.159 03:29:20 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:28:46.159 03:29:20 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:28:46.159 03:29:20 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:46.159 03:29:20 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:46.159 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:46.159 03:29:20 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:28:46.159 03:29:20 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:46.159 03:29:20 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:46.159 03:29:20 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:28:46.159 03:29:20 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:46.159 03:29:20 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:46.159 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:46.159 03:29:20 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:28:46.160 03:29:20 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:28:46.160 03:29:20 -- nvmf/common.sh@403 -- # is_hw=yes 00:28:46.160 03:29:20 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:28:46.160 03:29:20 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:28:46.160 03:29:20 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:28:46.160 03:29:20 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:46.160 03:29:20 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:46.160 03:29:20 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:46.160 03:29:20 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:46.160 03:29:20 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:46.160 03:29:20 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:46.160 03:29:20 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:46.160 03:29:20 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:46.160 03:29:20 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:28:46.160 03:29:20 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:46.160 03:29:20 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:46.160 03:29:20 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:46.160 03:29:20 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:46.160 03:29:20 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:46.160 03:29:20 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:46.160 03:29:20 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:46.160 03:29:20 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:46.160 03:29:20 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:46.160 03:29:20 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:46.160 03:29:20 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:46.160 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:46.160 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:28:46.160 00:28:46.160 --- 10.0.0.2 ping statistics --- 00:28:46.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:46.160 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:28:46.160 03:29:20 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:46.160 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:46.160 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:28:46.160 00:28:46.160 --- 10.0.0.1 ping statistics --- 00:28:46.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:46.160 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:28:46.160 03:29:20 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:46.160 03:29:20 -- nvmf/common.sh@411 -- # return 0 00:28:46.160 03:29:20 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:28:46.160 03:29:20 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:46.160 03:29:20 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:28:46.160 03:29:20 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:28:46.160 03:29:20 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:46.160 03:29:20 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:28:46.160 03:29:20 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:28:46.160 03:29:20 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:28:46.160 03:29:20 -- common/autotest_common.sh@710 -- # xtrace_disable 00:28:46.160 03:29:20 -- common/autotest_common.sh@10 -- # set +x 00:28:46.160 03:29:20 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:28:46.160 03:29:20 -- common/autotest_common.sh@1510 -- # bdfs=() 00:28:46.160 03:29:20 -- common/autotest_common.sh@1510 -- # local bdfs 00:28:46.160 03:29:20 -- common/autotest_common.sh@1511 -- # bdfs=($(get_nvme_bdfs)) 00:28:46.160 03:29:20 -- common/autotest_common.sh@1511 -- # get_nvme_bdfs 00:28:46.160 03:29:20 -- common/autotest_common.sh@1499 -- # bdfs=() 00:28:46.160 03:29:20 -- common/autotest_common.sh@1499 -- # local bdfs 00:28:46.160 03:29:20 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:28:46.160 03:29:20 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:46.160 03:29:20 -- 
common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:28:46.160 03:29:20 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:28:46.160 03:29:20 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:88:00.0 00:28:46.160 03:29:20 -- common/autotest_common.sh@1513 -- # echo 0000:88:00.0 00:28:46.160 03:29:20 -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:28:46.160 03:29:20 -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:28:46.160 03:29:20 -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:28:46.160 03:29:20 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:28:46.160 03:29:20 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:28:46.160 EAL: No free 2048 kB hugepages reported on node 1 00:28:50.366 03:29:24 -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:28:50.366 03:29:24 -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:28:50.366 03:29:24 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:28:50.366 03:29:24 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:28:50.366 EAL: No free 2048 kB hugepages reported on node 1 00:28:54.548 03:29:28 -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:28:54.548 03:29:28 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:28:54.548 03:29:28 -- common/autotest_common.sh@716 -- # xtrace_disable 00:28:54.548 03:29:28 -- common/autotest_common.sh@10 -- # set +x 00:28:54.548 03:29:29 -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:28:54.548 03:29:29 -- common/autotest_common.sh@710 -- # xtrace_disable 00:28:54.548 03:29:29 -- common/autotest_common.sh@10 -- # set +x 00:28:54.548 03:29:29 -- target/identify_passthru.sh@31 -- # 
nvmfpid=1629654 00:28:54.548 03:29:29 -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:54.548 03:29:29 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:54.548 03:29:29 -- target/identify_passthru.sh@35 -- # waitforlisten 1629654 00:28:54.548 03:29:29 -- common/autotest_common.sh@817 -- # '[' -z 1629654 ']' 00:28:54.548 03:29:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:54.548 03:29:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:54.548 03:29:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:54.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:54.548 03:29:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:54.548 03:29:29 -- common/autotest_common.sh@10 -- # set +x 00:28:54.806 [2024-04-25 03:29:29.062413] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:28:54.806 [2024-04-25 03:29:29.062488] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:54.806 EAL: No free 2048 kB hugepages reported on node 1 00:28:54.806 [2024-04-25 03:29:29.127257] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:54.806 [2024-04-25 03:29:29.237511] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:54.806 [2024-04-25 03:29:29.237565] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:54.806 [2024-04-25 03:29:29.237579] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:54.806 [2024-04-25 03:29:29.237591] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:54.806 [2024-04-25 03:29:29.237601] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:54.806 [2024-04-25 03:29:29.237694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:54.806 [2024-04-25 03:29:29.237750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:54.806 [2024-04-25 03:29:29.237817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:54.806 [2024-04-25 03:29:29.237820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:54.806 03:29:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:54.806 03:29:29 -- common/autotest_common.sh@850 -- # return 0 00:28:54.806 03:29:29 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:28:54.806 03:29:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:54.806 03:29:29 -- common/autotest_common.sh@10 -- # set +x 00:28:54.806 INFO: Log level set to 20 00:28:54.806 INFO: Requests: 00:28:54.806 { 00:28:54.806 "jsonrpc": "2.0", 00:28:54.806 "method": "nvmf_set_config", 00:28:54.806 "id": 1, 00:28:54.806 "params": { 00:28:54.806 "admin_cmd_passthru": { 00:28:54.806 "identify_ctrlr": true 00:28:54.806 } 00:28:54.806 } 00:28:54.806 } 00:28:54.806 00:28:54.806 INFO: response: 00:28:54.806 { 00:28:54.806 "jsonrpc": "2.0", 00:28:54.806 "id": 1, 00:28:54.806 "result": true 00:28:54.806 } 00:28:54.806 00:28:54.806 03:29:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:54.806 03:29:29 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:28:54.806 03:29:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:54.806 03:29:29 -- 
common/autotest_common.sh@10 -- # set +x 00:28:54.806 INFO: Setting log level to 20 00:28:54.806 INFO: Setting log level to 20 00:28:54.806 INFO: Log level set to 20 00:28:54.806 INFO: Log level set to 20 00:28:54.807 INFO: Requests: 00:28:54.807 { 00:28:54.807 "jsonrpc": "2.0", 00:28:54.807 "method": "framework_start_init", 00:28:54.807 "id": 1 00:28:54.807 } 00:28:54.807 00:28:54.807 INFO: Requests: 00:28:54.807 { 00:28:54.807 "jsonrpc": "2.0", 00:28:54.807 "method": "framework_start_init", 00:28:54.807 "id": 1 00:28:54.807 } 00:28:54.807 00:28:55.065 [2024-04-25 03:29:29.375868] nvmf_tgt.c: 453:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:28:55.065 INFO: response: 00:28:55.065 { 00:28:55.065 "jsonrpc": "2.0", 00:28:55.065 "id": 1, 00:28:55.065 "result": true 00:28:55.065 } 00:28:55.065 00:28:55.065 INFO: response: 00:28:55.065 { 00:28:55.065 "jsonrpc": "2.0", 00:28:55.065 "id": 1, 00:28:55.065 "result": true 00:28:55.065 } 00:28:55.065 00:28:55.065 03:29:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:55.065 03:29:29 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:55.065 03:29:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:55.065 03:29:29 -- common/autotest_common.sh@10 -- # set +x 00:28:55.065 INFO: Setting log level to 40 00:28:55.065 INFO: Setting log level to 40 00:28:55.065 INFO: Setting log level to 40 00:28:55.065 [2024-04-25 03:29:29.385791] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:55.065 03:29:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:55.065 03:29:29 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:28:55.065 03:29:29 -- common/autotest_common.sh@716 -- # xtrace_disable 00:28:55.065 03:29:29 -- common/autotest_common.sh@10 -- # set +x 00:28:55.065 03:29:29 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:28:55.065 03:29:29 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:28:55.065 03:29:29 -- common/autotest_common.sh@10 -- # set +x 00:28:58.345 Nvme0n1 00:28:58.345 03:29:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:58.345 03:29:32 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:28:58.345 03:29:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:58.345 03:29:32 -- common/autotest_common.sh@10 -- # set +x 00:28:58.345 03:29:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:58.345 03:29:32 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:28:58.345 03:29:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:58.345 03:29:32 -- common/autotest_common.sh@10 -- # set +x 00:28:58.345 03:29:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:58.345 03:29:32 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:58.345 03:29:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:58.345 03:29:32 -- common/autotest_common.sh@10 -- # set +x 00:28:58.345 [2024-04-25 03:29:32.271229] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:58.345 03:29:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:58.345 03:29:32 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:28:58.345 03:29:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:58.345 03:29:32 -- common/autotest_common.sh@10 -- # set +x 00:28:58.345 [2024-04-25 03:29:32.279003] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:28:58.345 [ 00:28:58.345 { 00:28:58.345 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:58.345 "subtype": "Discovery", 00:28:58.345 
"listen_addresses": [], 00:28:58.345 "allow_any_host": true, 00:28:58.345 "hosts": [] 00:28:58.345 }, 00:28:58.345 { 00:28:58.345 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:58.345 "subtype": "NVMe", 00:28:58.345 "listen_addresses": [ 00:28:58.345 { 00:28:58.345 "transport": "TCP", 00:28:58.345 "trtype": "TCP", 00:28:58.345 "adrfam": "IPv4", 00:28:58.345 "traddr": "10.0.0.2", 00:28:58.345 "trsvcid": "4420" 00:28:58.345 } 00:28:58.345 ], 00:28:58.345 "allow_any_host": true, 00:28:58.345 "hosts": [], 00:28:58.345 "serial_number": "SPDK00000000000001", 00:28:58.345 "model_number": "SPDK bdev Controller", 00:28:58.345 "max_namespaces": 1, 00:28:58.345 "min_cntlid": 1, 00:28:58.345 "max_cntlid": 65519, 00:28:58.345 "namespaces": [ 00:28:58.345 { 00:28:58.345 "nsid": 1, 00:28:58.345 "bdev_name": "Nvme0n1", 00:28:58.345 "name": "Nvme0n1", 00:28:58.345 "nguid": "EB22DA4738624A1892F1F05C8ED1742C", 00:28:58.345 "uuid": "eb22da47-3862-4a18-92f1-f05c8ed1742c" 00:28:58.345 } 00:28:58.345 ] 00:28:58.345 } 00:28:58.345 ] 00:28:58.345 03:29:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:58.345 03:29:32 -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:58.345 03:29:32 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:28:58.345 03:29:32 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:28:58.345 EAL: No free 2048 kB hugepages reported on node 1 00:28:58.346 03:29:32 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:28:58.346 03:29:32 -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:58.346 03:29:32 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:28:58.346 03:29:32 -- 
target/identify_passthru.sh@61 -- # awk '{print $3}' 00:28:58.346 EAL: No free 2048 kB hugepages reported on node 1 00:28:58.346 03:29:32 -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:28:58.346 03:29:32 -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:28:58.346 03:29:32 -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:28:58.346 03:29:32 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:58.346 03:29:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:58.346 03:29:32 -- common/autotest_common.sh@10 -- # set +x 00:28:58.346 03:29:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:58.346 03:29:32 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:28:58.346 03:29:32 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:28:58.346 03:29:32 -- nvmf/common.sh@477 -- # nvmfcleanup 00:28:58.346 03:29:32 -- nvmf/common.sh@117 -- # sync 00:28:58.346 03:29:32 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:58.346 03:29:32 -- nvmf/common.sh@120 -- # set +e 00:28:58.346 03:29:32 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:58.346 03:29:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:58.346 rmmod nvme_tcp 00:28:58.346 rmmod nvme_fabrics 00:28:58.346 rmmod nvme_keyring 00:28:58.346 03:29:32 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:58.346 03:29:32 -- nvmf/common.sh@124 -- # set -e 00:28:58.346 03:29:32 -- nvmf/common.sh@125 -- # return 0 00:28:58.346 03:29:32 -- nvmf/common.sh@478 -- # '[' -n 1629654 ']' 00:28:58.346 03:29:32 -- nvmf/common.sh@479 -- # killprocess 1629654 00:28:58.346 03:29:32 -- common/autotest_common.sh@936 -- # '[' -z 1629654 ']' 00:28:58.346 03:29:32 -- common/autotest_common.sh@940 -- # kill -0 1629654 00:28:58.346 03:29:32 -- common/autotest_common.sh@941 -- # uname 00:28:58.346 03:29:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 
00:28:58.346 03:29:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1629654 00:28:58.346 03:29:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:28:58.346 03:29:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:28:58.346 03:29:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1629654' 00:28:58.346 killing process with pid 1629654 00:28:58.346 03:29:32 -- common/autotest_common.sh@955 -- # kill 1629654 00:28:58.346 [2024-04-25 03:29:32.755907] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:28:58.346 03:29:32 -- common/autotest_common.sh@960 -- # wait 1629654 00:29:00.246 03:29:34 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:29:00.246 03:29:34 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:29:00.246 03:29:34 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:29:00.246 03:29:34 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:00.246 03:29:34 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:00.246 03:29:34 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:00.246 03:29:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:00.246 03:29:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:02.149 03:29:36 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:02.149 00:29:02.149 real 0m17.919s 00:29:02.149 user 0m26.653s 00:29:02.149 sys 0m2.268s 00:29:02.149 03:29:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:02.149 03:29:36 -- common/autotest_common.sh@10 -- # set +x 00:29:02.149 ************************************ 00:29:02.149 END TEST nvmf_identify_passthru 00:29:02.149 ************************************ 00:29:02.149 03:29:36 -- spdk/autotest.sh@290 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 
00:29:02.149 03:29:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:02.149 03:29:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:02.149 03:29:36 -- common/autotest_common.sh@10 -- # set +x 00:29:02.149 ************************************ 00:29:02.149 START TEST nvmf_dif 00:29:02.149 ************************************ 00:29:02.149 03:29:36 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:29:02.149 * Looking for test storage... 00:29:02.149 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:02.149 03:29:36 -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:02.149 03:29:36 -- nvmf/common.sh@7 -- # uname -s 00:29:02.149 03:29:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:02.149 03:29:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:02.149 03:29:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:02.149 03:29:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:02.149 03:29:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:02.149 03:29:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:02.149 03:29:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:02.149 03:29:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:02.149 03:29:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:02.149 03:29:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:02.149 03:29:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:02.149 03:29:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:02.149 03:29:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:02.149 03:29:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:02.149 03:29:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:29:02.149 03:29:36 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:02.149 03:29:36 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:02.149 03:29:36 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:02.149 03:29:36 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:02.149 03:29:36 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:02.149 03:29:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.149 03:29:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.149 03:29:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.149 03:29:36 -- paths/export.sh@5 -- # export PATH 00:29:02.149 03:29:36 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.149 03:29:36 -- nvmf/common.sh@47 -- # : 0 00:29:02.149 03:29:36 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:02.149 03:29:36 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:02.149 03:29:36 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:02.149 03:29:36 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:02.149 03:29:36 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:02.149 03:29:36 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:02.149 03:29:36 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:02.149 03:29:36 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:02.149 03:29:36 -- target/dif.sh@15 -- # NULL_META=16 00:29:02.149 03:29:36 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:29:02.149 03:29:36 -- target/dif.sh@15 -- # NULL_SIZE=64 00:29:02.149 03:29:36 -- target/dif.sh@15 -- # NULL_DIF=1 00:29:02.149 03:29:36 -- target/dif.sh@135 -- # nvmftestinit 00:29:02.149 03:29:36 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:29:02.149 03:29:36 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:02.149 03:29:36 -- nvmf/common.sh@437 -- # prepare_net_devs 00:29:02.149 03:29:36 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:29:02.149 03:29:36 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:29:02.149 03:29:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:02.149 03:29:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:02.149 03:29:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:02.149 03:29:36 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 
00:29:02.149 03:29:36 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:29:02.149 03:29:36 -- nvmf/common.sh@285 -- # xtrace_disable 00:29:02.149 03:29:36 -- common/autotest_common.sh@10 -- # set +x 00:29:04.052 03:29:38 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:04.052 03:29:38 -- nvmf/common.sh@291 -- # pci_devs=() 00:29:04.052 03:29:38 -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:04.052 03:29:38 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:04.052 03:29:38 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:04.052 03:29:38 -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:04.052 03:29:38 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:04.052 03:29:38 -- nvmf/common.sh@295 -- # net_devs=() 00:29:04.052 03:29:38 -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:04.052 03:29:38 -- nvmf/common.sh@296 -- # e810=() 00:29:04.052 03:29:38 -- nvmf/common.sh@296 -- # local -ga e810 00:29:04.053 03:29:38 -- nvmf/common.sh@297 -- # x722=() 00:29:04.053 03:29:38 -- nvmf/common.sh@297 -- # local -ga x722 00:29:04.053 03:29:38 -- nvmf/common.sh@298 -- # mlx=() 00:29:04.053 03:29:38 -- nvmf/common.sh@298 -- # local -ga mlx 00:29:04.053 03:29:38 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:04.053 03:29:38 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:04.053 03:29:38 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:04.053 03:29:38 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:04.053 03:29:38 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:04.053 03:29:38 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:04.053 03:29:38 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:04.053 03:29:38 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:04.053 03:29:38 -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:04.053 03:29:38 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:04.053 03:29:38 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:04.053 03:29:38 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:04.053 03:29:38 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:04.053 03:29:38 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:04.053 03:29:38 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:04.053 03:29:38 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:04.053 03:29:38 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:04.053 03:29:38 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:04.053 03:29:38 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:04.053 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:04.053 03:29:38 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:04.053 03:29:38 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:04.053 03:29:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:04.053 03:29:38 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:04.053 03:29:38 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:04.053 03:29:38 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:04.053 03:29:38 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:04.053 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:04.053 03:29:38 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:04.053 03:29:38 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:04.053 03:29:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:04.053 03:29:38 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:04.053 03:29:38 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:04.053 03:29:38 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:04.053 03:29:38 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:04.053 03:29:38 -- nvmf/common.sh@372 -- # [[ tcp == 
rdma ]] 00:29:04.053 03:29:38 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:04.053 03:29:38 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:04.053 03:29:38 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:29:04.053 03:29:38 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:04.053 03:29:38 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:04.053 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:04.053 03:29:38 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:29:04.053 03:29:38 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:04.053 03:29:38 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:04.053 03:29:38 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:29:04.053 03:29:38 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:04.053 03:29:38 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:04.053 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:04.053 03:29:38 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:29:04.053 03:29:38 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:29:04.053 03:29:38 -- nvmf/common.sh@403 -- # is_hw=yes 00:29:04.053 03:29:38 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:29:04.053 03:29:38 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:29:04.053 03:29:38 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:29:04.053 03:29:38 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:04.053 03:29:38 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:04.053 03:29:38 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:04.053 03:29:38 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:04.053 03:29:38 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:04.053 03:29:38 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:04.053 03:29:38 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 
00:29:04.053 03:29:38 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:04.053 03:29:38 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:04.053 03:29:38 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:04.053 03:29:38 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:04.053 03:29:38 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:04.053 03:29:38 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:04.053 03:29:38 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:04.053 03:29:38 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:04.053 03:29:38 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:04.053 03:29:38 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:04.312 03:29:38 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:04.312 03:29:38 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:04.312 03:29:38 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:04.312 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:04.312 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.119 ms 00:29:04.312 00:29:04.312 --- 10.0.0.2 ping statistics --- 00:29:04.312 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:04.312 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:29:04.312 03:29:38 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:04.312 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:04.312 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:29:04.312 00:29:04.312 --- 10.0.0.1 ping statistics --- 00:29:04.312 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:04.312 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:29:04.312 03:29:38 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:04.312 03:29:38 -- nvmf/common.sh@411 -- # return 0 00:29:04.312 03:29:38 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:29:04.312 03:29:38 -- nvmf/common.sh@440 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:05.246 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:29:05.247 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:29:05.247 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:29:05.247 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:29:05.247 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:29:05.247 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:29:05.247 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:29:05.247 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:29:05.247 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:29:05.247 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:29:05.247 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:29:05.247 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:29:05.247 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:29:05.247 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:29:05.247 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:29:05.247 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:29:05.247 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:29:05.505 03:29:39 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:05.505 03:29:39 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 
00:29:05.505 03:29:39 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:29:05.505 03:29:39 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:05.505 03:29:39 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:29:05.505 03:29:39 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:29:05.505 03:29:39 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:29:05.505 03:29:39 -- target/dif.sh@137 -- # nvmfappstart 00:29:05.505 03:29:39 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:29:05.505 03:29:39 -- common/autotest_common.sh@710 -- # xtrace_disable 00:29:05.505 03:29:39 -- common/autotest_common.sh@10 -- # set +x 00:29:05.505 03:29:39 -- nvmf/common.sh@470 -- # nvmfpid=1632810 00:29:05.505 03:29:39 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:29:05.505 03:29:39 -- nvmf/common.sh@471 -- # waitforlisten 1632810 00:29:05.505 03:29:39 -- common/autotest_common.sh@817 -- # '[' -z 1632810 ']' 00:29:05.505 03:29:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:05.505 03:29:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:05.505 03:29:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:05.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:05.505 03:29:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:05.505 03:29:39 -- common/autotest_common.sh@10 -- # set +x 00:29:05.505 [2024-04-25 03:29:39.976221] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 
00:29:05.505 [2024-04-25 03:29:39.976293] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:05.764 EAL: No free 2048 kB hugepages reported on node 1 00:29:05.764 [2024-04-25 03:29:40.042319] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:05.764 [2024-04-25 03:29:40.146875] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:05.764 [2024-04-25 03:29:40.146934] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:05.764 [2024-04-25 03:29:40.146958] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:05.764 [2024-04-25 03:29:40.146969] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:05.764 [2024-04-25 03:29:40.146979] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:05.764 [2024-04-25 03:29:40.147022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:05.764 03:29:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:05.764 03:29:40 -- common/autotest_common.sh@850 -- # return 0 00:29:05.764 03:29:40 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:29:05.764 03:29:40 -- common/autotest_common.sh@716 -- # xtrace_disable 00:29:05.764 03:29:40 -- common/autotest_common.sh@10 -- # set +x 00:29:06.024 03:29:40 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:06.024 03:29:40 -- target/dif.sh@139 -- # create_transport 00:29:06.024 03:29:40 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:29:06.024 03:29:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:06.024 03:29:40 -- common/autotest_common.sh@10 -- # set +x 00:29:06.024 [2024-04-25 03:29:40.285358] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:06.024 03:29:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:06.024 03:29:40 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:29:06.024 03:29:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:06.024 03:29:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:06.024 03:29:40 -- common/autotest_common.sh@10 -- # set +x 00:29:06.024 ************************************ 00:29:06.024 START TEST fio_dif_1_default 00:29:06.024 ************************************ 00:29:06.024 03:29:40 -- common/autotest_common.sh@1111 -- # fio_dif_1 00:29:06.024 03:29:40 -- target/dif.sh@86 -- # create_subsystems 0 00:29:06.024 03:29:40 -- target/dif.sh@28 -- # local sub 00:29:06.024 03:29:40 -- target/dif.sh@30 -- # for sub in "$@" 00:29:06.024 03:29:40 -- target/dif.sh@31 -- # create_subsystem 0 00:29:06.024 03:29:40 -- target/dif.sh@18 -- # local sub_id=0 00:29:06.024 03:29:40 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create 
bdev_null0 64 512 --md-size 16 --dif-type 1 00:29:06.024 03:29:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:06.024 03:29:40 -- common/autotest_common.sh@10 -- # set +x 00:29:06.024 bdev_null0 00:29:06.024 03:29:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:06.024 03:29:40 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:06.024 03:29:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:06.024 03:29:40 -- common/autotest_common.sh@10 -- # set +x 00:29:06.024 03:29:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:06.024 03:29:40 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:06.024 03:29:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:06.024 03:29:40 -- common/autotest_common.sh@10 -- # set +x 00:29:06.024 03:29:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:06.024 03:29:40 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:06.024 03:29:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:06.024 03:29:40 -- common/autotest_common.sh@10 -- # set +x 00:29:06.024 [2024-04-25 03:29:40.405827] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:06.024 03:29:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:06.024 03:29:40 -- target/dif.sh@87 -- # fio /dev/fd/62 00:29:06.024 03:29:40 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:29:06.024 03:29:40 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:29:06.024 03:29:40 -- nvmf/common.sh@521 -- # config=() 00:29:06.024 03:29:40 -- nvmf/common.sh@521 -- # local subsystem config 00:29:06.024 03:29:40 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:29:06.024 03:29:40 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:06.024 
03:29:40 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:29:06.024 { 00:29:06.024 "params": { 00:29:06.024 "name": "Nvme$subsystem", 00:29:06.024 "trtype": "$TEST_TRANSPORT", 00:29:06.024 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:06.024 "adrfam": "ipv4", 00:29:06.024 "trsvcid": "$NVMF_PORT", 00:29:06.024 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:06.024 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:06.024 "hdgst": ${hdgst:-false}, 00:29:06.024 "ddgst": ${ddgst:-false} 00:29:06.024 }, 00:29:06.024 "method": "bdev_nvme_attach_controller" 00:29:06.024 } 00:29:06.024 EOF 00:29:06.024 )") 00:29:06.024 03:29:40 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:06.024 03:29:40 -- target/dif.sh@82 -- # gen_fio_conf 00:29:06.024 03:29:40 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:29:06.024 03:29:40 -- target/dif.sh@54 -- # local file 00:29:06.024 03:29:40 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:06.024 03:29:40 -- target/dif.sh@56 -- # cat 00:29:06.024 03:29:40 -- common/autotest_common.sh@1325 -- # local sanitizers 00:29:06.024 03:29:40 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:06.024 03:29:40 -- common/autotest_common.sh@1327 -- # shift 00:29:06.024 03:29:40 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:29:06.024 03:29:40 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:29:06.024 03:29:40 -- nvmf/common.sh@543 -- # cat 00:29:06.024 03:29:40 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:06.024 03:29:40 -- target/dif.sh@72 -- # (( file = 1 )) 00:29:06.024 03:29:40 -- target/dif.sh@72 -- # (( file <= files )) 00:29:06.024 03:29:40 -- 
common/autotest_common.sh@1331 -- # grep libasan 00:29:06.024 03:29:40 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:29:06.024 03:29:40 -- nvmf/common.sh@545 -- # jq . 00:29:06.024 03:29:40 -- nvmf/common.sh@546 -- # IFS=, 00:29:06.024 03:29:40 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:29:06.024 "params": { 00:29:06.024 "name": "Nvme0", 00:29:06.024 "trtype": "tcp", 00:29:06.024 "traddr": "10.0.0.2", 00:29:06.024 "adrfam": "ipv4", 00:29:06.024 "trsvcid": "4420", 00:29:06.024 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:06.024 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:06.024 "hdgst": false, 00:29:06.024 "ddgst": false 00:29:06.024 }, 00:29:06.024 "method": "bdev_nvme_attach_controller" 00:29:06.024 }' 00:29:06.024 03:29:40 -- common/autotest_common.sh@1331 -- # asan_lib= 00:29:06.024 03:29:40 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:29:06.024 03:29:40 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:29:06.024 03:29:40 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:06.024 03:29:40 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:29:06.024 03:29:40 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:29:06.024 03:29:40 -- common/autotest_common.sh@1331 -- # asan_lib= 00:29:06.024 03:29:40 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:29:06.024 03:29:40 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:06.024 03:29:40 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:06.289 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:06.289 fio-3.35 00:29:06.289 Starting 1 thread 00:29:06.289 EAL: No free 2048 kB hugepages reported on node 1 00:29:18.486 00:29:18.486 filename0: (groupid=0, jobs=1): 
err= 0: pid=1633045: Thu Apr 25 03:29:51 2024 00:29:18.486 read: IOPS=184, BW=738KiB/s (756kB/s)(7392KiB/10018msec) 00:29:18.486 slat (nsec): min=6006, max=72041, avg=9454.38, stdev=5076.23 00:29:18.486 clat (usec): min=906, max=44217, avg=21652.44, stdev=20447.64 00:29:18.486 lat (usec): min=914, max=44248, avg=21661.89, stdev=20446.70 00:29:18.486 clat percentiles (usec): 00:29:18.486 | 1.00th=[ 947], 5.00th=[ 979], 10.00th=[ 1004], 20.00th=[ 1029], 00:29:18.486 | 30.00th=[ 1045], 40.00th=[ 1057], 50.00th=[41681], 60.00th=[41681], 00:29:18.486 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:29:18.486 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44303], 99.95th=[44303], 00:29:18.486 | 99.99th=[44303] 00:29:18.486 bw ( KiB/s): min= 640, max= 768, per=99.88%, avg=737.60, stdev=39.50, samples=20 00:29:18.486 iops : min= 160, max= 192, avg=184.40, stdev= 9.88, samples=20 00:29:18.486 lat (usec) : 1000=8.28% 00:29:18.486 lat (msec) : 2=41.29%, 50=50.43% 00:29:18.487 cpu : usr=89.78%, sys=9.93%, ctx=11, majf=0, minf=238 00:29:18.487 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:18.487 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:18.487 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:18.487 issued rwts: total=1848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:18.487 latency : target=0, window=0, percentile=100.00%, depth=4 00:29:18.487 00:29:18.487 Run status group 0 (all jobs): 00:29:18.487 READ: bw=738KiB/s (756kB/s), 738KiB/s-738KiB/s (756kB/s-756kB/s), io=7392KiB (7569kB), run=10018-10018msec 00:29:18.487 03:29:51 -- target/dif.sh@88 -- # destroy_subsystems 0 00:29:18.487 03:29:51 -- target/dif.sh@43 -- # local sub 00:29:18.487 03:29:51 -- target/dif.sh@45 -- # for sub in "$@" 00:29:18.487 03:29:51 -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:18.487 03:29:51 -- target/dif.sh@36 -- # local sub_id=0 00:29:18.487 03:29:51 -- target/dif.sh@38 -- 
# rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:18.487 03:29:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:18.487 03:29:51 -- common/autotest_common.sh@10 -- # set +x 00:29:18.487 03:29:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:18.487 03:29:51 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:18.487 03:29:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:18.487 03:29:51 -- common/autotest_common.sh@10 -- # set +x 00:29:18.487 03:29:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:18.487 00:29:18.487 real 0m11.138s 00:29:18.487 user 0m10.121s 00:29:18.487 sys 0m1.270s 00:29:18.487 03:29:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:18.487 03:29:51 -- common/autotest_common.sh@10 -- # set +x 00:29:18.487 ************************************ 00:29:18.487 END TEST fio_dif_1_default 00:29:18.487 ************************************ 00:29:18.487 03:29:51 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:29:18.487 03:29:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:18.487 03:29:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:18.487 03:29:51 -- common/autotest_common.sh@10 -- # set +x 00:29:18.487 ************************************ 00:29:18.487 START TEST fio_dif_1_multi_subsystems 00:29:18.487 ************************************ 00:29:18.487 03:29:51 -- common/autotest_common.sh@1111 -- # fio_dif_1_multi_subsystems 00:29:18.487 03:29:51 -- target/dif.sh@92 -- # local files=1 00:29:18.487 03:29:51 -- target/dif.sh@94 -- # create_subsystems 0 1 00:29:18.487 03:29:51 -- target/dif.sh@28 -- # local sub 00:29:18.487 03:29:51 -- target/dif.sh@30 -- # for sub in "$@" 00:29:18.487 03:29:51 -- target/dif.sh@31 -- # create_subsystem 0 00:29:18.487 03:29:51 -- target/dif.sh@18 -- # local sub_id=0 00:29:18.487 03:29:51 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 
--dif-type 1 00:29:18.487 03:29:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:18.487 03:29:51 -- common/autotest_common.sh@10 -- # set +x 00:29:18.487 bdev_null0 00:29:18.487 03:29:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:18.487 03:29:51 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:18.487 03:29:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:18.487 03:29:51 -- common/autotest_common.sh@10 -- # set +x 00:29:18.487 03:29:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:18.487 03:29:51 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:18.487 03:29:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:18.487 03:29:51 -- common/autotest_common.sh@10 -- # set +x 00:29:18.487 03:29:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:18.487 03:29:51 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:18.487 03:29:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:18.487 03:29:51 -- common/autotest_common.sh@10 -- # set +x 00:29:18.487 [2024-04-25 03:29:51.667619] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:18.487 03:29:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:18.487 03:29:51 -- target/dif.sh@30 -- # for sub in "$@" 00:29:18.487 03:29:51 -- target/dif.sh@31 -- # create_subsystem 1 00:29:18.487 03:29:51 -- target/dif.sh@18 -- # local sub_id=1 00:29:18.487 03:29:51 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:29:18.487 03:29:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:18.487 03:29:51 -- common/autotest_common.sh@10 -- # set +x 00:29:18.487 bdev_null1 00:29:18.487 03:29:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:18.487 03:29:51 -- 
target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:29:18.487 03:29:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:18.487 03:29:51 -- common/autotest_common.sh@10 -- # set +x 00:29:18.487 03:29:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:18.487 03:29:51 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:29:18.487 03:29:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:18.487 03:29:51 -- common/autotest_common.sh@10 -- # set +x 00:29:18.487 03:29:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:18.487 03:29:51 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:18.487 03:29:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:18.487 03:29:51 -- common/autotest_common.sh@10 -- # set +x 00:29:18.487 03:29:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:18.487 03:29:51 -- target/dif.sh@95 -- # fio /dev/fd/62 00:29:18.487 03:29:51 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:29:18.487 03:29:51 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:18.487 03:29:51 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:29:18.487 03:29:51 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:18.487 03:29:51 -- nvmf/common.sh@521 -- # config=() 00:29:18.487 03:29:51 -- target/dif.sh@82 -- # gen_fio_conf 00:29:18.487 03:29:51 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:29:18.487 03:29:51 -- nvmf/common.sh@521 -- # local subsystem config 00:29:18.487 03:29:51 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:18.487 03:29:51 -- target/dif.sh@54 -- # local file 00:29:18.487 
03:29:51 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:29:18.487 03:29:51 -- common/autotest_common.sh@1325 -- # local sanitizers 00:29:18.487 03:29:51 -- target/dif.sh@56 -- # cat 00:29:18.487 03:29:51 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:29:18.487 { 00:29:18.487 "params": { 00:29:18.487 "name": "Nvme$subsystem", 00:29:18.487 "trtype": "$TEST_TRANSPORT", 00:29:18.487 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:18.487 "adrfam": "ipv4", 00:29:18.487 "trsvcid": "$NVMF_PORT", 00:29:18.487 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:18.487 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:18.487 "hdgst": ${hdgst:-false}, 00:29:18.487 "ddgst": ${ddgst:-false} 00:29:18.487 }, 00:29:18.487 "method": "bdev_nvme_attach_controller" 00:29:18.487 } 00:29:18.487 EOF 00:29:18.487 )") 00:29:18.487 03:29:51 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:18.487 03:29:51 -- common/autotest_common.sh@1327 -- # shift 00:29:18.487 03:29:51 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:29:18.487 03:29:51 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:29:18.487 03:29:51 -- nvmf/common.sh@543 -- # cat 00:29:18.487 03:29:51 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:18.487 03:29:51 -- target/dif.sh@72 -- # (( file = 1 )) 00:29:18.487 03:29:51 -- common/autotest_common.sh@1331 -- # grep libasan 00:29:18.487 03:29:51 -- target/dif.sh@72 -- # (( file <= files )) 00:29:18.487 03:29:51 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:29:18.487 03:29:51 -- target/dif.sh@73 -- # cat 00:29:18.487 03:29:51 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:29:18.487 03:29:51 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:29:18.487 { 00:29:18.487 "params": { 00:29:18.487 "name": "Nvme$subsystem", 00:29:18.487 "trtype": "$TEST_TRANSPORT", 
00:29:18.487 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:18.487 "adrfam": "ipv4", 00:29:18.487 "trsvcid": "$NVMF_PORT", 00:29:18.487 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:18.487 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:18.487 "hdgst": ${hdgst:-false}, 00:29:18.487 "ddgst": ${ddgst:-false} 00:29:18.487 }, 00:29:18.487 "method": "bdev_nvme_attach_controller" 00:29:18.487 } 00:29:18.487 EOF 00:29:18.487 )") 00:29:18.487 03:29:51 -- target/dif.sh@72 -- # (( file++ )) 00:29:18.487 03:29:51 -- target/dif.sh@72 -- # (( file <= files )) 00:29:18.487 03:29:51 -- nvmf/common.sh@543 -- # cat 00:29:18.487 03:29:51 -- nvmf/common.sh@545 -- # jq . 00:29:18.487 03:29:51 -- nvmf/common.sh@546 -- # IFS=, 00:29:18.487 03:29:51 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:29:18.487 "params": { 00:29:18.487 "name": "Nvme0", 00:29:18.487 "trtype": "tcp", 00:29:18.487 "traddr": "10.0.0.2", 00:29:18.487 "adrfam": "ipv4", 00:29:18.487 "trsvcid": "4420", 00:29:18.487 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:18.487 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:18.487 "hdgst": false, 00:29:18.487 "ddgst": false 00:29:18.487 }, 00:29:18.487 "method": "bdev_nvme_attach_controller" 00:29:18.487 },{ 00:29:18.487 "params": { 00:29:18.487 "name": "Nvme1", 00:29:18.487 "trtype": "tcp", 00:29:18.487 "traddr": "10.0.0.2", 00:29:18.487 "adrfam": "ipv4", 00:29:18.487 "trsvcid": "4420", 00:29:18.487 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:18.487 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:18.487 "hdgst": false, 00:29:18.487 "ddgst": false 00:29:18.487 }, 00:29:18.487 "method": "bdev_nvme_attach_controller" 00:29:18.487 }' 00:29:18.487 03:29:51 -- common/autotest_common.sh@1331 -- # asan_lib= 00:29:18.487 03:29:51 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:29:18.488 03:29:51 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:29:18.488 03:29:51 -- common/autotest_common.sh@1331 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:18.488 03:29:51 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:29:18.488 03:29:51 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:29:18.488 03:29:51 -- common/autotest_common.sh@1331 -- # asan_lib= 00:29:18.488 03:29:51 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:29:18.488 03:29:51 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:18.488 03:29:51 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:18.488 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:18.488 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:18.488 fio-3.35 00:29:18.488 Starting 2 threads 00:29:18.488 EAL: No free 2048 kB hugepages reported on node 1 00:29:28.457 00:29:28.457 filename0: (groupid=0, jobs=1): err= 0: pid=1634573: Thu Apr 25 03:30:02 2024 00:29:28.457 read: IOPS=140, BW=560KiB/s (573kB/s)(5616KiB/10028msec) 00:29:28.457 slat (nsec): min=4393, max=36536, avg=9500.97, stdev=4920.81 00:29:28.457 clat (usec): min=950, max=44152, avg=28537.22, stdev=19194.22 00:29:28.457 lat (usec): min=957, max=44163, avg=28546.72, stdev=19194.05 00:29:28.457 clat percentiles (usec): 00:29:28.457 | 1.00th=[ 979], 5.00th=[ 1012], 10.00th=[ 1029], 20.00th=[ 1057], 00:29:28.458 | 30.00th=[ 1106], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:29:28.458 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:29:28.458 | 99.00th=[42206], 99.50th=[42730], 99.90th=[44303], 99.95th=[44303], 00:29:28.458 | 99.99th=[44303] 00:29:28.458 bw ( KiB/s): min= 352, max= 768, per=43.07%, avg=560.00, stdev=179.97, samples=20 00:29:28.458 iops : min= 88, max= 192, avg=140.00, stdev=44.99, samples=20 
00:29:28.458 lat (usec) : 1000=3.06% 00:29:28.458 lat (msec) : 2=29.70%, 50=67.24% 00:29:28.458 cpu : usr=94.16%, sys=5.54%, ctx=17, majf=0, minf=145 00:29:28.458 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:28.458 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:28.458 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:28.458 issued rwts: total=1404,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:28.458 latency : target=0, window=0, percentile=100.00%, depth=4 00:29:28.458 filename1: (groupid=0, jobs=1): err= 0: pid=1634574: Thu Apr 25 03:30:02 2024 00:29:28.458 read: IOPS=185, BW=741KiB/s (759kB/s)(7424KiB/10019msec) 00:29:28.458 slat (nsec): min=6636, max=39153, avg=9241.60, stdev=4353.30 00:29:28.458 clat (usec): min=994, max=43270, avg=21562.76, stdev=20351.81 00:29:28.458 lat (usec): min=1000, max=43289, avg=21572.00, stdev=20350.76 00:29:28.458 clat percentiles (usec): 00:29:28.458 | 1.00th=[ 1020], 5.00th=[ 1057], 10.00th=[ 1090], 20.00th=[ 1106], 00:29:28.458 | 30.00th=[ 1139], 40.00th=[ 1172], 50.00th=[41681], 60.00th=[41681], 00:29:28.458 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:29:28.458 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:29:28.458 | 99.99th=[43254] 00:29:28.458 bw ( KiB/s): min= 672, max= 768, per=56.91%, avg=740.80, stdev=33.28, samples=20 00:29:28.458 iops : min= 168, max= 192, avg=185.20, stdev= 8.32, samples=20 00:29:28.458 lat (usec) : 1000=0.16% 00:29:28.458 lat (msec) : 2=49.62%, 50=50.22% 00:29:28.458 cpu : usr=94.13%, sys=5.57%, ctx=15, majf=0, minf=126 00:29:28.458 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:28.458 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:28.458 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:28.458 issued rwts: total=1856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:29:28.458 latency : target=0, window=0, percentile=100.00%, depth=4 00:29:28.458 00:29:28.458 Run status group 0 (all jobs): 00:29:28.458 READ: bw=1300KiB/s (1332kB/s), 560KiB/s-741KiB/s (573kB/s-759kB/s), io=12.7MiB (13.4MB), run=10019-10028msec 00:29:28.715 03:30:03 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:29:28.715 03:30:03 -- target/dif.sh@43 -- # local sub 00:29:28.715 03:30:03 -- target/dif.sh@45 -- # for sub in "$@" 00:29:28.715 03:30:03 -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:28.715 03:30:03 -- target/dif.sh@36 -- # local sub_id=0 00:29:28.715 03:30:03 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:28.715 03:30:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:28.715 03:30:03 -- common/autotest_common.sh@10 -- # set +x 00:29:28.715 03:30:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:28.715 03:30:03 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:28.715 03:30:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:28.715 03:30:03 -- common/autotest_common.sh@10 -- # set +x 00:29:28.715 03:30:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:28.715 03:30:03 -- target/dif.sh@45 -- # for sub in "$@" 00:29:28.715 03:30:03 -- target/dif.sh@46 -- # destroy_subsystem 1 00:29:28.715 03:30:03 -- target/dif.sh@36 -- # local sub_id=1 00:29:28.715 03:30:03 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:28.715 03:30:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:28.715 03:30:03 -- common/autotest_common.sh@10 -- # set +x 00:29:28.715 03:30:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:28.715 03:30:03 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:29:28.715 03:30:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:28.715 03:30:03 -- common/autotest_common.sh@10 -- # set +x 00:29:28.715 03:30:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
00:29:28.715 00:29:28.715 real 0m11.497s 00:29:28.715 user 0m20.314s 00:29:28.715 sys 0m1.415s 00:29:28.715 03:30:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:28.715 03:30:03 -- common/autotest_common.sh@10 -- # set +x 00:29:28.715 ************************************ 00:29:28.715 END TEST fio_dif_1_multi_subsystems 00:29:28.715 ************************************ 00:29:28.715 03:30:03 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:29:28.715 03:30:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:28.715 03:30:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:28.715 03:30:03 -- common/autotest_common.sh@10 -- # set +x 00:29:28.973 ************************************ 00:29:28.973 START TEST fio_dif_rand_params 00:29:28.973 ************************************ 00:29:28.973 03:30:03 -- common/autotest_common.sh@1111 -- # fio_dif_rand_params 00:29:28.973 03:30:03 -- target/dif.sh@100 -- # local NULL_DIF 00:29:28.973 03:30:03 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:29:28.973 03:30:03 -- target/dif.sh@103 -- # NULL_DIF=3 00:29:28.973 03:30:03 -- target/dif.sh@103 -- # bs=128k 00:29:28.973 03:30:03 -- target/dif.sh@103 -- # numjobs=3 00:29:28.973 03:30:03 -- target/dif.sh@103 -- # iodepth=3 00:29:28.973 03:30:03 -- target/dif.sh@103 -- # runtime=5 00:29:28.973 03:30:03 -- target/dif.sh@105 -- # create_subsystems 0 00:29:28.973 03:30:03 -- target/dif.sh@28 -- # local sub 00:29:28.973 03:30:03 -- target/dif.sh@30 -- # for sub in "$@" 00:29:28.973 03:30:03 -- target/dif.sh@31 -- # create_subsystem 0 00:29:28.973 03:30:03 -- target/dif.sh@18 -- # local sub_id=0 00:29:28.973 03:30:03 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:29:28.974 03:30:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:28.974 03:30:03 -- common/autotest_common.sh@10 -- # set +x 00:29:28.974 bdev_null0 00:29:28.974 03:30:03 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:28.974 03:30:03 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:28.974 03:30:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:28.974 03:30:03 -- common/autotest_common.sh@10 -- # set +x 00:29:28.974 03:30:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:28.974 03:30:03 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:28.974 03:30:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:28.974 03:30:03 -- common/autotest_common.sh@10 -- # set +x 00:29:28.974 03:30:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:28.974 03:30:03 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:28.974 03:30:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:28.974 03:30:03 -- common/autotest_common.sh@10 -- # set +x 00:29:28.974 [2024-04-25 03:30:03.293423] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:28.974 03:30:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:28.974 03:30:03 -- target/dif.sh@106 -- # fio /dev/fd/62 00:29:28.974 03:30:03 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:29:28.974 03:30:03 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:29:28.974 03:30:03 -- nvmf/common.sh@521 -- # config=() 00:29:28.974 03:30:03 -- nvmf/common.sh@521 -- # local subsystem config 00:29:28.974 03:30:03 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:29:28.974 03:30:03 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:29:28.974 { 00:29:28.974 "params": { 00:29:28.974 "name": "Nvme$subsystem", 00:29:28.974 "trtype": "$TEST_TRANSPORT", 00:29:28.974 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:28.974 "adrfam": "ipv4", 00:29:28.974 "trsvcid": "$NVMF_PORT", 00:29:28.974 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:29:28.974 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:28.974 "hdgst": ${hdgst:-false}, 00:29:28.974 "ddgst": ${ddgst:-false} 00:29:28.974 }, 00:29:28.974 "method": "bdev_nvme_attach_controller" 00:29:28.974 } 00:29:28.974 EOF 00:29:28.974 )") 00:29:28.974 03:30:03 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:28.974 03:30:03 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:28.974 03:30:03 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:29:28.974 03:30:03 -- target/dif.sh@82 -- # gen_fio_conf 00:29:28.974 03:30:03 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:28.974 03:30:03 -- common/autotest_common.sh@1325 -- # local sanitizers 00:29:28.974 03:30:03 -- target/dif.sh@54 -- # local file 00:29:28.974 03:30:03 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:28.974 03:30:03 -- target/dif.sh@56 -- # cat 00:29:28.974 03:30:03 -- common/autotest_common.sh@1327 -- # shift 00:29:28.974 03:30:03 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:29:28.974 03:30:03 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:29:28.974 03:30:03 -- nvmf/common.sh@543 -- # cat 00:29:28.974 03:30:03 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:28.974 03:30:03 -- target/dif.sh@72 -- # (( file = 1 )) 00:29:28.974 03:30:03 -- target/dif.sh@72 -- # (( file <= files )) 00:29:28.974 03:30:03 -- common/autotest_common.sh@1331 -- # grep libasan 00:29:28.974 03:30:03 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:29:28.974 03:30:03 -- nvmf/common.sh@545 -- # jq . 
00:29:28.974 03:30:03 -- nvmf/common.sh@546 -- # IFS=, 00:29:28.974 03:30:03 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:29:28.974 "params": { 00:29:28.974 "name": "Nvme0", 00:29:28.974 "trtype": "tcp", 00:29:28.974 "traddr": "10.0.0.2", 00:29:28.974 "adrfam": "ipv4", 00:29:28.974 "trsvcid": "4420", 00:29:28.974 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:28.974 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:28.974 "hdgst": false, 00:29:28.974 "ddgst": false 00:29:28.974 }, 00:29:28.974 "method": "bdev_nvme_attach_controller" 00:29:28.974 }' 00:29:28.974 03:30:03 -- common/autotest_common.sh@1331 -- # asan_lib= 00:29:28.974 03:30:03 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:29:28.974 03:30:03 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:29:28.974 03:30:03 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:28.974 03:30:03 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:29:28.974 03:30:03 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:29:28.974 03:30:03 -- common/autotest_common.sh@1331 -- # asan_lib= 00:29:28.974 03:30:03 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:29:28.974 03:30:03 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:28.974 03:30:03 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:29.231 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:29:29.231 ... 
00:29:29.231 fio-3.35 00:29:29.231 Starting 3 threads 00:29:29.231 EAL: No free 2048 kB hugepages reported on node 1 00:29:35.786 00:29:35.786 filename0: (groupid=0, jobs=1): err= 0: pid=1636090: Thu Apr 25 03:30:09 2024 00:29:35.786 read: IOPS=187, BW=23.4MiB/s (24.6MB/s)(118MiB/5048msec) 00:29:35.786 slat (usec): min=5, max=121, avg=15.36, stdev= 6.55 00:29:35.786 clat (usec): min=6422, max=55157, avg=15939.35, stdev=14044.54 00:29:35.786 lat (usec): min=6434, max=55170, avg=15954.71, stdev=14044.48 00:29:35.786 clat percentiles (usec): 00:29:35.786 | 1.00th=[ 6980], 5.00th=[ 7373], 10.00th=[ 7832], 20.00th=[ 8979], 00:29:35.786 | 30.00th=[ 9372], 40.00th=[ 9896], 50.00th=[10552], 60.00th=[11731], 00:29:35.786 | 70.00th=[12780], 80.00th=[13698], 90.00th=[50594], 95.00th=[51643], 00:29:35.786 | 99.00th=[53216], 99.50th=[53740], 99.90th=[55313], 99.95th=[55313], 00:29:35.786 | 99.99th=[55313] 00:29:35.786 bw ( KiB/s): min=19968, max=30720, per=32.55%, avg=24140.80, stdev=3567.81, samples=10 00:29:35.786 iops : min= 156, max= 240, avg=188.60, stdev=27.87, samples=10 00:29:35.786 lat (msec) : 10=43.55%, 20=43.23%, 50=2.54%, 100=10.68% 00:29:35.786 cpu : usr=94.43%, sys=5.03%, ctx=7, majf=0, minf=137 00:29:35.786 IO depths : 1=2.7%, 2=97.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:35.786 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:35.786 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:35.786 issued rwts: total=946,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:35.786 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:35.786 filename0: (groupid=0, jobs=1): err= 0: pid=1636091: Thu Apr 25 03:30:09 2024 00:29:35.786 read: IOPS=206, BW=25.8MiB/s (27.1MB/s)(130MiB/5047msec) 00:29:35.786 slat (nsec): min=4775, max=39541, avg=15771.63, stdev=4249.05 00:29:35.786 clat (usec): min=5897, max=60225, avg=14450.14, stdev=13083.87 00:29:35.786 lat (usec): min=5924, max=60238, avg=14465.92, 
stdev=13083.92 00:29:35.786 clat percentiles (usec): 00:29:35.786 | 1.00th=[ 6390], 5.00th=[ 6783], 10.00th=[ 7111], 20.00th=[ 8291], 00:29:35.786 | 30.00th=[ 9110], 40.00th=[ 9503], 50.00th=[10028], 60.00th=[10945], 00:29:35.786 | 70.00th=[12256], 80.00th=[13173], 90.00th=[49546], 95.00th=[52691], 00:29:35.786 | 99.00th=[54789], 99.50th=[55313], 99.90th=[60031], 99.95th=[60031], 00:29:35.786 | 99.99th=[60031] 00:29:35.786 bw ( KiB/s): min=18981, max=33024, per=35.90%, avg=26627.70, stdev=5370.81, samples=10 00:29:35.786 iops : min= 148, max= 258, avg=208.00, stdev=42.01, samples=10 00:29:35.786 lat (msec) : 10=50.62%, 20=39.12%, 50=0.86%, 100=9.40% 00:29:35.786 cpu : usr=94.61%, sys=4.78%, ctx=11, majf=0, minf=102 00:29:35.786 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:35.786 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:35.786 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:35.786 issued rwts: total=1043,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:35.786 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:35.786 filename0: (groupid=0, jobs=1): err= 0: pid=1636092: Thu Apr 25 03:30:09 2024 00:29:35.786 read: IOPS=186, BW=23.4MiB/s (24.5MB/s)(117MiB/5006msec) 00:29:35.786 slat (nsec): min=4106, max=40923, avg=13545.47, stdev=5025.31 00:29:35.786 clat (usec): min=6820, max=90926, avg=16019.40, stdev=14532.45 00:29:35.786 lat (usec): min=6828, max=90949, avg=16032.94, stdev=14532.57 00:29:35.786 clat percentiles (usec): 00:29:35.786 | 1.00th=[ 6915], 5.00th=[ 7308], 10.00th=[ 8029], 20.00th=[ 8979], 00:29:35.786 | 30.00th=[ 9503], 40.00th=[ 9765], 50.00th=[10683], 60.00th=[11469], 00:29:35.786 | 70.00th=[12387], 80.00th=[13435], 90.00th=[50594], 95.00th=[52691], 00:29:35.786 | 99.00th=[55313], 99.50th=[56361], 99.90th=[90702], 99.95th=[90702], 00:29:35.786 | 99.99th=[90702] 00:29:35.786 bw ( KiB/s): min=16896, max=35328, per=32.20%, avg=23884.80, 
stdev=5692.51, samples=10 00:29:35.786 iops : min= 132, max= 276, avg=186.60, stdev=44.47, samples=10 00:29:35.786 lat (msec) : 10=43.06%, 20=43.70%, 50=1.60%, 100=11.65% 00:29:35.786 cpu : usr=93.45%, sys=5.25%, ctx=191, majf=0, minf=88 00:29:35.786 IO depths : 1=5.9%, 2=94.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:35.786 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:35.786 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:35.786 issued rwts: total=936,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:35.786 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:35.786 00:29:35.786 Run status group 0 (all jobs): 00:29:35.786 READ: bw=72.4MiB/s (75.9MB/s), 23.4MiB/s-25.8MiB/s (24.5MB/s-27.1MB/s), io=366MiB (383MB), run=5006-5048msec 00:29:35.786 03:30:09 -- target/dif.sh@107 -- # destroy_subsystems 0 00:29:35.786 03:30:09 -- target/dif.sh@43 -- # local sub 00:29:35.786 03:30:09 -- target/dif.sh@45 -- # for sub in "$@" 00:29:35.786 03:30:09 -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:35.786 03:30:09 -- target/dif.sh@36 -- # local sub_id=0 00:29:35.786 03:30:09 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:35.786 03:30:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:35.786 03:30:09 -- common/autotest_common.sh@10 -- # set +x 00:29:35.786 03:30:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:35.786 03:30:09 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:35.786 03:30:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:35.786 03:30:09 -- common/autotest_common.sh@10 -- # set +x 00:29:35.786 03:30:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:35.786 03:30:09 -- target/dif.sh@109 -- # NULL_DIF=2 00:29:35.786 03:30:09 -- target/dif.sh@109 -- # bs=4k 00:29:35.786 03:30:09 -- target/dif.sh@109 -- # numjobs=8 00:29:35.786 03:30:09 -- target/dif.sh@109 -- # iodepth=16 00:29:35.786 
03:30:09 -- target/dif.sh@109 -- # runtime= 00:29:35.786 03:30:09 -- target/dif.sh@109 -- # files=2 00:29:35.786 03:30:09 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:29:35.786 03:30:09 -- target/dif.sh@28 -- # local sub 00:29:35.786 03:30:09 -- target/dif.sh@30 -- # for sub in "$@" 00:29:35.786 03:30:09 -- target/dif.sh@31 -- # create_subsystem 0 00:29:35.786 03:30:09 -- target/dif.sh@18 -- # local sub_id=0 00:29:35.786 03:30:09 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:29:35.786 03:30:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:35.786 03:30:09 -- common/autotest_common.sh@10 -- # set +x 00:29:35.786 bdev_null0 00:29:35.786 03:30:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:35.786 03:30:09 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:35.786 03:30:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:35.786 03:30:09 -- common/autotest_common.sh@10 -- # set +x 00:29:35.786 03:30:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:35.786 03:30:09 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:35.786 03:30:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:35.786 03:30:09 -- common/autotest_common.sh@10 -- # set +x 00:29:35.786 03:30:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:35.786 03:30:09 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:35.786 03:30:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:35.786 03:30:09 -- common/autotest_common.sh@10 -- # set +x 00:29:35.787 [2024-04-25 03:30:09.396437] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:35.787 03:30:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:35.787 03:30:09 -- target/dif.sh@30 -- # 
for sub in "$@" 00:29:35.787 03:30:09 -- target/dif.sh@31 -- # create_subsystem 1 00:29:35.787 03:30:09 -- target/dif.sh@18 -- # local sub_id=1 00:29:35.787 03:30:09 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:29:35.787 03:30:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:35.787 03:30:09 -- common/autotest_common.sh@10 -- # set +x 00:29:35.787 bdev_null1 00:29:35.787 03:30:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:35.787 03:30:09 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:29:35.787 03:30:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:35.787 03:30:09 -- common/autotest_common.sh@10 -- # set +x 00:29:35.787 03:30:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:35.787 03:30:09 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:29:35.787 03:30:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:35.787 03:30:09 -- common/autotest_common.sh@10 -- # set +x 00:29:35.787 03:30:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:35.787 03:30:09 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:35.787 03:30:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:35.787 03:30:09 -- common/autotest_common.sh@10 -- # set +x 00:29:35.787 03:30:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:35.787 03:30:09 -- target/dif.sh@30 -- # for sub in "$@" 00:29:35.787 03:30:09 -- target/dif.sh@31 -- # create_subsystem 2 00:29:35.787 03:30:09 -- target/dif.sh@18 -- # local sub_id=2 00:29:35.787 03:30:09 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:29:35.787 03:30:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:35.787 03:30:09 -- common/autotest_common.sh@10 -- # set +x 
00:29:35.787 bdev_null2 00:29:35.787 03:30:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:35.787 03:30:09 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:29:35.787 03:30:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:35.787 03:30:09 -- common/autotest_common.sh@10 -- # set +x 00:29:35.787 03:30:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:35.787 03:30:09 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:29:35.787 03:30:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:35.787 03:30:09 -- common/autotest_common.sh@10 -- # set +x 00:29:35.787 03:30:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:35.787 03:30:09 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:35.787 03:30:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:35.787 03:30:09 -- common/autotest_common.sh@10 -- # set +x 00:29:35.787 03:30:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:35.787 03:30:09 -- target/dif.sh@112 -- # fio /dev/fd/62 00:29:35.787 03:30:09 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:29:35.787 03:30:09 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:29:35.787 03:30:09 -- nvmf/common.sh@521 -- # config=() 00:29:35.787 03:30:09 -- nvmf/common.sh@521 -- # local subsystem config 00:29:35.787 03:30:09 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:29:35.787 03:30:09 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:35.787 03:30:09 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:29:35.787 { 00:29:35.787 "params": { 00:29:35.787 "name": "Nvme$subsystem", 00:29:35.787 "trtype": "$TEST_TRANSPORT", 00:29:35.787 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:35.787 "adrfam": "ipv4", 00:29:35.787 "trsvcid": "$NVMF_PORT", 00:29:35.787 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:35.787 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:35.787 "hdgst": ${hdgst:-false}, 00:29:35.787 "ddgst": ${ddgst:-false} 00:29:35.787 }, 00:29:35.787 "method": "bdev_nvme_attach_controller" 00:29:35.787 } 00:29:35.787 EOF 00:29:35.787 )") 00:29:35.787 03:30:09 -- target/dif.sh@82 -- # gen_fio_conf 00:29:35.787 03:30:09 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:35.787 03:30:09 -- target/dif.sh@54 -- # local file 00:29:35.787 03:30:09 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:29:35.787 03:30:09 -- target/dif.sh@56 -- # cat 00:29:35.787 03:30:09 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:35.787 03:30:09 -- common/autotest_common.sh@1325 -- # local sanitizers 00:29:35.787 03:30:09 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:35.787 03:30:09 -- common/autotest_common.sh@1327 -- # shift 00:29:35.787 03:30:09 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:29:35.787 03:30:09 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:29:35.787 03:30:09 -- nvmf/common.sh@543 -- # cat 00:29:35.787 03:30:09 -- target/dif.sh@72 -- # (( file = 1 )) 00:29:35.787 03:30:09 -- target/dif.sh@72 -- # (( file <= files )) 00:29:35.787 03:30:09 -- target/dif.sh@73 -- # cat 00:29:35.787 03:30:09 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:35.787 03:30:09 -- common/autotest_common.sh@1331 -- # grep libasan 00:29:35.787 03:30:09 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:29:35.787 03:30:09 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:29:35.787 03:30:09 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 
00:29:35.787 { 00:29:35.787 "params": { 00:29:35.787 "name": "Nvme$subsystem", 00:29:35.787 "trtype": "$TEST_TRANSPORT", 00:29:35.787 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:35.787 "adrfam": "ipv4", 00:29:35.787 "trsvcid": "$NVMF_PORT", 00:29:35.787 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:35.787 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:35.787 "hdgst": ${hdgst:-false}, 00:29:35.787 "ddgst": ${ddgst:-false} 00:29:35.787 }, 00:29:35.787 "method": "bdev_nvme_attach_controller" 00:29:35.787 } 00:29:35.787 EOF 00:29:35.787 )") 00:29:35.787 03:30:09 -- target/dif.sh@72 -- # (( file++ )) 00:29:35.787 03:30:09 -- target/dif.sh@72 -- # (( file <= files )) 00:29:35.787 03:30:09 -- target/dif.sh@73 -- # cat 00:29:35.787 03:30:09 -- nvmf/common.sh@543 -- # cat 00:29:35.787 03:30:09 -- target/dif.sh@72 -- # (( file++ )) 00:29:35.787 03:30:09 -- target/dif.sh@72 -- # (( file <= files )) 00:29:35.787 03:30:09 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:29:35.787 03:30:09 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:29:35.787 { 00:29:35.787 "params": { 00:29:35.787 "name": "Nvme$subsystem", 00:29:35.787 "trtype": "$TEST_TRANSPORT", 00:29:35.787 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:35.787 "adrfam": "ipv4", 00:29:35.787 "trsvcid": "$NVMF_PORT", 00:29:35.787 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:35.787 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:35.787 "hdgst": ${hdgst:-false}, 00:29:35.787 "ddgst": ${ddgst:-false} 00:29:35.787 }, 00:29:35.787 "method": "bdev_nvme_attach_controller" 00:29:35.787 } 00:29:35.787 EOF 00:29:35.787 )") 00:29:35.787 03:30:09 -- nvmf/common.sh@543 -- # cat 00:29:35.787 03:30:09 -- nvmf/common.sh@545 -- # jq . 
00:29:35.787 03:30:09 -- nvmf/common.sh@546 -- # IFS=, 00:29:35.787 03:30:09 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:29:35.787 "params": { 00:29:35.787 "name": "Nvme0", 00:29:35.787 "trtype": "tcp", 00:29:35.787 "traddr": "10.0.0.2", 00:29:35.787 "adrfam": "ipv4", 00:29:35.787 "trsvcid": "4420", 00:29:35.787 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:35.787 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:35.787 "hdgst": false, 00:29:35.787 "ddgst": false 00:29:35.787 }, 00:29:35.787 "method": "bdev_nvme_attach_controller" 00:29:35.787 },{ 00:29:35.787 "params": { 00:29:35.787 "name": "Nvme1", 00:29:35.787 "trtype": "tcp", 00:29:35.787 "traddr": "10.0.0.2", 00:29:35.787 "adrfam": "ipv4", 00:29:35.787 "trsvcid": "4420", 00:29:35.787 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:35.787 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:35.787 "hdgst": false, 00:29:35.787 "ddgst": false 00:29:35.787 }, 00:29:35.787 "method": "bdev_nvme_attach_controller" 00:29:35.787 },{ 00:29:35.787 "params": { 00:29:35.787 "name": "Nvme2", 00:29:35.787 "trtype": "tcp", 00:29:35.787 "traddr": "10.0.0.2", 00:29:35.787 "adrfam": "ipv4", 00:29:35.787 "trsvcid": "4420", 00:29:35.787 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:35.787 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:35.787 "hdgst": false, 00:29:35.787 "ddgst": false 00:29:35.787 }, 00:29:35.787 "method": "bdev_nvme_attach_controller" 00:29:35.787 }' 00:29:35.787 03:30:09 -- common/autotest_common.sh@1331 -- # asan_lib= 00:29:35.787 03:30:09 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:29:35.787 03:30:09 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:29:35.787 03:30:09 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:35.787 03:30:09 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:29:35.787 03:30:09 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:29:35.787 03:30:09 -- 
common/autotest_common.sh@1331 -- # asan_lib= 00:29:35.787 03:30:09 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:29:35.787 03:30:09 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:35.787 03:30:09 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:35.787 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:29:35.787 ... 00:29:35.787 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:29:35.787 ... 00:29:35.787 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:29:35.787 ... 00:29:35.787 fio-3.35 00:29:35.787 Starting 24 threads 00:29:35.788 EAL: No free 2048 kB hugepages reported on node 1 00:29:47.997 00:29:47.997 filename0: (groupid=0, jobs=1): err= 0: pid=1637427: Thu Apr 25 03:30:20 2024 00:29:47.997 read: IOPS=459, BW=1836KiB/s (1881kB/s)(17.9MiB/10002msec) 00:29:47.997 slat (usec): min=7, max=329, avg=32.22, stdev=16.65 00:29:47.997 clat (usec): min=9268, max=71975, avg=34584.54, stdev=6593.86 00:29:47.997 lat (usec): min=9276, max=71983, avg=34616.76, stdev=6590.77 00:29:47.997 clat percentiles (usec): 00:29:47.997 | 1.00th=[13960], 5.00th=[31065], 10.00th=[32637], 20.00th=[32900], 00:29:47.997 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:29:47.997 | 70.00th=[33817], 80.00th=[34866], 90.00th=[42730], 95.00th=[43779], 00:29:47.997 | 99.00th=[68682], 99.50th=[70779], 99.90th=[71828], 99.95th=[71828], 00:29:47.997 | 99.99th=[71828] 00:29:47.997 bw ( KiB/s): min= 1408, max= 2048, per=4.18%, avg=1832.42, stdev=161.80, samples=19 00:29:47.997 iops : min= 352, max= 512, avg=458.11, stdev=40.45, samples=19 00:29:47.997 lat (msec) : 10=0.35%, 20=3.27%, 50=94.32%, 100=2.07% 00:29:47.998 
cpu : usr=89.14%, sys=4.73%, ctx=183, majf=0, minf=59 00:29:47.998 IO depths : 1=4.5%, 2=10.1%, 4=22.8%, 8=54.2%, 16=8.4%, 32=0.0%, >=64=0.0% 00:29:47.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:47.998 complete : 0=0.0%, 4=93.7%, 8=0.8%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:47.998 issued rwts: total=4592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:47.998 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:47.998 filename0: (groupid=0, jobs=1): err= 0: pid=1637428: Thu Apr 25 03:30:20 2024 00:29:47.998 read: IOPS=455, BW=1823KiB/s (1867kB/s)(17.8MiB/10006msec) 00:29:47.998 slat (usec): min=13, max=119, avg=41.37, stdev=15.36 00:29:47.998 clat (usec): min=15393, max=59925, avg=34714.16, stdev=3848.93 00:29:47.998 lat (usec): min=15422, max=59943, avg=34755.53, stdev=3848.96 00:29:47.998 clat percentiles (usec): 00:29:47.998 | 1.00th=[31851], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:29:47.998 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:29:47.998 | 70.00th=[33817], 80.00th=[34341], 90.00th=[42730], 95.00th=[42730], 00:29:47.998 | 99.00th=[43779], 99.50th=[46400], 99.90th=[59507], 99.95th=[60031], 00:29:47.998 | 99.99th=[60031] 00:29:47.998 bw ( KiB/s): min= 1408, max= 1920, per=4.13%, avg=1812.21, stdev=161.14, samples=19 00:29:47.998 iops : min= 352, max= 480, avg=453.05, stdev=40.28, samples=19 00:29:47.998 lat (msec) : 20=0.35%, 50=99.30%, 100=0.35% 00:29:47.998 cpu : usr=98.37%, sys=1.24%, ctx=15, majf=0, minf=42 00:29:47.998 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:29:47.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:47.998 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:47.998 issued rwts: total=4560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:47.998 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:47.998 filename0: (groupid=0, jobs=1): err= 0: 
pid=1637429: Thu Apr 25 03:30:20 2024 00:29:47.998 read: IOPS=455, BW=1823KiB/s (1867kB/s)(17.8MiB/10004msec) 00:29:47.998 slat (usec): min=8, max=149, avg=37.52, stdev=22.35 00:29:47.998 clat (usec): min=20194, max=54679, avg=34794.35, stdev=3716.41 00:29:47.998 lat (usec): min=20202, max=54711, avg=34831.87, stdev=3715.11 00:29:47.998 clat percentiles (usec): 00:29:47.998 | 1.00th=[28181], 5.00th=[31589], 10.00th=[32375], 20.00th=[32900], 00:29:47.998 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:29:47.998 | 70.00th=[33817], 80.00th=[35390], 90.00th=[42730], 95.00th=[43254], 00:29:47.998 | 99.00th=[44303], 99.50th=[46400], 99.90th=[53740], 99.95th=[54789], 00:29:47.998 | 99.99th=[54789] 00:29:47.998 bw ( KiB/s): min= 1408, max= 1920, per=4.15%, avg=1818.95, stdev=173.15, samples=19 00:29:47.998 iops : min= 352, max= 480, avg=454.74, stdev=43.29, samples=19 00:29:47.998 lat (msec) : 50=99.82%, 100=0.18% 00:29:47.998 cpu : usr=98.14%, sys=1.40%, ctx=25, majf=0, minf=43 00:29:47.998 IO depths : 1=4.2%, 2=10.1%, 4=24.1%, 8=53.2%, 16=8.3%, 32=0.0%, >=64=0.0% 00:29:47.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:47.998 complete : 0=0.0%, 4=94.1%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:47.998 issued rwts: total=4560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:47.998 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:47.998 filename0: (groupid=0, jobs=1): err= 0: pid=1637431: Thu Apr 25 03:30:20 2024 00:29:47.998 read: IOPS=455, BW=1822KiB/s (1865kB/s)(17.8MiB/10009msec) 00:29:47.998 slat (usec): min=4, max=174, avg=45.79, stdev=25.69 00:29:47.998 clat (usec): min=9554, max=63039, avg=34694.43, stdev=4140.69 00:29:47.998 lat (usec): min=9641, max=63055, avg=34740.22, stdev=4137.25 00:29:47.998 clat percentiles (usec): 00:29:47.998 | 1.00th=[29230], 5.00th=[32113], 10.00th=[32637], 20.00th=[32900], 00:29:47.998 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 
00:29:47.998 | 70.00th=[33817], 80.00th=[34866], 90.00th=[42730], 95.00th=[43254], 00:29:47.998 | 99.00th=[44303], 99.50th=[47449], 99.90th=[63177], 99.95th=[63177], 00:29:47.998 | 99.99th=[63177] 00:29:47.998 bw ( KiB/s): min= 1408, max= 1920, per=4.13%, avg=1811.37, stdev=154.17, samples=19 00:29:47.998 iops : min= 352, max= 480, avg=452.84, stdev=38.54, samples=19 00:29:47.998 lat (msec) : 10=0.09%, 20=0.35%, 50=99.12%, 100=0.44% 00:29:47.998 cpu : usr=98.21%, sys=1.26%, ctx=47, majf=0, minf=47 00:29:47.998 IO depths : 1=4.7%, 2=10.9%, 4=25.0%, 8=51.6%, 16=7.8%, 32=0.0%, >=64=0.0% 00:29:47.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:47.998 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:47.998 issued rwts: total=4558,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:47.998 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:47.998 filename0: (groupid=0, jobs=1): err= 0: pid=1637432: Thu Apr 25 03:30:20 2024 00:29:47.998 read: IOPS=453, BW=1815KiB/s (1858kB/s)(17.7MiB/10007msec) 00:29:47.998 slat (usec): min=7, max=158, avg=39.86, stdev=24.05 00:29:47.998 clat (usec): min=9730, max=61963, avg=34927.00, stdev=4781.69 00:29:47.998 lat (usec): min=9777, max=61972, avg=34966.86, stdev=4780.76 00:29:47.998 clat percentiles (usec): 00:29:47.998 | 1.00th=[18744], 5.00th=[31851], 10.00th=[32637], 20.00th=[32900], 00:29:47.998 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33817], 00:29:47.998 | 70.00th=[33817], 80.00th=[35390], 90.00th=[42730], 95.00th=[43254], 00:29:47.998 | 99.00th=[53216], 99.50th=[55313], 99.90th=[60031], 99.95th=[60031], 00:29:47.998 | 99.99th=[62129] 00:29:47.998 bw ( KiB/s): min= 1408, max= 1936, per=4.13%, avg=1810.53, stdev=174.48, samples=19 00:29:47.998 iops : min= 352, max= 484, avg=452.63, stdev=43.62, samples=19 00:29:47.998 lat (msec) : 10=0.09%, 20=0.99%, 50=97.58%, 100=1.34% 00:29:47.998 cpu : usr=95.97%, sys=2.36%, ctx=76, majf=0, minf=44 
00:29:47.998 IO depths : 1=4.1%, 2=10.0%, 4=24.1%, 8=53.4%, 16=8.4%, 32=0.0%, >=64=0.0% 00:29:47.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:47.998 complete : 0=0.0%, 4=94.0%, 8=0.3%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:47.998 issued rwts: total=4540,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:47.998 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:47.998 filename0: (groupid=0, jobs=1): err= 0: pid=1637433: Thu Apr 25 03:30:20 2024 00:29:47.998 read: IOPS=455, BW=1822KiB/s (1866kB/s)(17.8MiB/10010msec) 00:29:47.998 slat (usec): min=8, max=145, avg=41.54, stdev=22.07 00:29:47.998 clat (usec): min=11039, max=56060, avg=34759.30, stdev=4532.81 00:29:47.998 lat (usec): min=11146, max=56105, avg=34800.83, stdev=4530.22 00:29:47.998 clat percentiles (usec): 00:29:47.998 | 1.00th=[18220], 5.00th=[32113], 10.00th=[32637], 20.00th=[32900], 00:29:47.998 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33817], 00:29:47.998 | 70.00th=[33817], 80.00th=[34866], 90.00th=[42730], 95.00th=[43254], 00:29:47.998 | 99.00th=[48497], 99.50th=[52167], 99.90th=[55837], 99.95th=[55837], 00:29:47.998 | 99.99th=[55837] 00:29:47.998 bw ( KiB/s): min= 1408, max= 1936, per=4.13%, avg=1812.21, stdev=161.23, samples=19 00:29:47.998 iops : min= 352, max= 484, avg=453.05, stdev=40.31, samples=19 00:29:47.998 lat (msec) : 20=1.12%, 50=98.00%, 100=0.88% 00:29:47.998 cpu : usr=95.34%, sys=2.36%, ctx=69, majf=0, minf=41 00:29:47.998 IO depths : 1=4.9%, 2=11.0%, 4=24.6%, 8=51.8%, 16=7.7%, 32=0.0%, >=64=0.0% 00:29:47.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:47.998 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:47.998 issued rwts: total=4560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:47.998 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:47.998 filename0: (groupid=0, jobs=1): err= 0: pid=1637434: Thu Apr 25 03:30:20 2024 00:29:47.998 read: 
IOPS=453, BW=1813KiB/s (1857kB/s)(17.7MiB/10006msec) 00:29:47.998 slat (usec): min=7, max=103, avg=29.88, stdev=16.90 00:29:47.998 clat (usec): min=6696, max=99778, avg=35092.46, stdev=6240.37 00:29:47.998 lat (usec): min=6706, max=99845, avg=35122.33, stdev=6241.85 00:29:47.998 clat percentiles (msec): 00:29:47.998 | 1.00th=[ 20], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:29:47.998 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:29:47.998 | 70.00th=[ 34], 80.00th=[ 37], 90.00th=[ 43], 95.00th=[ 44], 00:29:47.998 | 99.00th=[ 56], 99.50th=[ 61], 99.90th=[ 86], 99.95th=[ 100], 00:29:47.998 | 99.99th=[ 101] 00:29:47.998 bw ( KiB/s): min= 1424, max= 1968, per=4.10%, avg=1797.89, stdev=163.60, samples=19 00:29:47.998 iops : min= 356, max= 492, avg=449.47, stdev=40.90, samples=19 00:29:47.998 lat (msec) : 10=0.60%, 20=0.64%, 50=96.85%, 100=1.92% 00:29:47.998 cpu : usr=98.13%, sys=1.46%, ctx=21, majf=0, minf=61 00:29:47.998 IO depths : 1=0.8%, 2=4.0%, 4=14.8%, 8=66.4%, 16=13.9%, 32=0.0%, >=64=0.0% 00:29:47.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:47.998 complete : 0=0.0%, 4=92.3%, 8=4.1%, 16=3.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:47.998 issued rwts: total=4536,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:47.998 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:47.998 filename0: (groupid=0, jobs=1): err= 0: pid=1637435: Thu Apr 25 03:30:20 2024 00:29:47.998 read: IOPS=459, BW=1840KiB/s (1884kB/s)(18.0MiB/10005msec) 00:29:47.998 slat (usec): min=5, max=379, avg=33.58, stdev=23.38 00:29:47.998 clat (usec): min=8949, max=48598, avg=34502.55, stdev=4483.37 00:29:47.998 lat (usec): min=8965, max=48613, avg=34536.13, stdev=4480.35 00:29:47.998 clat percentiles (usec): 00:29:47.998 | 1.00th=[17957], 5.00th=[31327], 10.00th=[32375], 20.00th=[32900], 00:29:47.998 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:29:47.998 | 70.00th=[33817], 80.00th=[35390], 90.00th=[42730], 
95.00th=[43254], 00:29:47.998 | 99.00th=[46924], 99.50th=[47449], 99.90th=[48497], 99.95th=[48497], 00:29:47.998 | 99.99th=[48497] 00:29:47.998 bw ( KiB/s): min= 1408, max= 2052, per=4.19%, avg=1836.84, stdev=165.58, samples=19 00:29:47.998 iops : min= 352, max= 513, avg=459.21, stdev=41.40, samples=19 00:29:47.998 lat (msec) : 10=0.30%, 20=1.00%, 50=98.70% 00:29:47.998 cpu : usr=90.04%, sys=4.12%, ctx=94, majf=0, minf=71 00:29:47.998 IO depths : 1=4.5%, 2=10.2%, 4=23.6%, 8=53.7%, 16=8.0%, 32=0.0%, >=64=0.0% 00:29:47.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:47.998 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:47.998 issued rwts: total=4602,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:47.998 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:47.998 filename1: (groupid=0, jobs=1): err= 0: pid=1637436: Thu Apr 25 03:30:20 2024 00:29:47.998 read: IOPS=476, BW=1906KiB/s (1951kB/s)(18.6MiB/10010msec) 00:29:47.998 slat (usec): min=4, max=140, avg=17.89, stdev=18.01 00:29:47.998 clat (usec): min=3879, max=60915, avg=33422.57, stdev=6565.72 00:29:47.998 lat (usec): min=3888, max=60939, avg=33440.46, stdev=6565.44 00:29:47.998 clat percentiles (usec): 00:29:47.999 | 1.00th=[ 8848], 5.00th=[19006], 10.00th=[30802], 20.00th=[32637], 00:29:47.999 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33817], 00:29:47.999 | 70.00th=[33817], 80.00th=[34341], 90.00th=[42730], 95.00th=[43254], 00:29:47.999 | 99.00th=[46924], 99.50th=[50070], 99.90th=[57410], 99.95th=[57410], 00:29:47.999 | 99.99th=[61080] 00:29:47.999 bw ( KiB/s): min= 1408, max= 2496, per=4.34%, avg=1901.20, stdev=262.95, samples=20 00:29:47.999 iops : min= 352, max= 624, avg=475.30, stdev=65.74, samples=20 00:29:47.999 lat (msec) : 4=0.15%, 10=1.64%, 20=4.09%, 50=93.48%, 100=0.65% 00:29:47.999 cpu : usr=98.13%, sys=1.44%, ctx=16, majf=0, minf=43 00:29:47.999 IO depths : 1=4.7%, 2=10.0%, 4=22.2%, 8=55.0%, 16=8.1%, 32=0.0%, 
>=64=0.0% 00:29:47.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:47.999 complete : 0=0.0%, 4=93.5%, 8=0.8%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:47.999 issued rwts: total=4769,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:47.999 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:47.999 filename1: (groupid=0, jobs=1): err= 0: pid=1637437: Thu Apr 25 03:30:20 2024 00:29:47.999 read: IOPS=456, BW=1828KiB/s (1871kB/s)(17.9MiB/10007msec) 00:29:47.999 slat (usec): min=7, max=1065, avg=24.25, stdev=27.55 00:29:47.999 clat (usec): min=18591, max=47677, avg=34888.48, stdev=3917.28 00:29:47.999 lat (usec): min=18624, max=47698, avg=34912.72, stdev=3915.80 00:29:47.999 clat percentiles (usec): 00:29:47.999 | 1.00th=[24249], 5.00th=[32113], 10.00th=[32637], 20.00th=[33162], 00:29:47.999 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:29:47.999 | 70.00th=[33817], 80.00th=[35914], 90.00th=[42730], 95.00th=[43254], 00:29:47.999 | 99.00th=[44303], 99.50th=[46400], 99.90th=[47449], 99.95th=[47449], 00:29:47.999 | 99.99th=[47449] 00:29:47.999 bw ( KiB/s): min= 1408, max= 2000, per=4.15%, avg=1821.47, stdev=170.27, samples=19 00:29:47.999 iops : min= 352, max= 500, avg=455.37, stdev=42.57, samples=19 00:29:47.999 lat (msec) : 20=0.52%, 50=99.48% 00:29:47.999 cpu : usr=92.51%, sys=3.59%, ctx=117, majf=0, minf=62 00:29:47.999 IO depths : 1=0.3%, 2=0.5%, 4=5.5%, 8=80.0%, 16=13.7%, 32=0.0%, >=64=0.0% 00:29:47.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:47.999 complete : 0=0.0%, 4=89.2%, 8=6.6%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:47.999 issued rwts: total=4572,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:47.999 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:47.999 filename1: (groupid=0, jobs=1): err= 0: pid=1637438: Thu Apr 25 03:30:20 2024 00:29:47.999 read: IOPS=455, BW=1823KiB/s (1867kB/s)(17.8MiB/10006msec) 00:29:47.999 slat (usec): min=9, max=121, 
avg=41.29, stdev=15.35 00:29:47.999 clat (usec): min=15536, max=60159, avg=34721.70, stdev=3855.97 00:29:47.999 lat (usec): min=15570, max=60178, avg=34762.99, stdev=3855.71 00:29:47.999 clat percentiles (usec): 00:29:47.999 | 1.00th=[31851], 5.00th=[32637], 10.00th=[32637], 20.00th=[32900], 00:29:47.999 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33817], 00:29:47.999 | 70.00th=[33817], 80.00th=[34341], 90.00th=[42730], 95.00th=[43254], 00:29:47.999 | 99.00th=[43779], 99.50th=[46400], 99.90th=[60031], 99.95th=[60031], 00:29:47.999 | 99.99th=[60031] 00:29:47.999 bw ( KiB/s): min= 1408, max= 1920, per=4.13%, avg=1812.21, stdev=161.14, samples=19 00:29:47.999 iops : min= 352, max= 480, avg=453.05, stdev=40.28, samples=19 00:29:47.999 lat (msec) : 20=0.35%, 50=99.30%, 100=0.35% 00:29:47.999 cpu : usr=98.33%, sys=1.26%, ctx=17, majf=0, minf=41 00:29:47.999 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:29:47.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:47.999 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:47.999 issued rwts: total=4560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:47.999 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:47.999 filename1: (groupid=0, jobs=1): err= 0: pid=1637439: Thu Apr 25 03:30:20 2024 00:29:47.999 read: IOPS=453, BW=1814KiB/s (1858kB/s)(17.7MiB/10006msec) 00:29:47.999 slat (usec): min=8, max=191, avg=45.19, stdev=26.47 00:29:47.999 clat (usec): min=11298, max=62073, avg=34874.87, stdev=4653.09 00:29:47.999 lat (usec): min=11334, max=62089, avg=34920.06, stdev=4649.76 00:29:47.999 clat percentiles (usec): 00:29:47.999 | 1.00th=[24773], 5.00th=[32113], 10.00th=[32375], 20.00th=[32900], 00:29:47.999 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33817], 00:29:47.999 | 70.00th=[33817], 80.00th=[35390], 90.00th=[42730], 95.00th=[43254], 00:29:47.999 | 99.00th=[50070], 99.50th=[60031], 
99.90th=[61080], 99.95th=[61080], 00:29:47.999 | 99.99th=[62129] 00:29:47.999 bw ( KiB/s): min= 1408, max= 1920, per=4.11%, avg=1802.95, stdev=156.04, samples=19 00:29:47.999 iops : min= 352, max= 480, avg=450.74, stdev=39.01, samples=19 00:29:47.999 lat (msec) : 20=0.71%, 50=98.33%, 100=0.97% 00:29:47.999 cpu : usr=98.39%, sys=1.20%, ctx=15, majf=0, minf=49 00:29:47.999 IO depths : 1=4.1%, 2=9.8%, 4=23.4%, 8=54.1%, 16=8.6%, 32=0.0%, >=64=0.0% 00:29:47.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:47.999 complete : 0=0.0%, 4=93.9%, 8=0.5%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:47.999 issued rwts: total=4538,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:47.999 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:47.999 filename1: (groupid=0, jobs=1): err= 0: pid=1637441: Thu Apr 25 03:30:20 2024 00:29:47.999 read: IOPS=460, BW=1844KiB/s (1888kB/s)(18.0MiB/10002msec) 00:29:47.999 slat (usec): min=7, max=148, avg=34.94, stdev=23.58 00:29:47.999 clat (usec): min=6661, max=56655, avg=34408.84, stdev=4679.51 00:29:47.999 lat (usec): min=6669, max=56665, avg=34443.78, stdev=4676.20 00:29:47.999 clat percentiles (usec): 00:29:47.999 | 1.00th=[15533], 5.00th=[31589], 10.00th=[32637], 20.00th=[32900], 00:29:47.999 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33817], 00:29:47.999 | 70.00th=[33817], 80.00th=[34866], 90.00th=[42730], 95.00th=[43254], 00:29:47.999 | 99.00th=[44303], 99.50th=[47973], 99.90th=[56361], 99.95th=[56886], 00:29:47.999 | 99.99th=[56886] 00:29:47.999 bw ( KiB/s): min= 1408, max= 2144, per=4.20%, avg=1840.00, stdev=179.84, samples=19 00:29:47.999 iops : min= 352, max= 536, avg=460.00, stdev=44.96, samples=19 00:29:47.999 lat (msec) : 10=0.80%, 20=0.78%, 50=98.20%, 100=0.22% 00:29:47.999 cpu : usr=98.30%, sys=1.27%, ctx=17, majf=0, minf=36 00:29:47.999 IO depths : 1=5.4%, 2=11.1%, 4=23.5%, 8=52.8%, 16=7.2%, 32=0.0%, >=64=0.0% 00:29:47.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:29:47.999 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:47.999 issued rwts: total=4610,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:47.999 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:47.999 filename1: (groupid=0, jobs=1): err= 0: pid=1637442: Thu Apr 25 03:30:20 2024 00:29:47.999 read: IOPS=454, BW=1818KiB/s (1862kB/s)(17.8MiB/10005msec) 00:29:47.999 slat (usec): min=7, max=277, avg=34.04, stdev=22.28 00:29:47.999 clat (usec): min=4688, max=71289, avg=34951.92, stdev=6264.16 00:29:47.999 lat (usec): min=4711, max=71299, avg=34985.97, stdev=6261.86 00:29:47.999 clat percentiles (usec): 00:29:47.999 | 1.00th=[13435], 5.00th=[31065], 10.00th=[32375], 20.00th=[32900], 00:29:47.999 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:29:47.999 | 70.00th=[34341], 80.00th=[35390], 90.00th=[43254], 95.00th=[43779], 00:29:47.999 | 99.00th=[57410], 99.50th=[70779], 99.90th=[70779], 99.95th=[70779], 00:29:47.999 | 99.99th=[70779] 00:29:47.999 bw ( KiB/s): min= 1424, max= 1976, per=4.12%, avg=1806.32, stdev=169.68, samples=19 00:29:47.999 iops : min= 356, max= 494, avg=451.58, stdev=42.42, samples=19 00:29:47.999 lat (msec) : 10=0.51%, 20=1.41%, 50=95.82%, 100=2.26% 00:29:47.999 cpu : usr=93.48%, sys=3.14%, ctx=96, majf=0, minf=56 00:29:47.999 IO depths : 1=1.2%, 2=5.5%, 4=18.8%, 8=62.2%, 16=12.3%, 32=0.0%, >=64=0.0% 00:29:47.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:47.999 complete : 0=0.0%, 4=92.8%, 8=2.5%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:47.999 issued rwts: total=4548,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:47.999 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:47.999 filename1: (groupid=0, jobs=1): err= 0: pid=1637443: Thu Apr 25 03:30:20 2024 00:29:47.999 read: IOPS=455, BW=1823KiB/s (1867kB/s)(17.8MiB/10006msec) 00:29:47.999 slat (usec): min=7, max=125, avg=48.38, stdev=19.80 00:29:47.999 clat (usec): 
min=18744, max=60329, avg=34681.14, stdev=3838.97 00:29:47.999 lat (usec): min=18753, max=60379, avg=34729.53, stdev=3838.28 00:29:47.999 clat percentiles (usec): 00:29:47.999 | 1.00th=[30802], 5.00th=[32113], 10.00th=[32637], 20.00th=[32900], 00:29:47.999 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:29:47.999 | 70.00th=[33817], 80.00th=[34341], 90.00th=[42730], 95.00th=[43254], 00:29:47.999 | 99.00th=[44303], 99.50th=[46400], 99.90th=[60031], 99.95th=[60031], 00:29:47.999 | 99.99th=[60556] 00:29:47.999 bw ( KiB/s): min= 1408, max= 1920, per=4.13%, avg=1811.79, stdev=159.65, samples=19 00:29:47.999 iops : min= 352, max= 480, avg=453.05, stdev=39.95, samples=19 00:29:47.999 lat (msec) : 20=0.35%, 50=99.30%, 100=0.35% 00:29:47.999 cpu : usr=98.46%, sys=1.14%, ctx=11, majf=0, minf=42 00:29:47.999 IO depths : 1=6.1%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:29:47.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:47.999 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:47.999 issued rwts: total=4560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:47.999 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:47.999 filename1: (groupid=0, jobs=1): err= 0: pid=1637444: Thu Apr 25 03:30:20 2024 00:29:47.999 read: IOPS=455, BW=1823KiB/s (1866kB/s)(17.8MiB/10007msec) 00:29:47.999 slat (usec): min=6, max=101, avg=26.65, stdev=15.23 00:29:47.999 clat (usec): min=21243, max=49017, avg=34911.67, stdev=3723.67 00:29:47.999 lat (usec): min=21251, max=49033, avg=34938.32, stdev=3722.27 00:29:47.999 clat percentiles (usec): 00:29:47.999 | 1.00th=[27919], 5.00th=[32113], 10.00th=[32900], 20.00th=[33162], 00:29:47.999 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:29:47.999 | 70.00th=[33817], 80.00th=[35390], 90.00th=[42730], 95.00th=[43254], 00:29:47.999 | 99.00th=[45876], 99.50th=[46924], 99.90th=[47449], 99.95th=[48497], 00:29:47.999 | 
99.99th=[49021] 00:29:47.999 bw ( KiB/s): min= 1408, max= 1936, per=4.15%, avg=1818.95, stdev=157.32, samples=19 00:29:47.999 iops : min= 352, max= 484, avg=454.74, stdev=39.33, samples=19 00:29:47.999 lat (msec) : 50=100.00% 00:29:47.999 cpu : usr=98.10%, sys=1.46%, ctx=18, majf=0, minf=47 00:29:47.999 IO depths : 1=2.5%, 2=7.2%, 4=21.1%, 8=59.2%, 16=10.0%, 32=0.0%, >=64=0.0% 00:29:47.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:47.999 complete : 0=0.0%, 4=93.3%, 8=1.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:47.999 issued rwts: total=4560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:48.000 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:48.000 filename2: (groupid=0, jobs=1): err= 0: pid=1637445: Thu Apr 25 03:30:20 2024 00:29:48.000 read: IOPS=455, BW=1823KiB/s (1867kB/s)(17.8MiB/10004msec) 00:29:48.000 slat (usec): min=7, max=125, avg=40.78, stdev=18.26 00:29:48.000 clat (usec): min=19551, max=55696, avg=34762.11, stdev=3752.29 00:29:48.000 lat (usec): min=19624, max=55739, avg=34802.88, stdev=3748.38 00:29:48.000 clat percentiles (usec): 00:29:48.000 | 1.00th=[27919], 5.00th=[32113], 10.00th=[32637], 20.00th=[32900], 00:29:48.000 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:29:48.000 | 70.00th=[33817], 80.00th=[34866], 90.00th=[42730], 95.00th=[43254], 00:29:48.000 | 99.00th=[45876], 99.50th=[46924], 99.90th=[51643], 99.95th=[52691], 00:29:48.000 | 99.99th=[55837] 00:29:48.000 bw ( KiB/s): min= 1408, max= 1920, per=4.15%, avg=1818.95, stdev=173.73, samples=19 00:29:48.000 iops : min= 352, max= 480, avg=454.74, stdev=43.43, samples=19 00:29:48.000 lat (msec) : 20=0.04%, 50=99.74%, 100=0.22% 00:29:48.000 cpu : usr=98.34%, sys=1.25%, ctx=18, majf=0, minf=35 00:29:48.000 IO depths : 1=5.3%, 2=11.3%, 4=24.6%, 8=51.6%, 16=7.2%, 32=0.0%, >=64=0.0% 00:29:48.000 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.000 complete : 0=0.0%, 4=94.1%, 8=0.1%, 
16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.000 issued rwts: total=4560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:48.000 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:48.000 filename2: (groupid=0, jobs=1): err= 0: pid=1637446: Thu Apr 25 03:30:20 2024 00:29:48.000 read: IOPS=450, BW=1804KiB/s (1847kB/s)(17.6MiB/10005msec) 00:29:48.000 slat (usec): min=7, max=107, avg=27.89, stdev=19.48 00:29:48.000 clat (usec): min=4528, max=82017, avg=35277.21, stdev=7582.29 00:29:48.000 lat (usec): min=4537, max=82079, avg=35305.10, stdev=7581.45 00:29:48.000 clat percentiles (usec): 00:29:48.000 | 1.00th=[10683], 5.00th=[28967], 10.00th=[32113], 20.00th=[32637], 00:29:48.000 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:29:48.000 | 70.00th=[33817], 80.00th=[38011], 90.00th=[43254], 95.00th=[49021], 00:29:48.000 | 99.00th=[60556], 99.50th=[69731], 99.90th=[81265], 99.95th=[82314], 00:29:48.000 | 99.99th=[82314] 00:29:48.000 bw ( KiB/s): min= 1504, max= 1976, per=4.08%, avg=1790.89, stdev=140.04, samples=19 00:29:48.000 iops : min= 376, max= 494, avg=447.68, stdev=35.03, samples=19 00:29:48.000 lat (msec) : 10=0.91%, 20=1.95%, 50=92.35%, 100=4.79% 00:29:48.000 cpu : usr=98.21%, sys=1.38%, ctx=17, majf=0, minf=43 00:29:48.000 IO depths : 1=1.0%, 2=4.7%, 4=17.5%, 8=64.5%, 16=12.3%, 32=0.0%, >=64=0.0% 00:29:48.000 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.000 complete : 0=0.0%, 4=92.6%, 8=2.6%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.000 issued rwts: total=4512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:48.000 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:48.000 filename2: (groupid=0, jobs=1): err= 0: pid=1637447: Thu Apr 25 03:30:20 2024 00:29:48.000 read: IOPS=460, BW=1842KiB/s (1886kB/s)(18.0MiB/10008msec) 00:29:48.000 slat (usec): min=4, max=129, avg=21.60, stdev=11.60 00:29:48.000 clat (usec): min=4153, max=48670, avg=34567.22, stdev=4685.74 00:29:48.000 lat (usec): 
min=4162, max=48683, avg=34588.82, stdev=4685.67 00:29:48.000 clat percentiles (usec): 00:29:48.000 | 1.00th=[10552], 5.00th=[32113], 10.00th=[32637], 20.00th=[33162], 00:29:48.000 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:29:48.000 | 70.00th=[33817], 80.00th=[35390], 90.00th=[42730], 95.00th=[43254], 00:29:48.000 | 99.00th=[44303], 99.50th=[44303], 99.90th=[46924], 99.95th=[46924], 00:29:48.000 | 99.99th=[48497] 00:29:48.000 bw ( KiB/s): min= 1408, max= 2304, per=4.19%, avg=1836.80, stdev=185.86, samples=20 00:29:48.000 iops : min= 352, max= 576, avg=459.20, stdev=46.46, samples=20 00:29:48.000 lat (msec) : 10=1.00%, 20=0.39%, 50=98.61% 00:29:48.000 cpu : usr=94.85%, sys=2.88%, ctx=356, majf=0, minf=55 00:29:48.000 IO depths : 1=5.1%, 2=10.9%, 4=24.3%, 8=52.2%, 16=7.4%, 32=0.0%, >=64=0.0% 00:29:48.000 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.000 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.000 issued rwts: total=4608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:48.000 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:48.000 filename2: (groupid=0, jobs=1): err= 0: pid=1637449: Thu Apr 25 03:30:20 2024 00:29:48.000 read: IOPS=458, BW=1835KiB/s (1879kB/s)(17.9MiB/10006msec) 00:29:48.000 slat (usec): min=8, max=498, avg=30.91, stdev=25.32 00:29:48.000 clat (usec): min=9075, max=59024, avg=34620.84, stdev=4298.11 00:29:48.000 lat (usec): min=9098, max=59049, avg=34651.75, stdev=4297.51 00:29:48.000 clat percentiles (usec): 00:29:48.000 | 1.00th=[16057], 5.00th=[31851], 10.00th=[32637], 20.00th=[32900], 00:29:48.000 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:29:48.000 | 70.00th=[33817], 80.00th=[35390], 90.00th=[42730], 95.00th=[43254], 00:29:48.000 | 99.00th=[44303], 99.50th=[47973], 99.90th=[52167], 99.95th=[52691], 00:29:48.000 | 99.99th=[58983] 00:29:48.000 bw ( KiB/s): min= 1408, max= 2048, per=4.18%, avg=1831.58, 
stdev=175.30, samples=19 00:29:48.000 iops : min= 352, max= 512, avg=457.89, stdev=43.83, samples=19 00:29:48.000 lat (msec) : 10=0.35%, 20=0.65%, 50=98.74%, 100=0.26% 00:29:48.000 cpu : usr=88.80%, sys=4.93%, ctx=218, majf=0, minf=48 00:29:48.000 IO depths : 1=3.0%, 2=8.8%, 4=23.9%, 8=54.7%, 16=9.5%, 32=0.0%, >=64=0.0% 00:29:48.000 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.000 complete : 0=0.0%, 4=94.1%, 8=0.2%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.000 issued rwts: total=4590,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:48.000 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:48.000 filename2: (groupid=0, jobs=1): err= 0: pid=1637450: Thu Apr 25 03:30:20 2024 00:29:48.000 read: IOPS=454, BW=1817KiB/s (1861kB/s)(17.8MiB/10002msec) 00:29:48.000 slat (nsec): min=3790, max=84032, avg=38322.39, stdev=10069.46 00:29:48.000 clat (usec): min=19387, max=71902, avg=34876.11, stdev=4082.97 00:29:48.000 lat (usec): min=19409, max=71917, avg=34914.43, stdev=4081.92 00:29:48.000 clat percentiles (usec): 00:29:48.000 | 1.00th=[31851], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:29:48.000 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:29:48.000 | 70.00th=[33817], 80.00th=[34341], 90.00th=[42730], 95.00th=[43254], 00:29:48.000 | 99.00th=[44303], 99.50th=[46400], 99.90th=[71828], 99.95th=[71828], 00:29:48.000 | 99.99th=[71828] 00:29:48.000 bw ( KiB/s): min= 1408, max= 1920, per=4.13%, avg=1812.21, stdev=161.14, samples=19 00:29:48.000 iops : min= 352, max= 480, avg=453.05, stdev=40.28, samples=19 00:29:48.000 lat (msec) : 20=0.04%, 50=99.60%, 100=0.35% 00:29:48.000 cpu : usr=97.48%, sys=1.79%, ctx=68, majf=0, minf=36 00:29:48.000 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:29:48.000 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.000 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.000 
issued rwts: total=4544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:48.000 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:48.000 filename2: (groupid=0, jobs=1): err= 0: pid=1637451: Thu Apr 25 03:30:20 2024 00:29:48.000 read: IOPS=455, BW=1823KiB/s (1867kB/s)(17.8MiB/10004msec) 00:29:48.000 slat (usec): min=7, max=332, avg=28.93, stdev=14.13 00:29:48.000 clat (usec): min=14336, max=60928, avg=34868.40, stdev=4159.00 00:29:48.000 lat (usec): min=14345, max=60943, avg=34897.33, stdev=4160.93 00:29:48.000 clat percentiles (usec): 00:29:48.000 | 1.00th=[26608], 5.00th=[31851], 10.00th=[32637], 20.00th=[32900], 00:29:48.000 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:29:48.000 | 70.00th=[33817], 80.00th=[35914], 90.00th=[42730], 95.00th=[43254], 00:29:48.000 | 99.00th=[46924], 99.50th=[47449], 99.90th=[60556], 99.95th=[61080], 00:29:48.000 | 99.99th=[61080] 00:29:48.000 bw ( KiB/s): min= 1408, max= 2000, per=4.15%, avg=1818.95, stdev=175.76, samples=19 00:29:48.000 iops : min= 352, max= 500, avg=454.74, stdev=43.94, samples=19 00:29:48.000 lat (msec) : 20=0.44%, 50=99.30%, 100=0.26% 00:29:48.000 cpu : usr=92.72%, sys=3.63%, ctx=334, majf=0, minf=32 00:29:48.000 IO depths : 1=3.7%, 2=9.2%, 4=23.4%, 8=54.9%, 16=8.9%, 32=0.0%, >=64=0.0% 00:29:48.000 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.000 complete : 0=0.0%, 4=94.0%, 8=0.3%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.000 issued rwts: total=4560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:48.000 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:48.000 filename2: (groupid=0, jobs=1): err= 0: pid=1637452: Thu Apr 25 03:30:20 2024 00:29:48.000 read: IOPS=459, BW=1836KiB/s (1880kB/s)(18.0MiB/10011msec) 00:29:48.000 slat (nsec): min=7887, max=99022, avg=33157.42, stdev=13646.73 00:29:48.000 clat (usec): min=18001, max=55999, avg=34581.21, stdev=4426.79 00:29:48.000 lat (usec): min=18034, max=56014, avg=34614.36, stdev=4428.53 
00:29:48.000 clat percentiles (usec): 00:29:48.000 | 1.00th=[19530], 5.00th=[31327], 10.00th=[32637], 20.00th=[32900], 00:29:48.000 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:29:48.000 | 70.00th=[33817], 80.00th=[35390], 90.00th=[42730], 95.00th=[43254], 00:29:48.000 | 99.00th=[46400], 99.50th=[48497], 99.90th=[55837], 99.95th=[55837], 00:29:48.000 | 99.99th=[55837] 00:29:48.000 bw ( KiB/s): min= 1408, max= 2000, per=4.18%, avg=1832.80, stdev=177.45, samples=20 00:29:48.000 iops : min= 352, max= 500, avg=458.20, stdev=44.36, samples=20 00:29:48.000 lat (msec) : 20=1.31%, 50=98.43%, 100=0.26% 00:29:48.000 cpu : usr=97.03%, sys=1.81%, ctx=91, majf=0, minf=52 00:29:48.000 IO depths : 1=4.0%, 2=9.5%, 4=22.7%, 8=55.3%, 16=8.6%, 32=0.0%, >=64=0.0% 00:29:48.000 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.000 complete : 0=0.0%, 4=93.6%, 8=0.7%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.000 issued rwts: total=4596,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:48.000 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:48.000 filename2: (groupid=0, jobs=1): err= 0: pid=1637453: Thu Apr 25 03:30:20 2024 00:29:48.000 read: IOPS=455, BW=1821KiB/s (1865kB/s)(17.8MiB/10016msec) 00:29:48.000 slat (usec): min=4, max=166, avg=38.14, stdev=12.37 00:29:48.000 clat (usec): min=7805, max=81668, avg=34796.74, stdev=4429.11 00:29:48.000 lat (usec): min=7827, max=81681, avg=34834.89, stdev=4428.71 00:29:48.000 clat percentiles (usec): 00:29:48.000 | 1.00th=[30016], 5.00th=[32375], 10.00th=[32900], 20.00th=[32900], 00:29:48.000 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:29:48.001 | 70.00th=[33817], 80.00th=[34341], 90.00th=[42730], 95.00th=[43254], 00:29:48.001 | 99.00th=[44303], 99.50th=[46924], 99.90th=[69731], 99.95th=[81265], 00:29:48.001 | 99.99th=[81265] 00:29:48.001 bw ( KiB/s): min= 1408, max= 1920, per=4.15%, avg=1817.60, stdev=158.68, samples=20 00:29:48.001 iops : min= 
352, max= 480, avg=454.40, stdev=39.67, samples=20 00:29:48.001 lat (msec) : 10=0.13%, 20=0.35%, 50=99.04%, 100=0.48% 00:29:48.001 cpu : usr=91.02%, sys=4.48%, ctx=223, majf=0, minf=33 00:29:48.001 IO depths : 1=5.4%, 2=11.6%, 4=24.9%, 8=51.0%, 16=7.1%, 32=0.0%, >=64=0.0% 00:29:48.001 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.001 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.001 issued rwts: total=4560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:48.001 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:48.001 00:29:48.001 Run status group 0 (all jobs): 00:29:48.001 READ: bw=42.8MiB/s (44.9MB/s), 1804KiB/s-1906KiB/s (1847kB/s-1951kB/s), io=429MiB (450MB), run=10002-10016msec 00:29:48.001 03:30:21 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:29:48.001 03:30:21 -- target/dif.sh@43 -- # local sub 00:29:48.001 03:30:21 -- target/dif.sh@45 -- # for sub in "$@" 00:29:48.001 03:30:21 -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:48.001 03:30:21 -- target/dif.sh@36 -- # local sub_id=0 00:29:48.001 03:30:21 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:48.001 03:30:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:48.001 03:30:21 -- common/autotest_common.sh@10 -- # set +x 00:29:48.001 03:30:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:48.001 03:30:21 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:48.001 03:30:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:48.001 03:30:21 -- common/autotest_common.sh@10 -- # set +x 00:29:48.001 03:30:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:48.001 03:30:21 -- target/dif.sh@45 -- # for sub in "$@" 00:29:48.001 03:30:21 -- target/dif.sh@46 -- # destroy_subsystem 1 00:29:48.001 03:30:21 -- target/dif.sh@36 -- # local sub_id=1 00:29:48.001 03:30:21 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:29:48.001 03:30:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:48.001 03:30:21 -- common/autotest_common.sh@10 -- # set +x 00:29:48.001 03:30:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:48.001 03:30:21 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:29:48.001 03:30:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:48.001 03:30:21 -- common/autotest_common.sh@10 -- # set +x 00:29:48.001 03:30:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:48.001 03:30:21 -- target/dif.sh@45 -- # for sub in "$@" 00:29:48.001 03:30:21 -- target/dif.sh@46 -- # destroy_subsystem 2 00:29:48.001 03:30:21 -- target/dif.sh@36 -- # local sub_id=2 00:29:48.001 03:30:21 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:48.001 03:30:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:48.001 03:30:21 -- common/autotest_common.sh@10 -- # set +x 00:29:48.001 03:30:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:48.001 03:30:21 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:29:48.001 03:30:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:48.001 03:30:21 -- common/autotest_common.sh@10 -- # set +x 00:29:48.001 03:30:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:48.001 03:30:21 -- target/dif.sh@115 -- # NULL_DIF=1 00:29:48.001 03:30:21 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:29:48.001 03:30:21 -- target/dif.sh@115 -- # numjobs=2 00:29:48.001 03:30:21 -- target/dif.sh@115 -- # iodepth=8 00:29:48.001 03:30:21 -- target/dif.sh@115 -- # runtime=5 00:29:48.001 03:30:21 -- target/dif.sh@115 -- # files=1 00:29:48.001 03:30:21 -- target/dif.sh@117 -- # create_subsystems 0 1 00:29:48.001 03:30:21 -- target/dif.sh@28 -- # local sub 00:29:48.001 03:30:21 -- target/dif.sh@30 -- # for sub in "$@" 00:29:48.001 03:30:21 -- target/dif.sh@31 -- # create_subsystem 0 00:29:48.001 03:30:21 -- target/dif.sh@18 -- # 
local sub_id=0 00:29:48.001 03:30:21 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:29:48.001 03:30:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:48.001 03:30:21 -- common/autotest_common.sh@10 -- # set +x 00:29:48.001 bdev_null0 00:29:48.001 03:30:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:48.001 03:30:21 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:48.001 03:30:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:48.001 03:30:21 -- common/autotest_common.sh@10 -- # set +x 00:29:48.001 03:30:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:48.001 03:30:21 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:48.001 03:30:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:48.001 03:30:21 -- common/autotest_common.sh@10 -- # set +x 00:29:48.001 03:30:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:48.001 03:30:21 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:48.001 03:30:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:48.001 03:30:21 -- common/autotest_common.sh@10 -- # set +x 00:29:48.001 [2024-04-25 03:30:21.255355] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:48.001 03:30:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:48.001 03:30:21 -- target/dif.sh@30 -- # for sub in "$@" 00:29:48.001 03:30:21 -- target/dif.sh@31 -- # create_subsystem 1 00:29:48.001 03:30:21 -- target/dif.sh@18 -- # local sub_id=1 00:29:48.001 03:30:21 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:29:48.001 03:30:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:48.001 03:30:21 -- common/autotest_common.sh@10 -- # set +x 
00:29:48.001 bdev_null1 00:29:48.001 03:30:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:48.001 03:30:21 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:29:48.001 03:30:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:48.001 03:30:21 -- common/autotest_common.sh@10 -- # set +x 00:29:48.001 03:30:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:48.001 03:30:21 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:29:48.001 03:30:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:48.001 03:30:21 -- common/autotest_common.sh@10 -- # set +x 00:29:48.001 03:30:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:48.001 03:30:21 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:48.001 03:30:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:48.001 03:30:21 -- common/autotest_common.sh@10 -- # set +x 00:29:48.001 03:30:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:48.001 03:30:21 -- target/dif.sh@118 -- # fio /dev/fd/62 00:29:48.001 03:30:21 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:29:48.001 03:30:21 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:29:48.001 03:30:21 -- nvmf/common.sh@521 -- # config=() 00:29:48.001 03:30:21 -- nvmf/common.sh@521 -- # local subsystem config 00:29:48.001 03:30:21 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:29:48.001 03:30:21 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:48.001 03:30:21 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:29:48.001 { 00:29:48.001 "params": { 00:29:48.001 "name": "Nvme$subsystem", 00:29:48.001 "trtype": "$TEST_TRANSPORT", 00:29:48.001 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:48.001 "adrfam": "ipv4", 00:29:48.001 "trsvcid": "$NVMF_PORT", 00:29:48.001 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:48.001 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:48.001 "hdgst": ${hdgst:-false}, 00:29:48.001 "ddgst": ${ddgst:-false} 00:29:48.001 }, 00:29:48.001 "method": "bdev_nvme_attach_controller" 00:29:48.001 } 00:29:48.001 EOF 00:29:48.001 )") 00:29:48.001 03:30:21 -- target/dif.sh@82 -- # gen_fio_conf 00:29:48.001 03:30:21 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:48.001 03:30:21 -- target/dif.sh@54 -- # local file 00:29:48.001 03:30:21 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:29:48.001 03:30:21 -- target/dif.sh@56 -- # cat 00:29:48.001 03:30:21 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:48.001 03:30:21 -- common/autotest_common.sh@1325 -- # local sanitizers 00:29:48.001 03:30:21 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:48.001 03:30:21 -- common/autotest_common.sh@1327 -- # shift 00:29:48.001 03:30:21 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:29:48.001 03:30:21 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:29:48.001 03:30:21 -- nvmf/common.sh@543 -- # cat 00:29:48.001 03:30:21 -- target/dif.sh@72 -- # (( file = 1 )) 00:29:48.001 03:30:21 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:48.001 03:30:21 -- target/dif.sh@72 -- # (( file <= files )) 00:29:48.001 03:30:21 -- target/dif.sh@73 -- # cat 00:29:48.001 03:30:21 -- common/autotest_common.sh@1331 -- # grep libasan 00:29:48.001 03:30:21 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:29:48.001 03:30:21 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:29:48.001 03:30:21 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 
00:29:48.001 { 00:29:48.001 "params": { 00:29:48.001 "name": "Nvme$subsystem", 00:29:48.001 "trtype": "$TEST_TRANSPORT", 00:29:48.001 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:48.001 "adrfam": "ipv4", 00:29:48.001 "trsvcid": "$NVMF_PORT", 00:29:48.001 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:48.001 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:48.001 "hdgst": ${hdgst:-false}, 00:29:48.001 "ddgst": ${ddgst:-false} 00:29:48.001 }, 00:29:48.001 "method": "bdev_nvme_attach_controller" 00:29:48.001 } 00:29:48.001 EOF 00:29:48.001 )") 00:29:48.001 03:30:21 -- nvmf/common.sh@543 -- # cat 00:29:48.001 03:30:21 -- target/dif.sh@72 -- # (( file++ )) 00:29:48.001 03:30:21 -- target/dif.sh@72 -- # (( file <= files )) 00:29:48.001 03:30:21 -- nvmf/common.sh@545 -- # jq . 00:29:48.001 03:30:21 -- nvmf/common.sh@546 -- # IFS=, 00:29:48.001 03:30:21 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:29:48.001 "params": { 00:29:48.001 "name": "Nvme0", 00:29:48.001 "trtype": "tcp", 00:29:48.001 "traddr": "10.0.0.2", 00:29:48.001 "adrfam": "ipv4", 00:29:48.002 "trsvcid": "4420", 00:29:48.002 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:48.002 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:48.002 "hdgst": false, 00:29:48.002 "ddgst": false 00:29:48.002 }, 00:29:48.002 "method": "bdev_nvme_attach_controller" 00:29:48.002 },{ 00:29:48.002 "params": { 00:29:48.002 "name": "Nvme1", 00:29:48.002 "trtype": "tcp", 00:29:48.002 "traddr": "10.0.0.2", 00:29:48.002 "adrfam": "ipv4", 00:29:48.002 "trsvcid": "4420", 00:29:48.002 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:48.002 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:48.002 "hdgst": false, 00:29:48.002 "ddgst": false 00:29:48.002 }, 00:29:48.002 "method": "bdev_nvme_attach_controller" 00:29:48.002 }' 00:29:48.002 03:30:21 -- common/autotest_common.sh@1331 -- # asan_lib= 00:29:48.002 03:30:21 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:29:48.002 03:30:21 -- common/autotest_common.sh@1330 -- # for sanitizer in 
"${sanitizers[@]}" 00:29:48.002 03:30:21 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:48.002 03:30:21 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:29:48.002 03:30:21 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:29:48.002 03:30:21 -- common/autotest_common.sh@1331 -- # asan_lib= 00:29:48.002 03:30:21 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:29:48.002 03:30:21 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:48.002 03:30:21 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:48.002 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:29:48.002 ... 00:29:48.002 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:29:48.002 ... 
00:29:48.002 fio-3.35 00:29:48.002 Starting 4 threads 00:29:48.002 EAL: No free 2048 kB hugepages reported on node 1 00:29:53.268 00:29:53.268 filename0: (groupid=0, jobs=1): err= 0: pid=1638747: Thu Apr 25 03:30:27 2024 00:29:53.268 read: IOPS=1964, BW=15.3MiB/s (16.1MB/s)(76.8MiB/5002msec) 00:29:53.268 slat (nsec): min=4183, max=51586, avg=12642.96, stdev=6156.40 00:29:53.268 clat (usec): min=1825, max=47308, avg=4033.34, stdev=1401.05 00:29:53.268 lat (usec): min=1839, max=47321, avg=4045.98, stdev=1400.58 00:29:53.268 clat percentiles (usec): 00:29:53.268 | 1.00th=[ 3163], 5.00th=[ 3359], 10.00th=[ 3458], 20.00th=[ 3556], 00:29:53.268 | 30.00th=[ 3654], 40.00th=[ 3720], 50.00th=[ 3785], 60.00th=[ 3884], 00:29:53.268 | 70.00th=[ 4015], 80.00th=[ 4146], 90.00th=[ 5342], 95.00th=[ 5538], 00:29:53.268 | 99.00th=[ 6063], 99.50th=[ 6194], 99.90th=[ 6849], 99.95th=[47449], 00:29:53.268 | 99.99th=[47449] 00:29:53.268 bw ( KiB/s): min=13856, max=16480, per=25.61%, avg=15649.56, stdev=827.72, samples=9 00:29:53.268 iops : min= 1732, max= 2060, avg=1956.11, stdev=103.52, samples=9 00:29:53.268 lat (msec) : 2=0.01%, 4=69.24%, 10=30.67%, 50=0.08% 00:29:53.268 cpu : usr=94.40%, sys=4.50%, ctx=92, majf=0, minf=40 00:29:53.268 IO depths : 1=0.1%, 2=0.7%, 4=71.4%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:53.268 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:53.268 complete : 0=0.0%, 4=93.1%, 8=6.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:53.268 issued rwts: total=9828,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:53.268 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:53.268 filename0: (groupid=0, jobs=1): err= 0: pid=1638748: Thu Apr 25 03:30:27 2024 00:29:53.268 read: IOPS=1844, BW=14.4MiB/s (15.1MB/s)(72.1MiB/5004msec) 00:29:53.268 slat (nsec): min=3914, max=52605, avg=12794.54, stdev=5448.73 00:29:53.268 clat (usec): min=1999, max=6763, avg=4301.78, stdev=484.74 00:29:53.268 lat (usec): min=2007, max=6790, avg=4314.58, 
stdev=485.06 00:29:53.268 clat percentiles (usec): 00:29:53.268 | 1.00th=[ 2933], 5.00th=[ 3523], 10.00th=[ 3720], 20.00th=[ 3916], 00:29:53.268 | 30.00th=[ 4080], 40.00th=[ 4228], 50.00th=[ 4359], 60.00th=[ 4490], 00:29:53.268 | 70.00th=[ 4555], 80.00th=[ 4621], 90.00th=[ 4752], 95.00th=[ 4883], 00:29:53.268 | 99.00th=[ 5800], 99.50th=[ 5866], 99.90th=[ 6521], 99.95th=[ 6652], 00:29:53.268 | 99.99th=[ 6783] 00:29:53.268 bw ( KiB/s): min=13952, max=15920, per=24.31%, avg=14855.33, stdev=851.34, samples=9 00:29:53.268 iops : min= 1744, max= 1990, avg=1856.89, stdev=106.41, samples=9 00:29:53.268 lat (msec) : 2=0.01%, 4=25.03%, 10=74.96% 00:29:53.268 cpu : usr=95.78%, sys=3.72%, ctx=9, majf=0, minf=19 00:29:53.268 IO depths : 1=0.1%, 2=0.8%, 4=66.6%, 8=32.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:53.268 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:53.268 complete : 0=0.0%, 4=96.3%, 8=3.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:53.268 issued rwts: total=9232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:53.268 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:53.268 filename1: (groupid=0, jobs=1): err= 0: pid=1638749: Thu Apr 25 03:30:27 2024 00:29:53.268 read: IOPS=1989, BW=15.5MiB/s (16.3MB/s)(77.7MiB/5002msec) 00:29:53.268 slat (nsec): min=3822, max=63175, avg=15319.79, stdev=7291.94 00:29:53.268 clat (usec): min=1392, max=6776, avg=3974.57, stdev=666.61 00:29:53.268 lat (usec): min=1401, max=6798, avg=3989.89, stdev=665.97 00:29:53.268 clat percentiles (usec): 00:29:53.268 | 1.00th=[ 2999], 5.00th=[ 3359], 10.00th=[ 3425], 20.00th=[ 3556], 00:29:53.268 | 30.00th=[ 3621], 40.00th=[ 3720], 50.00th=[ 3785], 60.00th=[ 3884], 00:29:53.268 | 70.00th=[ 3982], 80.00th=[ 4113], 90.00th=[ 5276], 95.00th=[ 5538], 00:29:53.268 | 99.00th=[ 6063], 99.50th=[ 6194], 99.90th=[ 6587], 99.95th=[ 6587], 00:29:53.268 | 99.99th=[ 6783] 00:29:53.268 bw ( KiB/s): min=14720, max=16448, per=25.99%, avg=15880.78, stdev=566.45, samples=9 
00:29:53.268 iops : min= 1840, max= 2056, avg=1985.00, stdev=70.88, samples=9 00:29:53.268 lat (msec) : 2=0.05%, 4=70.54%, 10=29.41% 00:29:53.268 cpu : usr=89.46%, sys=6.70%, ctx=446, majf=0, minf=65 00:29:53.269 IO depths : 1=0.1%, 2=0.7%, 4=72.2%, 8=27.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:53.269 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:53.269 complete : 0=0.0%, 4=92.3%, 8=7.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:53.269 issued rwts: total=9951,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:53.269 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:53.269 filename1: (groupid=0, jobs=1): err= 0: pid=1638750: Thu Apr 25 03:30:27 2024 00:29:53.269 read: IOPS=1841, BW=14.4MiB/s (15.1MB/s)(71.9MiB/5001msec) 00:29:53.269 slat (nsec): min=6728, max=62740, avg=14711.91, stdev=6832.41 00:29:53.269 clat (usec): min=1691, max=7004, avg=4306.60, stdev=446.71 00:29:53.269 lat (usec): min=1699, max=7029, avg=4321.31, stdev=447.57 00:29:53.269 clat percentiles (usec): 00:29:53.269 | 1.00th=[ 2966], 5.00th=[ 3589], 10.00th=[ 3752], 20.00th=[ 3949], 00:29:53.269 | 30.00th=[ 4080], 40.00th=[ 4293], 50.00th=[ 4424], 60.00th=[ 4555], 00:29:53.269 | 70.00th=[ 4555], 80.00th=[ 4621], 90.00th=[ 4686], 95.00th=[ 4817], 00:29:53.269 | 99.00th=[ 5342], 99.50th=[ 5735], 99.90th=[ 6325], 99.95th=[ 6915], 00:29:53.269 | 99.99th=[ 6980] 00:29:53.269 bw ( KiB/s): min=13952, max=16032, per=24.22%, avg=14801.78, stdev=804.96, samples=9 00:29:53.269 iops : min= 1744, max= 2004, avg=1850.22, stdev=100.62, samples=9 00:29:53.269 lat (msec) : 2=0.12%, 4=23.16%, 10=76.72% 00:29:53.269 cpu : usr=95.58%, sys=3.92%, ctx=9, majf=0, minf=40 00:29:53.269 IO depths : 1=0.3%, 2=1.2%, 4=66.2%, 8=32.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:53.269 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:53.269 complete : 0=0.0%, 4=96.3%, 8=3.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:53.269 issued rwts: total=9208,0,0,0 short=0,0,0,0 
dropped=0,0,0,0
00:29:53.269 latency : target=0, window=0, percentile=100.00%, depth=8
00:29:53.269
00:29:53.269 Run status group 0 (all jobs):
00:29:53.269 READ: bw=59.7MiB/s (62.6MB/s), 14.4MiB/s-15.5MiB/s (15.1MB/s-16.3MB/s), io=299MiB (313MB), run=5001-5004msec
00:29:53.269 03:30:27 -- target/dif.sh@119 -- # destroy_subsystems 0 1
00:29:53.269 03:30:27 -- target/dif.sh@43 -- # local sub
00:29:53.269 03:30:27 -- target/dif.sh@45 -- # for sub in "$@"
00:29:53.269 03:30:27 -- target/dif.sh@46 -- # destroy_subsystem 0
00:29:53.269 03:30:27 -- target/dif.sh@36 -- # local sub_id=0
00:29:53.269 03:30:27 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:29:53.269 03:30:27 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:53.269 03:30:27 -- common/autotest_common.sh@10 -- # set +x
00:29:53.269 03:30:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:53.269 03:30:27 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:29:53.269 03:30:27 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:53.269 03:30:27 -- common/autotest_common.sh@10 -- # set +x
00:29:53.269 03:30:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:53.269 03:30:27 -- target/dif.sh@45 -- # for sub in "$@"
00:29:53.269 03:30:27 -- target/dif.sh@46 -- # destroy_subsystem 1
00:29:53.269 03:30:27 -- target/dif.sh@36 -- # local sub_id=1
00:29:53.269 03:30:27 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:29:53.269 03:30:27 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:53.269 03:30:27 -- common/autotest_common.sh@10 -- # set +x
00:29:53.269 03:30:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:53.269 03:30:27 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1
00:29:53.269 03:30:27 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:53.269 03:30:27 -- common/autotest_common.sh@10 -- # set +x
00:29:53.269 03:30:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:53.269
00:29:53.269 real 0m24.259s
00:29:53.269 user 4m27.366s
00:29:53.269 sys 0m8.210s
00:29:53.269 03:30:27 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:29:53.269 03:30:27 -- common/autotest_common.sh@10 -- # set +x
00:29:53.269 ************************************
00:29:53.269 END TEST fio_dif_rand_params
00:29:53.269 ************************************
00:29:53.269 03:30:27 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest
00:29:53.269 03:30:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:29:53.269 03:30:27 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:29:53.269 03:30:27 -- common/autotest_common.sh@10 -- # set +x
00:29:53.269 ************************************
00:29:53.269 START TEST fio_dif_digest
00:29:53.269 ************************************
00:29:53.269 03:30:27 -- common/autotest_common.sh@1111 -- # fio_dif_digest
00:29:53.269 03:30:27 -- target/dif.sh@123 -- # local NULL_DIF
00:29:53.269 03:30:27 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files
00:29:53.269 03:30:27 -- target/dif.sh@125 -- # local hdgst ddgst
00:29:53.269 03:30:27 -- target/dif.sh@127 -- # NULL_DIF=3
00:29:53.269 03:30:27 -- target/dif.sh@127 -- # bs=128k,128k,128k
00:29:53.269 03:30:27 -- target/dif.sh@127 -- # numjobs=3
00:29:53.269 03:30:27 -- target/dif.sh@127 -- # iodepth=3
00:29:53.269 03:30:27 -- target/dif.sh@127 -- # runtime=10
00:29:53.269 03:30:27 -- target/dif.sh@128 -- # hdgst=true
00:29:53.269 03:30:27 -- target/dif.sh@128 -- # ddgst=true
00:29:53.269 03:30:27 -- target/dif.sh@130 -- # create_subsystems 0
00:29:53.269 03:30:27 -- target/dif.sh@28 -- # local sub
00:29:53.269 03:30:27 -- target/dif.sh@30 -- # for sub in "$@"
00:29:53.269 03:30:27 -- target/dif.sh@31 -- # create_subsystem 0
00:29:53.269 03:30:27 -- target/dif.sh@18 -- # local sub_id=0
00:29:53.269 03:30:27 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
00:29:53.269 03:30:27 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:53.269 03:30:27 -- common/autotest_common.sh@10 -- # set +x
00:29:53.269 bdev_null0
00:29:53.269 03:30:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:53.269 03:30:27 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:29:53.269 03:30:27 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:53.269 03:30:27 -- common/autotest_common.sh@10 -- # set +x
00:29:53.269 03:30:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:53.269 03:30:27 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
00:29:53.269 03:30:27 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:53.269 03:30:27 -- common/autotest_common.sh@10 -- # set +x
00:29:53.269 03:30:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:53.269 03:30:27 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:29:53.269 03:30:27 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:53.269 03:30:27 -- common/autotest_common.sh@10 -- # set +x
00:29:53.269 [2024-04-25 03:30:27.673874] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:53.269 03:30:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:53.269 03:30:27 -- target/dif.sh@131 -- # fio /dev/fd/62
00:29:53.269 03:30:27 -- target/dif.sh@131 -- # create_json_sub_conf 0
00:29:53.269 03:30:27 -- target/dif.sh@51 -- # gen_nvmf_target_json 0
00:29:53.269 03:30:27 -- nvmf/common.sh@521 -- # config=()
00:29:53.269 03:30:27 -- nvmf/common.sh@521 -- # local subsystem config
00:29:53.269 03:30:27 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}"
00:29:53.269 03:30:27 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF
00:29:53.269 {
00:29:53.269 "params": {
00:29:53.269 "name": "Nvme$subsystem",
00:29:53.269 "trtype": "$TEST_TRANSPORT",
00:29:53.269 "traddr": "$NVMF_FIRST_TARGET_IP",
00:29:53.269 "adrfam": "ipv4",
00:29:53.269 "trsvcid": "$NVMF_PORT",
00:29:53.269 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:29:53.269 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:29:53.269 "hdgst": ${hdgst:-false},
00:29:53.269 "ddgst": ${ddgst:-false}
00:29:53.269 },
00:29:53.269 "method": "bdev_nvme_attach_controller"
00:29:53.269 }
00:29:53.269 EOF
00:29:53.269 )")
00:29:53.269 03:30:27 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:29:53.269 03:30:27 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:29:53.269 03:30:27 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio
00:29:53.269 03:30:27 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:29:53.269 03:30:27 -- target/dif.sh@82 -- # gen_fio_conf
00:29:53.269 03:30:27 -- common/autotest_common.sh@1325 -- # local sanitizers
00:29:53.269 03:30:27 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:29:53.269 03:30:27 -- target/dif.sh@54 -- # local file
00:29:53.269 03:30:27 -- common/autotest_common.sh@1327 -- # shift
00:29:53.269 03:30:27 -- common/autotest_common.sh@1329 -- # local asan_lib=
00:29:53.269 03:30:27 -- target/dif.sh@56 -- # cat
00:29:53.269 03:30:27 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}"
00:29:53.269 03:30:27 -- nvmf/common.sh@543 -- # cat
00:29:53.269 03:30:27 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:29:53.269 03:30:27 -- target/dif.sh@72 -- # (( file = 1 ))
00:29:53.269 03:30:27 -- common/autotest_common.sh@1331 -- # grep libasan
00:29:53.269 03:30:27 -- target/dif.sh@72 -- # (( file <= files ))
00:29:53.269 03:30:27 -- common/autotest_common.sh@1331 -- # awk '{print $3}'
00:29:53.269 03:30:27 -- nvmf/common.sh@545 -- # jq .
00:29:53.269 03:30:27 -- nvmf/common.sh@546 -- # IFS=,
00:29:53.269 03:30:27 -- nvmf/common.sh@547 -- # printf '%s\n' '{
00:29:53.269 "params": {
00:29:53.269 "name": "Nvme0",
00:29:53.269 "trtype": "tcp",
00:29:53.269 "traddr": "10.0.0.2",
00:29:53.269 "adrfam": "ipv4",
00:29:53.269 "trsvcid": "4420",
00:29:53.269 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:29:53.269 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:29:53.269 "hdgst": true,
00:29:53.269 "ddgst": true
00:29:53.269 },
00:29:53.269 "method": "bdev_nvme_attach_controller"
00:29:53.269 }'
00:29:53.269 03:30:27 -- common/autotest_common.sh@1331 -- # asan_lib=
00:29:53.269 03:30:27 -- common/autotest_common.sh@1332 -- # [[ -n '' ]]
00:29:53.269 03:30:27 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}"
00:29:53.269 03:30:27 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:29:53.269 03:30:27 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan
00:29:53.269 03:30:27 -- common/autotest_common.sh@1331 -- # awk '{print $3}'
00:29:53.269 03:30:27 -- common/autotest_common.sh@1331 -- # asan_lib=
00:29:53.269 03:30:27 -- common/autotest_common.sh@1332 -- # [[ -n '' ]]
00:29:53.269 03:30:27 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:29:53.269 03:30:27 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:29:53.527 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3
00:29:53.527 ...
00:29:53.527 fio-3.35
00:29:53.527 Starting 3 threads
00:29:53.527 EAL: No free 2048 kB hugepages reported on node 1
00:30:05.752
00:30:05.752 filename0: (groupid=0, jobs=1): err= 0: pid=1639628: Thu Apr 25 03:30:38 2024
00:30:05.752 read: IOPS=149, BW=18.7MiB/s (19.6MB/s)(188MiB/10047msec)
00:30:05.752 slat (nsec): min=7321, max=90613, avg=13060.28, stdev=3653.23
00:30:05.752 clat (usec): min=9567, max=99417, avg=20020.87, stdev=13380.83
00:30:05.752 lat (usec): min=9586, max=99429, avg=20033.93, stdev=13380.90
00:30:05.752 clat percentiles (usec):
00:30:05.752 | 1.00th=[10421], 5.00th=[12911], 10.00th=[13566], 20.00th=[14484],
00:30:05.752 | 30.00th=[14877], 40.00th=[15270], 50.00th=[15664], 60.00th=[16057],
00:30:05.752 | 70.00th=[16450], 80.00th=[17171], 90.00th=[54264], 95.00th=[56886],
00:30:05.752 | 99.00th=[58983], 99.50th=[60031], 99.90th=[95945], 99.95th=[99091],
00:30:05.752 | 99.99th=[99091]
00:30:05.752 bw ( KiB/s): min=12288, max=24832, per=28.80%, avg=19200.00, stdev=3731.12, samples=20
00:30:05.752 iops : min= 96, max= 194, avg=150.00, stdev=29.15, samples=20
00:30:05.752 lat (msec) : 10=0.40%, 20=88.42%, 50=0.07%, 100=11.12%
00:30:05.752 cpu : usr=91.84%, sys=7.69%, ctx=14, majf=0, minf=156
00:30:05.752 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:30:05.752 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:30:05.752 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:30:05.752 issued rwts: total=1502,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:30:05.752 latency : target=0, window=0, percentile=100.00%, depth=3
00:30:05.752 filename0: (groupid=0, jobs=1): err= 0: pid=1639629: Thu Apr 25 03:30:38 2024
00:30:05.752 read: IOPS=183, BW=22.9MiB/s (24.0MB/s)(230MiB/10048msec)
00:30:05.752 slat (nsec): min=6129, max=36659, avg=12995.63, stdev=3083.80
00:30:05.752 clat (usec): min=6523, max=59820, avg=16350.89, stdev=7787.74
00:30:05.752 lat (usec): min=6535, max=59834, avg=16363.88, stdev=7787.63
00:30:05.752 clat percentiles (usec):
00:30:05.752 | 1.00th=[ 7111], 5.00th=[10028], 10.00th=[11469], 20.00th=[12518],
00:30:05.752 | 30.00th=[13698], 40.00th=[15008], 50.00th=[15795], 60.00th=[16319],
00:30:05.752 | 70.00th=[16909], 80.00th=[17433], 90.00th=[18220], 95.00th=[19268],
00:30:05.752 | 99.00th=[56886], 99.50th=[57934], 99.90th=[58983], 99.95th=[60031],
00:30:05.752 | 99.99th=[60031]
00:30:05.752 bw ( KiB/s): min=17664, max=29440, per=35.27%, avg=23513.60, stdev=3094.68, samples=20
00:30:05.752 iops : min= 138, max= 230, avg=183.70, stdev=24.18, samples=20
00:30:05.752 lat (msec) : 10=4.79%, 20=91.35%, 50=0.54%, 100=3.32%
00:30:05.752 cpu : usr=90.59%, sys=8.54%, ctx=23, majf=0, minf=159
00:30:05.752 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:30:05.752 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:30:05.752 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:30:05.752 issued rwts: total=1839,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:30:05.752 latency : target=0, window=0, percentile=100.00%, depth=3
00:30:05.752 filename0: (groupid=0, jobs=1): err= 0: pid=1639630: Thu Apr 25 03:30:38 2024
00:30:05.752 read: IOPS=188, BW=23.6MiB/s (24.7MB/s)(237MiB/10050msec)
00:30:05.752 slat (nsec): min=6490, max=43484, avg=12700.05, stdev=3005.81
00:30:05.752 clat (usec): min=6861, max=96219, avg=15901.67, stdev=8520.16
00:30:05.752 lat (usec): min=6873, max=96231, avg=15914.37, stdev=8520.21
00:30:05.752 clat percentiles (usec):
00:30:05.752 | 1.00th=[ 7832], 5.00th=[10290], 10.00th=[11207], 20.00th=[12125],
00:30:05.752 | 30.00th=[13173], 40.00th=[14353], 50.00th=[15008], 60.00th=[15533],
00:30:05.752 | 70.00th=[16057], 80.00th=[16712], 90.00th=[17695], 95.00th=[18744],
00:30:05.752 | 99.00th=[57410], 99.50th=[58983], 99.90th=[95945], 99.95th=[95945],
00:30:05.752 | 99.99th=[95945]
00:30:05.752 bw ( KiB/s): min=19712, max=29440, per=36.30%, avg=24206.95, stdev=3116.46, samples=20
00:30:05.752 iops : min= 154, max= 230, avg=189.10, stdev=24.36, samples=20
00:30:05.752 lat (msec) : 10=3.64%, 20=92.56%, 50=0.37%, 100=3.43%
00:30:05.752 cpu : usr=90.06%, sys=9.05%, ctx=37, majf=0, minf=119
00:30:05.752 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:30:05.752 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:30:05.752 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:30:05.752 issued rwts: total=1894,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:30:05.752 latency : target=0, window=0, percentile=100.00%, depth=3
00:30:05.752
00:30:05.752 Run status group 0 (all jobs):
00:30:05.752 READ: bw=65.1MiB/s (68.3MB/s), 18.7MiB/s-23.6MiB/s (19.6MB/s-24.7MB/s), io=654MiB (686MB), run=10047-10050msec
00:30:05.752 03:30:38 -- target/dif.sh@132 -- # destroy_subsystems 0
00:30:05.752 03:30:38 -- target/dif.sh@43 -- # local sub
00:30:05.752 03:30:38 -- target/dif.sh@45 -- # for sub in "$@"
00:30:05.752 03:30:38 -- target/dif.sh@46 -- # destroy_subsystem 0
00:30:05.752 03:30:38 -- target/dif.sh@36 -- # local sub_id=0
00:30:05.752 03:30:38 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:30:05.752 03:30:38 -- common/autotest_common.sh@549 -- # xtrace_disable
00:30:05.752 03:30:38 -- common/autotest_common.sh@10 -- # set +x
00:30:05.752 03:30:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:30:05.752 03:30:38 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:30:05.752 03:30:38 -- common/autotest_common.sh@549 -- # xtrace_disable
00:30:05.752 03:30:38 -- common/autotest_common.sh@10 -- # set +x
00:30:05.752 03:30:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:30:05.752
00:30:05.752 real 0m11.214s
00:30:05.752 user 0m28.421s
00:30:05.752 sys 0m2.822s
00:30:05.752 03:30:38 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:30:05.752 03:30:38 -- common/autotest_common.sh@10 -- # set +x
00:30:05.752 ************************************
00:30:05.752 END TEST fio_dif_digest
00:30:05.752 ************************************
00:30:05.752 03:30:38 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT
00:30:05.752 03:30:38 -- target/dif.sh@147 -- # nvmftestfini
00:30:05.752 03:30:38 -- nvmf/common.sh@477 -- # nvmfcleanup
00:30:05.752 03:30:38 -- nvmf/common.sh@117 -- # sync
00:30:05.752 03:30:38 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:30:05.752 03:30:38 -- nvmf/common.sh@120 -- # set +e
00:30:05.752 03:30:38 -- nvmf/common.sh@121 -- # for i in {1..20}
00:30:05.752 03:30:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:30:05.752 rmmod nvme_tcp
00:30:05.752 rmmod nvme_fabrics
00:30:05.752 rmmod nvme_keyring
00:30:05.752 03:30:38 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:30:05.752 03:30:38 -- nvmf/common.sh@124 -- # set -e
00:30:05.752 03:30:38 -- nvmf/common.sh@125 -- # return 0
00:30:05.752 03:30:38 -- nvmf/common.sh@478 -- # '[' -n 1632810 ']'
00:30:05.752 03:30:38 -- nvmf/common.sh@479 -- # killprocess 1632810
00:30:05.752 03:30:38 -- common/autotest_common.sh@936 -- # '[' -z 1632810 ']'
00:30:05.752 03:30:38 -- common/autotest_common.sh@940 -- # kill -0 1632810
00:30:05.752 03:30:38 -- common/autotest_common.sh@941 -- # uname
00:30:05.752 03:30:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:30:05.752 03:30:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1632810
00:30:05.752 03:30:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:30:05.752 03:30:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:30:05.752 03:30:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1632810'
00:30:05.752 killing process with pid 1632810
00:30:05.752 03:30:38 -- common/autotest_common.sh@955 -- # kill 1632810
00:30:05.752 03:30:38 -- common/autotest_common.sh@960 -- # wait 1632810
00:30:05.753 03:30:39 -- nvmf/common.sh@481 -- # '[' iso == iso ']'
00:30:05.753 03:30:39 -- nvmf/common.sh@482 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:30:06.011 Waiting for block devices as requested
00:30:06.011 0000:88:00.0 (8086 0a54): vfio-pci -> nvme
00:30:06.011 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma
00:30:06.270 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma
00:30:06.270 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma
00:30:06.270 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma
00:30:06.529 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma
00:30:06.529 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma
00:30:06.529 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma
00:30:06.529 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma
00:30:06.529 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma
00:30:06.788 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma
00:30:06.788 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma
00:30:06.788 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma
00:30:07.047 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma
00:30:07.047 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma
00:30:07.047 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma
00:30:07.047 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma
00:30:07.307 03:30:41 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:30:07.307 03:30:41 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:30:07.307 03:30:41 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:30:07.307 03:30:41 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:30:07.307 03:30:41 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:30:07.307 03:30:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null'
00:30:07.307 03:30:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:30:09.208 03:30:43 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:30:09.208
00:30:09.208 real 1m7.098s
00:30:09.208 user 6m23.629s
00:30:09.208 sys 0m20.607s
00:30:09.208 03:30:43 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:30:09.208 03:30:43 -- common/autotest_common.sh@10 -- # set +x
00:30:09.208 ************************************
00:30:09.208 END TEST nvmf_dif
00:30:09.208 ************************************
00:30:09.208 03:30:43 -- spdk/autotest.sh@291 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh
00:30:09.208 03:30:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:30:09.208 03:30:43 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:30:09.208 03:30:43 -- common/autotest_common.sh@10 -- # set +x
00:30:09.465 ************************************
00:30:09.465 START TEST nvmf_abort_qd_sizes
00:30:09.465 ************************************
00:30:09.465 03:30:43 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh
00:30:09.465 * Looking for test storage...
00:30:09.465 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:30:09.465 03:30:43 -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:30:09.465 03:30:43 -- nvmf/common.sh@7 -- # uname -s
00:30:09.465 03:30:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:30:09.465 03:30:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:30:09.466 03:30:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:30:09.466 03:30:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:30:09.466 03:30:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:30:09.466 03:30:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:30:09.466 03:30:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:30:09.466 03:30:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:30:09.466 03:30:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:30:09.466 03:30:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:30:09.466 03:30:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:30:09.466 03:30:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:30:09.466 03:30:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:30:09.466 03:30:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:30:09.466 03:30:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:30:09.466 03:30:43 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:30:09.466 03:30:43 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:30:09.466 03:30:43 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:30:09.466 03:30:43 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:30:09.466 03:30:43 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:30:09.466 03:30:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:09.466 03:30:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:09.466 03:30:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:09.466 03:30:43 -- paths/export.sh@5 -- # export PATH
00:30:09.466 03:30:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:09.466 03:30:43 -- nvmf/common.sh@47 -- # : 0
00:30:09.466 03:30:43 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:30:09.466 03:30:43 -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:30:09.466 03:30:43 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:30:09.466 03:30:43 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:30:09.466 03:30:43 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:30:09.466 03:30:43 -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:30:09.466 03:30:43 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:30:09.466 03:30:43 -- nvmf/common.sh@51 -- # have_pci_nics=0
00:30:09.466 03:30:43 -- target/abort_qd_sizes.sh@70 -- # nvmftestinit
00:30:09.466 03:30:43 -- nvmf/common.sh@430 -- # '[' -z tcp ']'
00:30:09.466 03:30:43 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:30:09.466 03:30:43 -- nvmf/common.sh@437 -- # prepare_net_devs
00:30:09.466 03:30:43 -- nvmf/common.sh@399 -- # local -g is_hw=no
00:30:09.466 03:30:43 -- nvmf/common.sh@401 -- # remove_spdk_ns
00:30:09.466 03:30:43 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:30:09.466 03:30:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null'
00:30:09.466 03:30:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:30:09.466 03:30:43 -- nvmf/common.sh@403 -- # [[ phy != virt ]]
00:30:09.466 03:30:43 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs
00:30:09.466 03:30:43 -- nvmf/common.sh@285 -- # xtrace_disable
00:30:09.466 03:30:43 -- common/autotest_common.sh@10 -- # set +x
00:30:11.373 03:30:45 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci
00:30:11.373 03:30:45 -- nvmf/common.sh@291 -- # pci_devs=()
00:30:11.373 03:30:45 -- nvmf/common.sh@291 -- # local -a pci_devs
00:30:11.373 03:30:45 -- nvmf/common.sh@292 -- # pci_net_devs=()
00:30:11.373 03:30:45 -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:30:11.373 03:30:45 -- nvmf/common.sh@293 -- # pci_drivers=()
00:30:11.373 03:30:45 -- nvmf/common.sh@293 -- # local -A pci_drivers
00:30:11.373 03:30:45 -- nvmf/common.sh@295 -- # net_devs=()
00:30:11.373 03:30:45 -- nvmf/common.sh@295 -- # local -ga net_devs
00:30:11.373 03:30:45 -- nvmf/common.sh@296 -- # e810=()
00:30:11.373 03:30:45 -- nvmf/common.sh@296 -- # local -ga e810
00:30:11.373 03:30:45 -- nvmf/common.sh@297 -- # x722=()
00:30:11.373 03:30:45 -- nvmf/common.sh@297 -- # local -ga x722
00:30:11.373 03:30:45 -- nvmf/common.sh@298 -- # mlx=()
00:30:11.373 03:30:45 -- nvmf/common.sh@298 -- # local -ga mlx
00:30:11.373 03:30:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:30:11.373 03:30:45 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:30:11.373 03:30:45 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:30:11.373 03:30:45 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:30:11.373 03:30:45 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:30:11.373 03:30:45 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:30:11.373 03:30:45 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:30:11.373 03:30:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:30:11.373 03:30:45 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:30:11.373 03:30:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:30:11.373 03:30:45 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:30:11.373 03:30:45 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:30:11.373 03:30:45 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:30:11.373 03:30:45 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:30:11.373 03:30:45 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:30:11.373 03:30:45 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:30:11.373 03:30:45 -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:30:11.373 03:30:45 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:30:11.373 03:30:45 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:30:11.373 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:30:11.373 03:30:45 -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:30:11.373 03:30:45 -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:30:11.373 03:30:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:30:11.373 03:30:45 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:30:11.373 03:30:45 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:30:11.373 03:30:45 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:30:11.373 03:30:45 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:30:11.373 Found 0000:0a:00.1 (0x8086 - 0x159b)
00:30:11.373 03:30:45 -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:30:11.373 03:30:45 -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:30:11.373 03:30:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:30:11.373 03:30:45 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:30:11.373 03:30:45 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:30:11.373 03:30:45 -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:30:11.373 03:30:45 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:30:11.373 03:30:45 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:30:11.373 03:30:45 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:30:11.373 03:30:45 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:30:11.374 03:30:45 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:30:11.374 03:30:45 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:30:11.374 03:30:45 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:30:11.374 Found net devices under 0000:0a:00.0: cvl_0_0
00:30:11.374 03:30:45 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:30:11.374 03:30:45 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:30:11.374 03:30:45 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:30:11.374 03:30:45 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:30:11.374 03:30:45 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:30:11.374 03:30:45 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:30:11.374 Found net devices under 0000:0a:00.1: cvl_0_1
00:30:11.374 03:30:45 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:30:11.374 03:30:45 -- nvmf/common.sh@393 -- # (( 2 == 0 ))
00:30:11.374 03:30:45 -- nvmf/common.sh@403 -- # is_hw=yes
00:30:11.374 03:30:45 -- nvmf/common.sh@405 -- # [[ yes == yes ]]
00:30:11.374 03:30:45 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]]
00:30:11.374 03:30:45 -- nvmf/common.sh@407 -- # nvmf_tcp_init
00:30:11.374 03:30:45 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:30:11.374 03:30:45 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:30:11.374 03:30:45 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:30:11.374 03:30:45 -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:30:11.374 03:30:45 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:30:11.374 03:30:45 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:30:11.374 03:30:45 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:30:11.374 03:30:45 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:30:11.374 03:30:45 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:30:11.374 03:30:45 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:30:11.374 03:30:45 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:30:11.374 03:30:45 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:30:11.374 03:30:45 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:30:11.374 03:30:45 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:30:11.374 03:30:45 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:30:11.374 03:30:45 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:30:11.374 03:30:45 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:30:11.374 03:30:45 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:30:11.374 03:30:45 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:30:11.374 03:30:45 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:30:11.374 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:30:11.374 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms
00:30:11.374
00:30:11.374 --- 10.0.0.2 ping statistics ---
00:30:11.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:30:11.374 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms
00:30:11.374 03:30:45 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:30:11.374 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:30:11.374 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms
00:30:11.374
00:30:11.374 --- 10.0.0.1 ping statistics ---
00:30:11.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:30:11.374 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms
00:30:11.374 03:30:45 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:30:11.374 03:30:45 -- nvmf/common.sh@411 -- # return 0
00:30:11.374 03:30:45 -- nvmf/common.sh@439 -- # '[' iso == iso ']'
00:30:11.374 03:30:45 -- nvmf/common.sh@440 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:30:12.750 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:30:12.750 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:30:12.750 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:30:12.750 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:30:12.750 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:30:12.750 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:30:12.750 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:30:12.750 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:30:12.750 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:30:12.750 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:30:12.750 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:30:12.750 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:30:12.750 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:30:12.750 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:30:12.750 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:30:12.750 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:30:13.685 0000:88:00.0 (8086 0a54): nvme -> vfio-pci
00:30:13.685 03:30:48 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:30:13.685 03:30:48 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]]
00:30:13.685 03:30:48 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]]
00:30:13.685 03:30:48 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:30:13.685 03:30:48 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']'
00:30:13.685 03:30:48 -- nvmf/common.sh@463 -- # modprobe nvme-tcp
00:30:13.685 03:30:48 -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf
00:30:13.685 03:30:48 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:30:13.685 03:30:48 -- common/autotest_common.sh@710 -- # xtrace_disable
00:30:13.685 03:30:48 -- common/autotest_common.sh@10 -- # set +x
00:30:13.685 03:30:48 -- nvmf/common.sh@470 -- # nvmfpid=1644422
00:30:13.685 03:30:48 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf
00:30:13.685 03:30:48 -- nvmf/common.sh@471 -- # waitforlisten 1644422
00:30:13.685 03:30:48 -- common/autotest_common.sh@817 -- # '[' -z 1644422 ']'
00:30:13.685 03:30:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:13.685 03:30:48 -- common/autotest_common.sh@822 -- # local max_retries=100
00:30:13.685 03:30:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:30:13.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:30:13.685 03:30:48 -- common/autotest_common.sh@826 -- # xtrace_disable
00:30:13.685 03:30:48 -- common/autotest_common.sh@10 -- # set +x
00:30:13.943 [2024-04-25 03:30:48.094868] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization...
00:30:13.943 [2024-04-25 03:30:48.094954] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:30:13.943 EAL: No free 2048 kB hugepages reported on node 1
00:30:13.943 [2024-04-25 03:30:48.158429] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4
00:30:13.943 [2024-04-25 03:30:48.268246] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:30:13.943 [2024-04-25 03:30:48.268298] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:13.943 [2024-04-25 03:30:48.268321] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:13.943 [2024-04-25 03:30:48.268332] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:13.943 [2024-04-25 03:30:48.268342] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:13.943 [2024-04-25 03:30:48.268403] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:13.943 [2024-04-25 03:30:48.268461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:13.943 [2024-04-25 03:30:48.268490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:13.943 [2024-04-25 03:30:48.268493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:14.876 03:30:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:14.876 03:30:49 -- common/autotest_common.sh@850 -- # return 0 00:30:14.876 03:30:49 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:30:14.876 03:30:49 -- common/autotest_common.sh@716 -- # xtrace_disable 00:30:14.876 03:30:49 -- common/autotest_common.sh@10 -- # set +x 00:30:14.876 03:30:49 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:14.876 03:30:49 -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:30:14.876 03:30:49 -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:30:14.876 03:30:49 -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:30:14.876 03:30:49 -- scripts/common.sh@309 -- # local bdf bdfs 00:30:14.876 03:30:49 -- scripts/common.sh@310 -- # local nvmes 00:30:14.876 03:30:49 -- scripts/common.sh@312 -- # [[ -n 0000:88:00.0 ]] 00:30:14.876 
03:30:49 -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:30:14.876 03:30:49 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:30:14.876 03:30:49 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:30:14.876 03:30:49 -- scripts/common.sh@320 -- # uname -s 00:30:14.876 03:30:49 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:30:14.876 03:30:49 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:30:14.876 03:30:49 -- scripts/common.sh@325 -- # (( 1 )) 00:30:14.876 03:30:49 -- scripts/common.sh@326 -- # printf '%s\n' 0000:88:00.0 00:30:14.876 03:30:49 -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:30:14.876 03:30:49 -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:30:14.876 03:30:49 -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:30:14.876 03:30:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:14.876 03:30:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:14.876 03:30:49 -- common/autotest_common.sh@10 -- # set +x 00:30:14.876 ************************************ 00:30:14.876 START TEST spdk_target_abort 00:30:14.876 ************************************ 00:30:14.876 03:30:49 -- common/autotest_common.sh@1111 -- # spdk_target 00:30:14.876 03:30:49 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:30:14.876 03:30:49 -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:30:14.876 03:30:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:14.876 03:30:49 -- common/autotest_common.sh@10 -- # set +x 00:30:18.154 spdk_targetn1 00:30:18.154 03:30:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:18.154 03:30:51 -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:18.154 03:30:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:18.154 03:30:51 -- common/autotest_common.sh@10 -- # set +x 00:30:18.154 [2024-04-25 
03:30:51.979395] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:18.154 03:30:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:18.154 03:30:51 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:30:18.154 03:30:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:18.154 03:30:51 -- common/autotest_common.sh@10 -- # set +x 00:30:18.154 03:30:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:18.154 03:30:51 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:30:18.154 03:30:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:18.154 03:30:51 -- common/autotest_common.sh@10 -- # set +x 00:30:18.154 03:30:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:18.154 03:30:52 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:30:18.154 03:30:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:18.154 03:30:52 -- common/autotest_common.sh@10 -- # set +x 00:30:18.154 [2024-04-25 03:30:52.011642] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:18.154 03:30:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:18.154 03:30:52 -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:30:18.154 03:30:52 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:30:18.154 03:30:52 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:30:18.154 03:30:52 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:30:18.154 03:30:52 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:30:18.154 03:30:52 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:30:18.154 03:30:52 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:30:18.154 03:30:52 -- 
target/abort_qd_sizes.sh@24 -- # local target r 00:30:18.154 03:30:52 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:30:18.154 03:30:52 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:18.154 03:30:52 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:30:18.154 03:30:52 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:18.154 03:30:52 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:30:18.154 03:30:52 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:18.154 03:30:52 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:30:18.154 03:30:52 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:18.154 03:30:52 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:18.154 03:30:52 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:18.154 03:30:52 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:18.154 03:30:52 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:18.154 03:30:52 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:18.155 EAL: No free 2048 kB hugepages reported on node 1 00:30:20.680 Initializing NVMe Controllers 00:30:20.680 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:30:20.680 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:20.680 Initialization complete. Launching workers. 
00:30:20.680 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9099, failed: 0 00:30:20.680 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1337, failed to submit 7762 00:30:20.680 success 859, unsuccess 478, failed 0 00:30:20.680 03:30:55 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:20.680 03:30:55 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:20.680 EAL: No free 2048 kB hugepages reported on node 1 00:30:23.963 Initializing NVMe Controllers 00:30:23.963 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:30:23.963 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:23.963 Initialization complete. Launching workers. 00:30:23.963 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8846, failed: 0 00:30:23.963 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1250, failed to submit 7596 00:30:23.963 success 298, unsuccess 952, failed 0 00:30:23.963 03:30:58 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:23.963 03:30:58 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:23.963 EAL: No free 2048 kB hugepages reported on node 1 00:30:27.241 Initializing NVMe Controllers 00:30:27.241 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:30:27.241 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:27.241 Initialization complete. Launching workers. 
00:30:27.241 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31068, failed: 0 00:30:27.241 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2651, failed to submit 28417 00:30:27.241 success 503, unsuccess 2148, failed 0 00:30:27.241 03:31:01 -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:30:27.241 03:31:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:27.241 03:31:01 -- common/autotest_common.sh@10 -- # set +x 00:30:27.241 03:31:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:27.241 03:31:01 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:30:27.241 03:31:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:27.241 03:31:01 -- common/autotest_common.sh@10 -- # set +x 00:30:28.616 03:31:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:28.616 03:31:02 -- target/abort_qd_sizes.sh@61 -- # killprocess 1644422 00:30:28.616 03:31:02 -- common/autotest_common.sh@936 -- # '[' -z 1644422 ']' 00:30:28.616 03:31:02 -- common/autotest_common.sh@940 -- # kill -0 1644422 00:30:28.616 03:31:02 -- common/autotest_common.sh@941 -- # uname 00:30:28.616 03:31:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:28.616 03:31:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1644422 00:30:28.616 03:31:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:30:28.616 03:31:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:30:28.616 03:31:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1644422' 00:30:28.616 killing process with pid 1644422 00:30:28.616 03:31:02 -- common/autotest_common.sh@955 -- # kill 1644422 00:30:28.616 03:31:02 -- common/autotest_common.sh@960 -- # wait 1644422 00:30:28.897 00:30:28.897 real 0m14.042s 00:30:28.897 user 0m55.702s 00:30:28.897 sys 0m2.635s 00:30:28.897 03:31:03 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:30:28.897 03:31:03 -- common/autotest_common.sh@10 -- # set +x 00:30:28.897 ************************************ 00:30:28.897 END TEST spdk_target_abort 00:30:28.897 ************************************ 00:30:28.897 03:31:03 -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:30:28.897 03:31:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:28.897 03:31:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:28.897 03:31:03 -- common/autotest_common.sh@10 -- # set +x 00:30:28.897 ************************************ 00:30:28.897 START TEST kernel_target_abort 00:30:28.897 ************************************ 00:30:28.897 03:31:03 -- common/autotest_common.sh@1111 -- # kernel_target 00:30:28.897 03:31:03 -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:30:28.897 03:31:03 -- nvmf/common.sh@717 -- # local ip 00:30:28.897 03:31:03 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:28.897 03:31:03 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:28.897 03:31:03 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:28.897 03:31:03 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:28.897 03:31:03 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:28.897 03:31:03 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:28.897 03:31:03 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:28.897 03:31:03 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:28.897 03:31:03 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:28.897 03:31:03 -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:30:28.897 03:31:03 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:30:28.897 03:31:03 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:30:28.897 03:31:03 -- nvmf/common.sh@624 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:28.897 03:31:03 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:28.897 03:31:03 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:30:28.897 03:31:03 -- nvmf/common.sh@628 -- # local block nvme 00:30:28.897 03:31:03 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:30:28.897 03:31:03 -- nvmf/common.sh@631 -- # modprobe nvmet 00:30:28.897 03:31:03 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:30:28.897 03:31:03 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:30:30.282 Waiting for block devices as requested 00:30:30.282 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:30:30.282 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:30:30.282 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:30:30.282 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:30:30.541 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:30:30.541 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:30:30.541 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:30:30.541 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:30:30.541 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:30:30.800 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:30:30.800 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:30:30.800 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:30:31.058 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:30:31.058 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:30:31.058 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:30:31.058 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:30:31.316 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:30:31.316 03:31:05 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:30:31.316 03:31:05 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:30:31.316 03:31:05 -- nvmf/common.sh@641 -- # 
is_block_zoned nvme0n1 00:30:31.316 03:31:05 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:30:31.316 03:31:05 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:30:31.316 03:31:05 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:30:31.316 03:31:05 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:30:31.316 03:31:05 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:30:31.316 03:31:05 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:30:31.316 No valid GPT data, bailing 00:30:31.316 03:31:05 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:30:31.316 03:31:05 -- scripts/common.sh@391 -- # pt= 00:30:31.316 03:31:05 -- scripts/common.sh@392 -- # return 1 00:30:31.316 03:31:05 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:30:31.316 03:31:05 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:30:31.316 03:31:05 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:31.316 03:31:05 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:31.316 03:31:05 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:30:31.316 03:31:05 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:30:31.317 03:31:05 -- nvmf/common.sh@656 -- # echo 1 00:30:31.317 03:31:05 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:30:31.317 03:31:05 -- nvmf/common.sh@658 -- # echo 1 00:30:31.317 03:31:05 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:30:31.317 03:31:05 -- nvmf/common.sh@661 -- # echo tcp 00:30:31.317 03:31:05 -- nvmf/common.sh@662 -- # echo 4420 00:30:31.317 03:31:05 -- nvmf/common.sh@663 -- # echo ipv4 00:30:31.317 03:31:05 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:30:31.575 03:31:05 -- nvmf/common.sh@669 
-- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:30:31.575 00:30:31.575 Discovery Log Number of Records 2, Generation counter 2 00:30:31.575 =====Discovery Log Entry 0====== 00:30:31.575 trtype: tcp 00:30:31.575 adrfam: ipv4 00:30:31.575 subtype: current discovery subsystem 00:30:31.575 treq: not specified, sq flow control disable supported 00:30:31.575 portid: 1 00:30:31.575 trsvcid: 4420 00:30:31.575 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:30:31.575 traddr: 10.0.0.1 00:30:31.575 eflags: none 00:30:31.575 sectype: none 00:30:31.575 =====Discovery Log Entry 1====== 00:30:31.575 trtype: tcp 00:30:31.575 adrfam: ipv4 00:30:31.575 subtype: nvme subsystem 00:30:31.575 treq: not specified, sq flow control disable supported 00:30:31.575 portid: 1 00:30:31.575 trsvcid: 4420 00:30:31.575 subnqn: nqn.2016-06.io.spdk:testnqn 00:30:31.575 traddr: 10.0.0.1 00:30:31.575 eflags: none 00:30:31.575 sectype: none 00:30:31.575 03:31:05 -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:30:31.575 03:31:05 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:30:31.575 03:31:05 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:30:31.575 03:31:05 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:30:31.575 03:31:05 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:30:31.575 03:31:05 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:30:31.575 03:31:05 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:30:31.575 03:31:05 -- target/abort_qd_sizes.sh@24 -- # local target r 00:30:31.575 03:31:05 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:30:31.575 03:31:05 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:31.575 03:31:05 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:30:31.575 03:31:05 -- 
target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:31.575 03:31:05 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:30:31.575 03:31:05 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:31.575 03:31:05 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:30:31.575 03:31:05 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:31.575 03:31:05 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:30:31.575 03:31:05 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:31.575 03:31:05 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:31.575 03:31:05 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:31.575 03:31:05 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:31.575 EAL: No free 2048 kB hugepages reported on node 1 00:30:34.855 Initializing NVMe Controllers 00:30:34.855 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:30:34.855 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:34.855 Initialization complete. Launching workers. 
00:30:34.855 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 25265, failed: 0 00:30:34.855 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25265, failed to submit 0 00:30:34.855 success 0, unsuccess 25265, failed 0 00:30:34.855 03:31:08 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:34.855 03:31:08 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:34.855 EAL: No free 2048 kB hugepages reported on node 1 00:30:38.145 Initializing NVMe Controllers 00:30:38.145 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:30:38.145 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:38.145 Initialization complete. Launching workers. 00:30:38.145 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 55912, failed: 0 00:30:38.145 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 14074, failed to submit 41838 00:30:38.145 success 0, unsuccess 14074, failed 0 00:30:38.145 03:31:12 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:38.145 03:31:12 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:38.145 EAL: No free 2048 kB hugepages reported on node 1 00:30:41.427 Initializing NVMe Controllers 00:30:41.427 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:30:41.427 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:41.427 Initialization complete. Launching workers. 
00:30:41.427 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 51929, failed: 0 00:30:41.427 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 12962, failed to submit 38967 00:30:41.427 success 0, unsuccess 12962, failed 0 00:30:41.427 03:31:15 -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:30:41.427 03:31:15 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:30:41.427 03:31:15 -- nvmf/common.sh@675 -- # echo 0 00:30:41.427 03:31:15 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:41.427 03:31:15 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:41.427 03:31:15 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:30:41.427 03:31:15 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:41.427 03:31:15 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:30:41.427 03:31:15 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:30:41.427 03:31:15 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:41.995 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:30:41.995 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:30:41.995 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:30:41.995 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:30:41.995 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:30:41.995 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:30:41.995 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:30:41.995 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:30:41.995 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:30:41.995 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:30:41.995 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:30:41.995 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 
00:30:41.995 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:30:41.995 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:30:41.995 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:30:41.995 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:30:42.931 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:30:43.190 00:30:43.190 real 0m14.145s 00:30:43.190 user 0m4.202s 00:30:43.190 sys 0m3.428s 00:30:43.190 03:31:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:43.190 03:31:17 -- common/autotest_common.sh@10 -- # set +x 00:30:43.190 ************************************ 00:30:43.190 END TEST kernel_target_abort 00:30:43.190 ************************************ 00:30:43.190 03:31:17 -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:30:43.190 03:31:17 -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:30:43.190 03:31:17 -- nvmf/common.sh@477 -- # nvmfcleanup 00:30:43.190 03:31:17 -- nvmf/common.sh@117 -- # sync 00:30:43.190 03:31:17 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:43.190 03:31:17 -- nvmf/common.sh@120 -- # set +e 00:30:43.190 03:31:17 -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:43.190 03:31:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:43.190 rmmod nvme_tcp 00:30:43.190 rmmod nvme_fabrics 00:30:43.196 rmmod nvme_keyring 00:30:43.196 03:31:17 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:43.196 03:31:17 -- nvmf/common.sh@124 -- # set -e 00:30:43.196 03:31:17 -- nvmf/common.sh@125 -- # return 0 00:30:43.196 03:31:17 -- nvmf/common.sh@478 -- # '[' -n 1644422 ']' 00:30:43.196 03:31:17 -- nvmf/common.sh@479 -- # killprocess 1644422 00:30:43.196 03:31:17 -- common/autotest_common.sh@936 -- # '[' -z 1644422 ']' 00:30:43.196 03:31:17 -- common/autotest_common.sh@940 -- # kill -0 1644422 00:30:43.196 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (1644422) - No such process 00:30:43.196 03:31:17 -- common/autotest_common.sh@963 -- # echo 'Process with 
pid 1644422 is not found' 00:30:43.196 Process with pid 1644422 is not found 00:30:43.196 03:31:17 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:30:43.196 03:31:17 -- nvmf/common.sh@482 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:30:44.132 Waiting for block devices as requested 00:30:44.132 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:30:44.391 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:30:44.391 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:30:44.649 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:30:44.649 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:30:44.649 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:30:44.649 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:30:44.649 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:30:44.907 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:30:44.907 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:30:44.907 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:30:45.165 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:30:45.165 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:30:45.165 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:30:45.165 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:30:45.426 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:30:45.426 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:30:45.426 03:31:19 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:30:45.426 03:31:19 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:30:45.426 03:31:19 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:45.426 03:31:19 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:45.426 03:31:19 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:45.426 03:31:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:45.426 03:31:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:47.966 03:31:21 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:47.966 00:30:47.966 real 0m38.201s 00:30:47.966 
user 1m2.121s 00:30:47.966 sys 0m9.445s 00:30:47.966 03:31:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:47.966 03:31:21 -- common/autotest_common.sh@10 -- # set +x 00:30:47.966 ************************************ 00:30:47.966 END TEST nvmf_abort_qd_sizes 00:30:47.966 ************************************ 00:30:47.966 03:31:21 -- spdk/autotest.sh@293 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:30:47.966 03:31:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:47.966 03:31:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:47.966 03:31:21 -- common/autotest_common.sh@10 -- # set +x 00:30:47.966 ************************************ 00:30:47.966 START TEST keyring_file 00:30:47.966 ************************************ 00:30:47.966 03:31:22 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:30:47.966 * Looking for test storage... 00:30:47.966 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:30:47.966 03:31:22 -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:30:47.966 03:31:22 -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:47.966 03:31:22 -- nvmf/common.sh@7 -- # uname -s 00:30:47.966 03:31:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:47.966 03:31:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:47.966 03:31:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:47.966 03:31:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:47.966 03:31:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:47.966 03:31:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:47.966 03:31:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:47.966 03:31:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:47.966 03:31:22 -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:47.966 03:31:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:47.966 03:31:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:47.966 03:31:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:47.966 03:31:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:47.966 03:31:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:47.966 03:31:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:47.966 03:31:22 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:47.966 03:31:22 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:47.966 03:31:22 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:47.966 03:31:22 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:47.966 03:31:22 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:47.966 03:31:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.966 03:31:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.966 03:31:22 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.966 03:31:22 -- paths/export.sh@5 -- # export PATH 00:30:47.967 03:31:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.967 03:31:22 -- nvmf/common.sh@47 -- # : 0 00:30:47.967 03:31:22 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:47.967 03:31:22 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:47.967 03:31:22 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:47.967 03:31:22 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:47.967 03:31:22 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:47.967 03:31:22 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:47.967 03:31:22 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:47.967 03:31:22 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:47.967 03:31:22 -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:30:47.967 03:31:22 -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:30:47.967 03:31:22 -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:30:47.967 03:31:22 -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:30:47.967 03:31:22 -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:30:47.967 03:31:22 -- 
keyring/file.sh@24 -- # trap cleanup EXIT 00:30:47.967 03:31:22 -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:30:47.967 03:31:22 -- keyring/common.sh@15 -- # local name key digest path 00:30:47.967 03:31:22 -- keyring/common.sh@17 -- # name=key0 00:30:47.967 03:31:22 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:30:47.967 03:31:22 -- keyring/common.sh@17 -- # digest=0 00:30:47.967 03:31:22 -- keyring/common.sh@18 -- # mktemp 00:30:47.967 03:31:22 -- keyring/common.sh@18 -- # path=/tmp/tmp.boiNcZlsKd 00:30:47.967 03:31:22 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:30:47.967 03:31:22 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:30:47.967 03:31:22 -- nvmf/common.sh@691 -- # local prefix key digest 00:30:47.967 03:31:22 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:30:47.967 03:31:22 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:30:47.967 03:31:22 -- nvmf/common.sh@693 -- # digest=0 00:30:47.967 03:31:22 -- nvmf/common.sh@694 -- # python - 00:30:47.967 03:31:22 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.boiNcZlsKd 00:30:47.967 03:31:22 -- keyring/common.sh@23 -- # echo /tmp/tmp.boiNcZlsKd 00:30:47.967 03:31:22 -- keyring/file.sh@26 -- # key0path=/tmp/tmp.boiNcZlsKd 00:30:47.967 03:31:22 -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:30:47.967 03:31:22 -- keyring/common.sh@15 -- # local name key digest path 00:30:47.967 03:31:22 -- keyring/common.sh@17 -- # name=key1 00:30:47.967 03:31:22 -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:30:47.967 03:31:22 -- keyring/common.sh@17 -- # digest=0 00:30:47.967 03:31:22 -- keyring/common.sh@18 -- # mktemp 00:30:47.967 03:31:22 -- keyring/common.sh@18 -- # path=/tmp/tmp.rWEtK60iMl 00:30:47.967 03:31:22 -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:30:47.967 
03:31:22 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:30:47.967 03:31:22 -- nvmf/common.sh@691 -- # local prefix key digest 00:30:47.967 03:31:22 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:30:47.967 03:31:22 -- nvmf/common.sh@693 -- # key=112233445566778899aabbccddeeff00 00:30:47.967 03:31:22 -- nvmf/common.sh@693 -- # digest=0 00:30:47.967 03:31:22 -- nvmf/common.sh@694 -- # python - 00:30:47.967 03:31:22 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.rWEtK60iMl 00:30:47.967 03:31:22 -- keyring/common.sh@23 -- # echo /tmp/tmp.rWEtK60iMl 00:30:47.967 03:31:22 -- keyring/file.sh@27 -- # key1path=/tmp/tmp.rWEtK60iMl 00:30:47.967 03:31:22 -- keyring/file.sh@30 -- # tgtpid=1650323 00:30:47.967 03:31:22 -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:30:47.967 03:31:22 -- keyring/file.sh@32 -- # waitforlisten 1650323 00:30:47.967 03:31:22 -- common/autotest_common.sh@817 -- # '[' -z 1650323 ']' 00:30:47.967 03:31:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:47.967 03:31:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:47.967 03:31:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:47.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:47.967 03:31:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:47.967 03:31:22 -- common/autotest_common.sh@10 -- # set +x 00:30:47.967 [2024-04-25 03:31:22.276398] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 
00:30:47.967 [2024-04-25 03:31:22.276478] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1650323 ] 00:30:47.967 EAL: No free 2048 kB hugepages reported on node 1 00:30:47.967 [2024-04-25 03:31:22.338861] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:47.967 [2024-04-25 03:31:22.459046] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:48.226 03:31:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:48.226 03:31:22 -- common/autotest_common.sh@850 -- # return 0 00:30:48.226 03:31:22 -- keyring/file.sh@33 -- # rpc_cmd 00:30:48.226 03:31:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:48.226 03:31:22 -- common/autotest_common.sh@10 -- # set +x 00:30:48.226 [2024-04-25 03:31:22.715492] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:48.485 null0 00:30:48.485 [2024-04-25 03:31:22.747553] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:48.485 [2024-04-25 03:31:22.748058] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:48.485 [2024-04-25 03:31:22.755568] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:30:48.485 03:31:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:48.485 03:31:22 -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:30:48.485 03:31:22 -- common/autotest_common.sh@638 -- # local es=0 00:30:48.485 03:31:22 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:30:48.485 03:31:22 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:30:48.485 03:31:22 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:48.485 03:31:22 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:30:48.485 03:31:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:48.485 03:31:22 -- common/autotest_common.sh@641 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:30:48.485 03:31:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:48.485 03:31:22 -- common/autotest_common.sh@10 -- # set +x 00:30:48.485 [2024-04-25 03:31:22.767595] nvmf_rpc.c: 769:nvmf_rpc_listen_paused: *ERROR*: A listener already exists with different secure channel option.request: 00:30:48.485 { 00:30:48.485 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:30:48.485 "secure_channel": false, 00:30:48.485 "listen_address": { 00:30:48.485 "trtype": "tcp", 00:30:48.485 "traddr": "127.0.0.1", 00:30:48.485 "trsvcid": "4420" 00:30:48.485 }, 00:30:48.485 "method": "nvmf_subsystem_add_listener", 00:30:48.485 "req_id": 1 00:30:48.485 } 00:30:48.485 Got JSON-RPC error response 00:30:48.485 response: 00:30:48.485 { 00:30:48.485 "code": -32602, 00:30:48.485 "message": "Invalid parameters" 00:30:48.485 } 00:30:48.485 03:31:22 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:30:48.485 03:31:22 -- common/autotest_common.sh@641 -- # es=1 00:30:48.485 03:31:22 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:30:48.485 03:31:22 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:30:48.485 03:31:22 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:30:48.485 03:31:22 -- keyring/file.sh@46 -- # bperfpid=1650333 00:30:48.485 03:31:22 -- keyring/file.sh@48 -- # waitforlisten 1650333 /var/tmp/bperf.sock 00:30:48.485 03:31:22 -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:30:48.485 03:31:22 -- common/autotest_common.sh@817 -- # '[' -z 1650333 ']' 00:30:48.485 03:31:22 
-- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:48.485 03:31:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:48.486 03:31:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:48.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:48.486 03:31:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:48.486 03:31:22 -- common/autotest_common.sh@10 -- # set +x 00:30:48.486 [2024-04-25 03:31:22.814071] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 00:30:48.486 [2024-04-25 03:31:22.814154] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1650333 ] 00:30:48.486 EAL: No free 2048 kB hugepages reported on node 1 00:30:48.486 [2024-04-25 03:31:22.873278] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:48.486 [2024-04-25 03:31:22.983363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:48.745 03:31:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:48.745 03:31:23 -- common/autotest_common.sh@850 -- # return 0 00:30:48.745 03:31:23 -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.boiNcZlsKd 00:30:48.745 03:31:23 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.boiNcZlsKd 00:30:49.003 03:31:23 -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.rWEtK60iMl 00:30:49.003 03:31:23 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.rWEtK60iMl 00:30:49.261 03:31:23 -- keyring/file.sh@51 -- # get_key key0 
00:30:49.261 03:31:23 -- keyring/file.sh@51 -- # jq -r .path 00:30:49.261 03:31:23 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:49.261 03:31:23 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:49.261 03:31:23 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:49.522 03:31:23 -- keyring/file.sh@51 -- # [[ /tmp/tmp.boiNcZlsKd == \/\t\m\p\/\t\m\p\.\b\o\i\N\c\Z\l\s\K\d ]] 00:30:49.522 03:31:23 -- keyring/file.sh@52 -- # get_key key1 00:30:49.522 03:31:23 -- keyring/file.sh@52 -- # jq -r .path 00:30:49.522 03:31:23 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:49.522 03:31:23 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:49.522 03:31:23 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:49.781 03:31:24 -- keyring/file.sh@52 -- # [[ /tmp/tmp.rWEtK60iMl == \/\t\m\p\/\t\m\p\.\r\W\E\t\K\6\0\i\M\l ]] 00:30:49.781 03:31:24 -- keyring/file.sh@53 -- # get_refcnt key0 00:30:49.781 03:31:24 -- keyring/common.sh@12 -- # get_key key0 00:30:49.781 03:31:24 -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:49.781 03:31:24 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:49.781 03:31:24 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:49.781 03:31:24 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:50.039 03:31:24 -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:30:50.039 03:31:24 -- keyring/file.sh@54 -- # get_refcnt key1 00:30:50.039 03:31:24 -- keyring/common.sh@12 -- # get_key key1 00:30:50.039 03:31:24 -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:50.039 03:31:24 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:50.039 03:31:24 -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:50.039 03:31:24 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:50.297 03:31:24 -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:30:50.297 03:31:24 -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:50.297 03:31:24 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:50.297 [2024-04-25 03:31:24.768052] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:50.555 nvme0n1 00:30:50.555 03:31:24 -- keyring/file.sh@59 -- # get_refcnt key0 00:30:50.555 03:31:24 -- keyring/common.sh@12 -- # get_key key0 00:30:50.555 03:31:24 -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:50.555 03:31:24 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:50.555 03:31:24 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:50.555 03:31:24 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:50.813 03:31:25 -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:30:50.813 03:31:25 -- keyring/file.sh@60 -- # get_refcnt key1 00:30:50.813 03:31:25 -- keyring/common.sh@12 -- # get_key key1 00:30:50.813 03:31:25 -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:50.813 03:31:25 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:50.813 03:31:25 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:50.813 03:31:25 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 
00:30:51.071 03:31:25 -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:30:51.071 03:31:25 -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:51.071 Running I/O for 1 seconds... 00:30:52.004 00:30:52.004 Latency(us) 00:30:52.004 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:52.004 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:30:52.004 nvme0n1 : 1.02 3838.65 14.99 0.00 0.00 33114.10 4247.70 84274.44 00:30:52.004 =================================================================================================================== 00:30:52.004 Total : 3838.65 14.99 0.00 0.00 33114.10 4247.70 84274.44 00:30:52.004 0 00:30:52.005 03:31:26 -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:30:52.005 03:31:26 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:30:52.262 03:31:26 -- keyring/file.sh@65 -- # get_refcnt key0 00:30:52.262 03:31:26 -- keyring/common.sh@12 -- # get_key key0 00:30:52.262 03:31:26 -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:52.262 03:31:26 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:52.262 03:31:26 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:52.262 03:31:26 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:52.520 03:31:26 -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:30:52.520 03:31:27 -- keyring/file.sh@66 -- # get_refcnt key1 00:30:52.520 03:31:27 -- keyring/common.sh@12 -- # get_key key1 00:30:52.520 03:31:27 -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:52.520 03:31:27 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:52.520 03:31:27 -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:52.520 03:31:27 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:52.779 03:31:27 -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:30:52.779 03:31:27 -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:52.779 03:31:27 -- common/autotest_common.sh@638 -- # local es=0 00:30:52.779 03:31:27 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:52.779 03:31:27 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:30:52.779 03:31:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:52.779 03:31:27 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:30:52.779 03:31:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:52.779 03:31:27 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:52.779 03:31:27 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:53.037 [2024-04-25 03:31:27.462923] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:30:53.037 [2024-04-25 03:31:27.463402] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7b4340 (107): Transport endpoint is not connected 
00:30:53.037 [2024-04-25 03:31:27.464390] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7b4340 (9): Bad file descriptor 00:30:53.037 [2024-04-25 03:31:27.465387] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:53.037 [2024-04-25 03:31:27.465410] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:30:53.037 [2024-04-25 03:31:27.465437] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:53.037 request: 00:30:53.037 { 00:30:53.037 "name": "nvme0", 00:30:53.037 "trtype": "tcp", 00:30:53.037 "traddr": "127.0.0.1", 00:30:53.037 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:53.037 "adrfam": "ipv4", 00:30:53.037 "trsvcid": "4420", 00:30:53.037 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:53.037 "psk": "key1", 00:30:53.037 "method": "bdev_nvme_attach_controller", 00:30:53.037 "req_id": 1 00:30:53.037 } 00:30:53.037 Got JSON-RPC error response 00:30:53.037 response: 00:30:53.037 { 00:30:53.037 "code": -32602, 00:30:53.037 "message": "Invalid parameters" 00:30:53.037 } 00:30:53.037 03:31:27 -- common/autotest_common.sh@641 -- # es=1 00:30:53.037 03:31:27 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:30:53.037 03:31:27 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:30:53.037 03:31:27 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:30:53.037 03:31:27 -- keyring/file.sh@71 -- # get_refcnt key0 00:30:53.037 03:31:27 -- keyring/common.sh@12 -- # get_key key0 00:30:53.037 03:31:27 -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:53.037 03:31:27 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:53.037 03:31:27 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:53.037 03:31:27 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:53.295 03:31:27 -- keyring/file.sh@71 
-- # (( 1 == 1 )) 00:30:53.295 03:31:27 -- keyring/file.sh@72 -- # get_refcnt key1 00:30:53.295 03:31:27 -- keyring/common.sh@12 -- # get_key key1 00:30:53.295 03:31:27 -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:53.295 03:31:27 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:53.295 03:31:27 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:53.295 03:31:27 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:53.554 03:31:27 -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:30:53.554 03:31:27 -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:30:53.554 03:31:27 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:30:53.812 03:31:28 -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:30:53.812 03:31:28 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:30:54.070 03:31:28 -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:30:54.070 03:31:28 -- keyring/file.sh@77 -- # jq length 00:30:54.070 03:31:28 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:54.329 03:31:28 -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:30:54.329 03:31:28 -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.boiNcZlsKd 00:30:54.329 03:31:28 -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.boiNcZlsKd 00:30:54.329 03:31:28 -- common/autotest_common.sh@638 -- # local es=0 00:30:54.329 03:31:28 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.boiNcZlsKd 00:30:54.329 03:31:28 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:30:54.329 03:31:28 -- common/autotest_common.sh@630 -- # case "$(type 
-t "$arg")" in 00:30:54.329 03:31:28 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:30:54.329 03:31:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:54.329 03:31:28 -- common/autotest_common.sh@641 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.boiNcZlsKd 00:30:54.329 03:31:28 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.boiNcZlsKd 00:30:54.587 [2024-04-25 03:31:28.939175] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.boiNcZlsKd': 0100660 00:30:54.587 [2024-04-25 03:31:28.939217] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:30:54.587 request: 00:30:54.587 { 00:30:54.587 "name": "key0", 00:30:54.587 "path": "/tmp/tmp.boiNcZlsKd", 00:30:54.587 "method": "keyring_file_add_key", 00:30:54.587 "req_id": 1 00:30:54.587 } 00:30:54.587 Got JSON-RPC error response 00:30:54.587 response: 00:30:54.587 { 00:30:54.587 "code": -1, 00:30:54.587 "message": "Operation not permitted" 00:30:54.587 } 00:30:54.587 03:31:28 -- common/autotest_common.sh@641 -- # es=1 00:30:54.587 03:31:28 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:30:54.587 03:31:28 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:30:54.587 03:31:28 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:30:54.587 03:31:28 -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.boiNcZlsKd 00:30:54.587 03:31:28 -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.boiNcZlsKd 00:30:54.587 03:31:28 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.boiNcZlsKd 00:30:54.845 03:31:29 -- keyring/file.sh@86 -- # rm -f /tmp/tmp.boiNcZlsKd 00:30:54.845 03:31:29 -- keyring/file.sh@88 -- # get_refcnt key0 00:30:54.845 03:31:29 -- keyring/common.sh@12 -- # get_key key0 
00:30:54.845 03:31:29 -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:54.845 03:31:29 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:54.845 03:31:29 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:54.845 03:31:29 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:55.103 03:31:29 -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:30:55.103 03:31:29 -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:55.103 03:31:29 -- common/autotest_common.sh@638 -- # local es=0 00:30:55.103 03:31:29 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:55.103 03:31:29 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:30:55.103 03:31:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:55.103 03:31:29 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:30:55.103 03:31:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:55.103 03:31:29 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:55.103 03:31:29 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:55.362 [2024-04-25 03:31:29.665126] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.boiNcZlsKd': No such file or directory 00:30:55.362 [2024-04-25 03:31:29.665168] 
nvme_tcp.c:2570:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:30:55.362 [2024-04-25 03:31:29.665209] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:30:55.362 [2024-04-25 03:31:29.665223] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:55.362 [2024-04-25 03:31:29.665236] bdev_nvme.c:6204:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:30:55.362 request: 00:30:55.362 { 00:30:55.362 "name": "nvme0", 00:30:55.362 "trtype": "tcp", 00:30:55.362 "traddr": "127.0.0.1", 00:30:55.362 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:55.362 "adrfam": "ipv4", 00:30:55.362 "trsvcid": "4420", 00:30:55.362 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:55.362 "psk": "key0", 00:30:55.362 "method": "bdev_nvme_attach_controller", 00:30:55.362 "req_id": 1 00:30:55.362 } 00:30:55.362 Got JSON-RPC error response 00:30:55.362 response: 00:30:55.362 { 00:30:55.362 "code": -19, 00:30:55.362 "message": "No such device" 00:30:55.362 } 00:30:55.362 03:31:29 -- common/autotest_common.sh@641 -- # es=1 00:30:55.362 03:31:29 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:30:55.362 03:31:29 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:30:55.362 03:31:29 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:30:55.362 03:31:29 -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:30:55.362 03:31:29 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:30:55.620 03:31:29 -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:30:55.620 03:31:29 -- keyring/common.sh@15 -- # local name key digest path 00:30:55.620 03:31:29 -- keyring/common.sh@17 -- # name=key0 00:30:55.620 03:31:29 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:30:55.620 03:31:29 -- 
keyring/common.sh@17 -- # digest=0 00:30:55.620 03:31:29 -- keyring/common.sh@18 -- # mktemp 00:30:55.620 03:31:29 -- keyring/common.sh@18 -- # path=/tmp/tmp.0gFMGZbPe0 00:30:55.620 03:31:29 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:30:55.620 03:31:29 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:30:55.620 03:31:29 -- nvmf/common.sh@691 -- # local prefix key digest 00:30:55.620 03:31:29 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:30:55.620 03:31:29 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:30:55.620 03:31:29 -- nvmf/common.sh@693 -- # digest=0 00:30:55.620 03:31:29 -- nvmf/common.sh@694 -- # python - 00:30:55.620 03:31:29 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.0gFMGZbPe0 00:30:55.620 03:31:29 -- keyring/common.sh@23 -- # echo /tmp/tmp.0gFMGZbPe0 00:30:55.620 03:31:29 -- keyring/file.sh@95 -- # key0path=/tmp/tmp.0gFMGZbPe0 00:30:55.620 03:31:29 -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.0gFMGZbPe0 00:30:55.620 03:31:29 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.0gFMGZbPe0 00:30:55.878 03:31:30 -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:55.878 03:31:30 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:56.136 nvme0n1 00:30:56.136 03:31:30 -- keyring/file.sh@99 -- # get_refcnt key0 00:30:56.136 03:31:30 -- keyring/common.sh@12 -- # get_key key0 00:30:56.136 03:31:30 -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:56.136 03:31:30 -- keyring/common.sh@10 
-- # bperf_cmd keyring_get_keys 00:30:56.136 03:31:30 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:56.136 03:31:30 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:56.394 03:31:30 -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:30:56.394 03:31:30 -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:30:56.394 03:31:30 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:30:56.652 03:31:31 -- keyring/file.sh@101 -- # get_key key0 00:30:56.652 03:31:31 -- keyring/file.sh@101 -- # jq -r .removed 00:30:56.652 03:31:31 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:56.652 03:31:31 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:56.652 03:31:31 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:56.910 03:31:31 -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:30:56.910 03:31:31 -- keyring/file.sh@102 -- # get_refcnt key0 00:30:56.910 03:31:31 -- keyring/common.sh@12 -- # get_key key0 00:30:56.910 03:31:31 -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:56.910 03:31:31 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:56.910 03:31:31 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:56.910 03:31:31 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:57.168 03:31:31 -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:30:57.168 03:31:31 -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:30:57.168 03:31:31 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:30:57.425 03:31:31 -- keyring/file.sh@104 
-- # bperf_cmd keyring_get_keys 00:30:57.425 03:31:31 -- keyring/file.sh@104 -- # jq length 00:30:57.425 03:31:31 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:57.682 03:31:31 -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:30:57.682 03:31:31 -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.0gFMGZbPe0 00:30:57.682 03:31:31 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.0gFMGZbPe0 00:30:57.972 03:31:32 -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.rWEtK60iMl 00:30:57.972 03:31:32 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.rWEtK60iMl 00:30:58.233 03:31:32 -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:58.233 03:31:32 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:58.492 nvme0n1 00:30:58.492 03:31:32 -- keyring/file.sh@112 -- # bperf_cmd save_config 00:30:58.492 03:31:32 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:30:58.750 03:31:33 -- keyring/file.sh@112 -- # config='{ 00:30:58.750 "subsystems": [ 00:30:58.750 { 00:30:58.750 "subsystem": "keyring", 00:30:58.750 "config": [ 00:30:58.750 { 00:30:58.750 "method": "keyring_file_add_key", 00:30:58.750 "params": { 00:30:58.750 "name": "key0", 00:30:58.750 "path": "/tmp/tmp.0gFMGZbPe0" 00:30:58.750 } 00:30:58.750 }, 00:30:58.750 { 
00:30:58.750 "method": "keyring_file_add_key", 00:30:58.750 "params": { 00:30:58.750 "name": "key1", 00:30:58.750 "path": "/tmp/tmp.rWEtK60iMl" 00:30:58.750 } 00:30:58.750 } 00:30:58.750 ] 00:30:58.750 }, 00:30:58.750 { 00:30:58.750 "subsystem": "iobuf", 00:30:58.750 "config": [ 00:30:58.750 { 00:30:58.750 "method": "iobuf_set_options", 00:30:58.750 "params": { 00:30:58.750 "small_pool_count": 8192, 00:30:58.750 "large_pool_count": 1024, 00:30:58.750 "small_bufsize": 8192, 00:30:58.750 "large_bufsize": 135168 00:30:58.750 } 00:30:58.750 } 00:30:58.750 ] 00:30:58.750 }, 00:30:58.750 { 00:30:58.750 "subsystem": "sock", 00:30:58.750 "config": [ 00:30:58.750 { 00:30:58.750 "method": "sock_impl_set_options", 00:30:58.750 "params": { 00:30:58.750 "impl_name": "posix", 00:30:58.750 "recv_buf_size": 2097152, 00:30:58.750 "send_buf_size": 2097152, 00:30:58.750 "enable_recv_pipe": true, 00:30:58.750 "enable_quickack": false, 00:30:58.750 "enable_placement_id": 0, 00:30:58.750 "enable_zerocopy_send_server": true, 00:30:58.750 "enable_zerocopy_send_client": false, 00:30:58.750 "zerocopy_threshold": 0, 00:30:58.750 "tls_version": 0, 00:30:58.750 "enable_ktls": false 00:30:58.750 } 00:30:58.750 }, 00:30:58.750 { 00:30:58.750 "method": "sock_impl_set_options", 00:30:58.750 "params": { 00:30:58.750 "impl_name": "ssl", 00:30:58.750 "recv_buf_size": 4096, 00:30:58.750 "send_buf_size": 4096, 00:30:58.750 "enable_recv_pipe": true, 00:30:58.750 "enable_quickack": false, 00:30:58.750 "enable_placement_id": 0, 00:30:58.750 "enable_zerocopy_send_server": true, 00:30:58.750 "enable_zerocopy_send_client": false, 00:30:58.750 "zerocopy_threshold": 0, 00:30:58.750 "tls_version": 0, 00:30:58.750 "enable_ktls": false 00:30:58.750 } 00:30:58.750 } 00:30:58.750 ] 00:30:58.750 }, 00:30:58.750 { 00:30:58.750 "subsystem": "vmd", 00:30:58.750 "config": [] 00:30:58.750 }, 00:30:58.750 { 00:30:58.750 "subsystem": "accel", 00:30:58.750 "config": [ 00:30:58.750 { 00:30:58.750 "method": 
"accel_set_options", 00:30:58.750 "params": { 00:30:58.750 "small_cache_size": 128, 00:30:58.750 "large_cache_size": 16, 00:30:58.750 "task_count": 2048, 00:30:58.750 "sequence_count": 2048, 00:30:58.750 "buf_count": 2048 00:30:58.750 } 00:30:58.750 } 00:30:58.750 ] 00:30:58.750 }, 00:30:58.750 { 00:30:58.750 "subsystem": "bdev", 00:30:58.750 "config": [ 00:30:58.750 { 00:30:58.750 "method": "bdev_set_options", 00:30:58.750 "params": { 00:30:58.750 "bdev_io_pool_size": 65535, 00:30:58.750 "bdev_io_cache_size": 256, 00:30:58.750 "bdev_auto_examine": true, 00:30:58.750 "iobuf_small_cache_size": 128, 00:30:58.750 "iobuf_large_cache_size": 16 00:30:58.750 } 00:30:58.750 }, 00:30:58.750 { 00:30:58.750 "method": "bdev_raid_set_options", 00:30:58.750 "params": { 00:30:58.750 "process_window_size_kb": 1024 00:30:58.750 } 00:30:58.750 }, 00:30:58.750 { 00:30:58.750 "method": "bdev_iscsi_set_options", 00:30:58.750 "params": { 00:30:58.750 "timeout_sec": 30 00:30:58.750 } 00:30:58.750 }, 00:30:58.750 { 00:30:58.750 "method": "bdev_nvme_set_options", 00:30:58.750 "params": { 00:30:58.750 "action_on_timeout": "none", 00:30:58.750 "timeout_us": 0, 00:30:58.750 "timeout_admin_us": 0, 00:30:58.750 "keep_alive_timeout_ms": 10000, 00:30:58.750 "arbitration_burst": 0, 00:30:58.750 "low_priority_weight": 0, 00:30:58.750 "medium_priority_weight": 0, 00:30:58.750 "high_priority_weight": 0, 00:30:58.750 "nvme_adminq_poll_period_us": 10000, 00:30:58.750 "nvme_ioq_poll_period_us": 0, 00:30:58.750 "io_queue_requests": 512, 00:30:58.750 "delay_cmd_submit": true, 00:30:58.750 "transport_retry_count": 4, 00:30:58.750 "bdev_retry_count": 3, 00:30:58.750 "transport_ack_timeout": 0, 00:30:58.750 "ctrlr_loss_timeout_sec": 0, 00:30:58.750 "reconnect_delay_sec": 0, 00:30:58.750 "fast_io_fail_timeout_sec": 0, 00:30:58.750 "disable_auto_failback": false, 00:30:58.750 "generate_uuids": false, 00:30:58.750 "transport_tos": 0, 00:30:58.750 "nvme_error_stat": false, 00:30:58.750 "rdma_srq_size": 0, 
00:30:58.750 "io_path_stat": false, 00:30:58.750 "allow_accel_sequence": false, 00:30:58.750 "rdma_max_cq_size": 0, 00:30:58.751 "rdma_cm_event_timeout_ms": 0, 00:30:58.751 "dhchap_digests": [ 00:30:58.751 "sha256", 00:30:58.751 "sha384", 00:30:58.751 "sha512" 00:30:58.751 ], 00:30:58.751 "dhchap_dhgroups": [ 00:30:58.751 "null", 00:30:58.751 "ffdhe2048", 00:30:58.751 "ffdhe3072", 00:30:58.751 "ffdhe4096", 00:30:58.751 "ffdhe6144", 00:30:58.751 "ffdhe8192" 00:30:58.751 ] 00:30:58.751 } 00:30:58.751 }, 00:30:58.751 { 00:30:58.751 "method": "bdev_nvme_attach_controller", 00:30:58.751 "params": { 00:30:58.751 "name": "nvme0", 00:30:58.751 "trtype": "TCP", 00:30:58.751 "adrfam": "IPv4", 00:30:58.751 "traddr": "127.0.0.1", 00:30:58.751 "trsvcid": "4420", 00:30:58.751 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:58.751 "prchk_reftag": false, 00:30:58.751 "prchk_guard": false, 00:30:58.751 "ctrlr_loss_timeout_sec": 0, 00:30:58.751 "reconnect_delay_sec": 0, 00:30:58.751 "fast_io_fail_timeout_sec": 0, 00:30:58.751 "psk": "key0", 00:30:58.751 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:58.751 "hdgst": false, 00:30:58.751 "ddgst": false 00:30:58.751 } 00:30:58.751 }, 00:30:58.751 { 00:30:58.751 "method": "bdev_nvme_set_hotplug", 00:30:58.751 "params": { 00:30:58.751 "period_us": 100000, 00:30:58.751 "enable": false 00:30:58.751 } 00:30:58.751 }, 00:30:58.751 { 00:30:58.751 "method": "bdev_wait_for_examine" 00:30:58.751 } 00:30:58.751 ] 00:30:58.751 }, 00:30:58.751 { 00:30:58.751 "subsystem": "nbd", 00:30:58.751 "config": [] 00:30:58.751 } 00:30:58.751 ] 00:30:58.751 }' 00:30:58.751 03:31:33 -- keyring/file.sh@114 -- # killprocess 1650333 00:30:58.751 03:31:33 -- common/autotest_common.sh@936 -- # '[' -z 1650333 ']' 00:30:58.751 03:31:33 -- common/autotest_common.sh@940 -- # kill -0 1650333 00:30:58.751 03:31:33 -- common/autotest_common.sh@941 -- # uname 00:30:58.751 03:31:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:58.751 03:31:33 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1650333 00:30:58.751 03:31:33 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:30:58.751 03:31:33 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:30:58.751 03:31:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1650333' 00:30:58.751 killing process with pid 1650333 00:30:58.751 03:31:33 -- common/autotest_common.sh@955 -- # kill 1650333 00:30:58.751 Received shutdown signal, test time was about 1.000000 seconds 00:30:58.751 00:30:58.751 Latency(us) 00:30:58.751 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:58.751 =================================================================================================================== 00:30:58.751 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:58.751 03:31:33 -- common/autotest_common.sh@960 -- # wait 1650333 00:30:59.009 03:31:33 -- keyring/file.sh@117 -- # bperfpid=1651785 00:30:59.009 03:31:33 -- keyring/file.sh@119 -- # waitforlisten 1651785 /var/tmp/bperf.sock 00:30:59.009 03:31:33 -- common/autotest_common.sh@817 -- # '[' -z 1651785 ']' 00:30:59.009 03:31:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:59.009 03:31:33 -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:30:59.009 03:31:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:59.009 03:31:33 -- keyring/file.sh@115 -- # echo '{ 00:30:59.009 "subsystems": [ 00:30:59.009 { 00:30:59.009 "subsystem": "keyring", 00:30:59.009 "config": [ 00:30:59.009 { 00:30:59.009 "method": "keyring_file_add_key", 00:30:59.009 "params": { 00:30:59.009 "name": "key0", 00:30:59.009 "path": "/tmp/tmp.0gFMGZbPe0" 00:30:59.009 } 00:30:59.009 }, 00:30:59.009 { 00:30:59.009 "method": "keyring_file_add_key", 00:30:59.009 "params": { 00:30:59.009 "name": "key1", 
00:30:59.009 "path": "/tmp/tmp.rWEtK60iMl" 00:30:59.009 } 00:30:59.009 } 00:30:59.009 ] 00:30:59.009 }, 00:30:59.009 { 00:30:59.009 "subsystem": "iobuf", 00:30:59.009 "config": [ 00:30:59.009 { 00:30:59.009 "method": "iobuf_set_options", 00:30:59.009 "params": { 00:30:59.009 "small_pool_count": 8192, 00:30:59.009 "large_pool_count": 1024, 00:30:59.009 "small_bufsize": 8192, 00:30:59.009 "large_bufsize": 135168 00:30:59.009 } 00:30:59.009 } 00:30:59.009 ] 00:30:59.009 }, 00:30:59.009 { 00:30:59.009 "subsystem": "sock", 00:30:59.009 "config": [ 00:30:59.009 { 00:30:59.009 "method": "sock_impl_set_options", 00:30:59.009 "params": { 00:30:59.009 "impl_name": "posix", 00:30:59.009 "recv_buf_size": 2097152, 00:30:59.009 "send_buf_size": 2097152, 00:30:59.009 "enable_recv_pipe": true, 00:30:59.009 "enable_quickack": false, 00:30:59.009 "enable_placement_id": 0, 00:30:59.009 "enable_zerocopy_send_server": true, 00:30:59.009 "enable_zerocopy_send_client": false, 00:30:59.009 "zerocopy_threshold": 0, 00:30:59.009 "tls_version": 0, 00:30:59.009 "enable_ktls": false 00:30:59.009 } 00:30:59.009 }, 00:30:59.009 { 00:30:59.009 "method": "sock_impl_set_options", 00:30:59.009 "params": { 00:30:59.009 "impl_name": "ssl", 00:30:59.009 "recv_buf_size": 4096, 00:30:59.009 "send_buf_size": 4096, 00:30:59.009 "enable_recv_pipe": true, 00:30:59.009 "enable_quickack": false, 00:30:59.009 "enable_placement_id": 0, 00:30:59.009 "enable_zerocopy_send_server": true, 00:30:59.009 "enable_zerocopy_send_client": false, 00:30:59.009 "zerocopy_threshold": 0, 00:30:59.009 "tls_version": 0, 00:30:59.009 "enable_ktls": false 00:30:59.009 } 00:30:59.009 } 00:30:59.009 ] 00:30:59.009 }, 00:30:59.009 { 00:30:59.009 "subsystem": "vmd", 00:30:59.009 "config": [] 00:30:59.009 }, 00:30:59.009 { 00:30:59.009 "subsystem": "accel", 00:30:59.009 "config": [ 00:30:59.009 { 00:30:59.009 "method": "accel_set_options", 00:30:59.009 "params": { 00:30:59.009 "small_cache_size": 128, 00:30:59.009 "large_cache_size": 
16, 00:30:59.009 "task_count": 2048, 00:30:59.009 "sequence_count": 2048, 00:30:59.009 "buf_count": 2048 00:30:59.009 } 00:30:59.009 } 00:30:59.009 ] 00:30:59.009 }, 00:30:59.009 { 00:30:59.009 "subsystem": "bdev", 00:30:59.009 "config": [ 00:30:59.009 { 00:30:59.009 "method": "bdev_set_options", 00:30:59.009 "params": { 00:30:59.009 "bdev_io_pool_size": 65535, 00:30:59.009 "bdev_io_cache_size": 256, 00:30:59.009 "bdev_auto_examine": true, 00:30:59.009 "iobuf_small_cache_size": 128, 00:30:59.009 "iobuf_large_cache_size": 16 00:30:59.009 } 00:30:59.009 }, 00:30:59.009 { 00:30:59.009 "method": "bdev_raid_set_options", 00:30:59.009 "params": { 00:30:59.009 "process_window_size_kb": 1024 00:30:59.009 } 00:30:59.009 }, 00:30:59.009 { 00:30:59.009 "method": "bdev_iscsi_set_options", 00:30:59.009 "params": { 00:30:59.009 "timeout_sec": 30 00:30:59.009 } 00:30:59.009 }, 00:30:59.009 { 00:30:59.009 "method": "bdev_nvme_set_options", 00:30:59.009 "params": { 00:30:59.009 "action_on_timeout": "none", 00:30:59.010 "timeout_us": 0, 00:30:59.010 "timeout_admin_us": 0, 00:30:59.010 "keep_alive_timeout_ms": 10000, 00:30:59.010 "arbitration_burst": 0, 00:30:59.010 "low_priority_weight": 0, 00:30:59.010 "medium_priority_weight": 0, 00:30:59.010 "high_priority_weight": 0, 00:30:59.010 "nvme_adminq_poll_period_us": 10000, 00:30:59.010 "nvme_ioq_poll_period_us": 0, 00:30:59.010 "io_queue_requests": 512, 00:30:59.010 "delay_cmd_submit": true, 00:30:59.010 "transport_retry_count": 4, 00:30:59.010 "bdev_retry_count": 3, 00:30:59.010 "transport_ack_timeout": 0, 00:30:59.010 "ctrlr_loss_timeout_sec": 0, 00:30:59.010 "reconnect_delay_sec": 0, 00:30:59.010 "fast_io_fail_timeout_sec": 0, 00:30:59.010 "disable_auto_failback": false, 00:30:59.010 "generate_uuids": false, 00:30:59.010 "transport_tos": 0, 00:30:59.010 "nvme_error_stat": false, 00:30:59.010 "rdma_srq_size": 0, 00:30:59.010 "io_path_stat": false, 00:30:59.010 "allow_accel_sequence": false, 00:30:59.010 "rdma_max_cq_size": 0, 
00:30:59.010 "rdma_cm_event_timeout_ms": 0, 00:30:59.010 "dhchap_digests": [ 00:30:59.010 "sha256", 00:30:59.010 "sha384", 00:30:59.010 "sha512" 00:30:59.010 ], 00:30:59.010 "dhchap_dhgroups": [ 00:30:59.010 "null", 00:30:59.010 "ffdhe2048", 00:30:59.010 "ffdhe3072", 00:30:59.010 "ffdhe4096", 00:30:59.010 "ffdhe6144", 00:30:59.010 "ffdhe8192" 00:30:59.010 ] 00:30:59.010 } 00:30:59.010 }, 00:30:59.010 { 00:30:59.010 "method": "bdev_nvme_attach_controller", 00:30:59.010 "params": { 00:30:59.010 "name": "nvme0", 00:30:59.010 "trtype": "TCP", 00:30:59.010 "adrfam": "IPv4", 00:30:59.010 "traddr": "127.0.0.1", 00:30:59.010 "trsvcid": "4420", 00:30:59.010 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:59.010 "prchk_reftag": false, 00:30:59.010 "prchk_guard": false, 00:30:59.010 "ctrlr_loss_timeout_sec": 0, 00:30:59.010 "reconnect_delay_sec": 0, 00:30:59.010 "fast_io_fail_timeout_sec": 0, 00:30:59.010 "psk": "key0", 00:30:59.010 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:59.010 "hdgst": false, 00:30:59.010 "ddgst": false 00:30:59.010 } 00:30:59.010 }, 00:30:59.010 { 00:30:59.010 "method": "bdev_nvme_set_hotplug", 00:30:59.010 "params": { 00:30:59.010 "period_us": 100000, 00:30:59.010 "enable": false 00:30:59.010 } 00:30:59.010 }, 00:30:59.010 { 00:30:59.010 "method": "bdev_wait_for_examine" 00:30:59.010 } 00:30:59.010 ] 00:30:59.010 }, 00:30:59.010 { 00:30:59.010 "subsystem": "nbd", 00:30:59.010 "config": [] 00:30:59.010 } 00:30:59.010 ] 00:30:59.010 }' 00:30:59.010 03:31:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:59.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:59.010 03:31:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:59.010 03:31:33 -- common/autotest_common.sh@10 -- # set +x 00:30:59.010 [2024-04-25 03:31:33.420692] Starting SPDK v24.05-pre git sha1 abd932d6f / DPDK 23.11.0 initialization... 
00:30:59.010 [2024-04-25 03:31:33.420789] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1651785 ] 00:30:59.010 EAL: No free 2048 kB hugepages reported on node 1 00:30:59.010 [2024-04-25 03:31:33.479700] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:59.268 [2024-04-25 03:31:33.590060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:59.526 [2024-04-25 03:31:33.776207] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:00.091 03:31:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:00.091 03:31:34 -- common/autotest_common.sh@850 -- # return 0 00:31:00.091 03:31:34 -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:31:00.091 03:31:34 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:00.091 03:31:34 -- keyring/file.sh@120 -- # jq length 00:31:00.349 03:31:34 -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:31:00.349 03:31:34 -- keyring/file.sh@121 -- # get_refcnt key0 00:31:00.349 03:31:34 -- keyring/common.sh@12 -- # get_key key0 00:31:00.349 03:31:34 -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:00.349 03:31:34 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:00.349 03:31:34 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:00.349 03:31:34 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:00.349 03:31:34 -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:31:00.349 03:31:34 -- keyring/file.sh@122 -- # get_refcnt key1 00:31:00.607 03:31:34 -- keyring/common.sh@12 -- # get_key key1 00:31:00.607 03:31:34 -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:00.607 03:31:34 -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:00.607 03:31:34 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:00.607 03:31:34 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:00.607 03:31:35 -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:31:00.607 03:31:35 -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:31:00.607 03:31:35 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:31:00.607 03:31:35 -- keyring/file.sh@123 -- # jq -r '.[].name' 00:31:00.865 03:31:35 -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:31:00.865 03:31:35 -- keyring/file.sh@1 -- # cleanup 00:31:00.865 03:31:35 -- keyring/file.sh@19 -- # rm -f /tmp/tmp.0gFMGZbPe0 /tmp/tmp.rWEtK60iMl 00:31:00.865 03:31:35 -- keyring/file.sh@20 -- # killprocess 1651785 00:31:00.865 03:31:35 -- common/autotest_common.sh@936 -- # '[' -z 1651785 ']' 00:31:00.865 03:31:35 -- common/autotest_common.sh@940 -- # kill -0 1651785 00:31:00.865 03:31:35 -- common/autotest_common.sh@941 -- # uname 00:31:00.865 03:31:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:00.865 03:31:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1651785 00:31:00.865 03:31:35 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:31:00.865 03:31:35 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:31:00.865 03:31:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1651785' 00:31:00.865 killing process with pid 1651785 00:31:00.865 03:31:35 -- common/autotest_common.sh@955 -- # kill 1651785 00:31:00.865 Received shutdown signal, test time was about 1.000000 seconds 00:31:00.865 00:31:00.865 Latency(us) 00:31:00.865 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:00.865 
=================================================================================================================== 00:31:00.865 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:31:00.865 03:31:35 -- common/autotest_common.sh@960 -- # wait 1651785 00:31:01.123 03:31:35 -- keyring/file.sh@21 -- # killprocess 1650323 00:31:01.123 03:31:35 -- common/autotest_common.sh@936 -- # '[' -z 1650323 ']' 00:31:01.123 03:31:35 -- common/autotest_common.sh@940 -- # kill -0 1650323 00:31:01.123 03:31:35 -- common/autotest_common.sh@941 -- # uname 00:31:01.123 03:31:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:01.123 03:31:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1650323 00:31:01.381 03:31:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:31:01.381 03:31:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:31:01.381 03:31:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1650323' 00:31:01.381 killing process with pid 1650323 00:31:01.381 03:31:35 -- common/autotest_common.sh@955 -- # kill 1650323 00:31:01.381 [2024-04-25 03:31:35.633776] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:31:01.381 03:31:35 -- common/autotest_common.sh@960 -- # wait 1650323 00:31:01.639 00:31:01.639 real 0m14.021s 00:31:01.639 user 0m34.262s 00:31:01.639 sys 0m3.191s 00:31:01.639 03:31:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:01.639 03:31:36 -- common/autotest_common.sh@10 -- # set +x 00:31:01.639 ************************************ 00:31:01.639 END TEST keyring_file 00:31:01.639 ************************************ 00:31:01.639 03:31:36 -- spdk/autotest.sh@294 -- # [[ n == y ]] 00:31:01.639 03:31:36 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:31:01.639 03:31:36 -- spdk/autotest.sh@310 -- # '[' 0 -eq 1 ']' 00:31:01.639 03:31:36 -- spdk/autotest.sh@314 -- # '[' 0 -eq 1 ']' 
00:31:01.639 03:31:36 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:31:01.639 03:31:36 -- spdk/autotest.sh@328 -- # '[' 0 -eq 1 ']' 00:31:01.639 03:31:36 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:31:01.639 03:31:36 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:31:01.639 03:31:36 -- spdk/autotest.sh@341 -- # '[' 0 -eq 1 ']' 00:31:01.639 03:31:36 -- spdk/autotest.sh@345 -- # '[' 0 -eq 1 ']' 00:31:01.639 03:31:36 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:31:01.639 03:31:36 -- spdk/autotest.sh@354 -- # '[' 0 -eq 1 ']' 00:31:01.639 03:31:36 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:31:01.639 03:31:36 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:31:01.639 03:31:36 -- spdk/autotest.sh@369 -- # [[ 0 -eq 1 ]] 00:31:01.639 03:31:36 -- spdk/autotest.sh@373 -- # [[ 0 -eq 1 ]] 00:31:01.639 03:31:36 -- spdk/autotest.sh@378 -- # trap - SIGINT SIGTERM EXIT 00:31:01.639 03:31:36 -- spdk/autotest.sh@380 -- # timing_enter post_cleanup 00:31:01.640 03:31:36 -- common/autotest_common.sh@710 -- # xtrace_disable 00:31:01.640 03:31:36 -- common/autotest_common.sh@10 -- # set +x 00:31:01.640 03:31:36 -- spdk/autotest.sh@381 -- # autotest_cleanup 00:31:01.640 03:31:36 -- common/autotest_common.sh@1378 -- # local autotest_es=0 00:31:01.640 03:31:36 -- common/autotest_common.sh@1379 -- # xtrace_disable 00:31:01.640 03:31:36 -- common/autotest_common.sh@10 -- # set +x 00:31:03.543 INFO: APP EXITING 00:31:03.543 INFO: killing all VMs 00:31:03.543 INFO: killing vhost app 00:31:03.543 INFO: EXIT DONE 00:31:04.919 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:31:04.919 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:31:04.919 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:31:04.919 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:31:04.919 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:31:04.919 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:31:04.919 0000:00:04.2 (8086 0e22): Already using 
the ioatdma driver 00:31:04.919 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:31:04.919 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:31:04.919 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:31:04.919 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:31:04.919 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:31:04.919 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:31:04.919 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:31:04.919 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:31:04.919 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:31:04.919 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:31:06.295 Cleaning 00:31:06.295 Removing: /var/run/dpdk/spdk0/config 00:31:06.295 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:31:06.295 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:31:06.295 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:31:06.295 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:31:06.295 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:31:06.295 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:31:06.295 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:31:06.295 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:31:06.295 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:31:06.295 Removing: /var/run/dpdk/spdk0/hugepage_info 00:31:06.295 Removing: /var/run/dpdk/spdk1/config 00:31:06.295 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:31:06.295 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:31:06.295 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:31:06.295 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:31:06.295 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:31:06.295 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:31:06.295 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 
00:31:06.295 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:31:06.295 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:31:06.295 Removing: /var/run/dpdk/spdk1/hugepage_info 00:31:06.295 Removing: /var/run/dpdk/spdk1/mp_socket 00:31:06.295 Removing: /var/run/dpdk/spdk2/config 00:31:06.295 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:31:06.295 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:31:06.295 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:31:06.295 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:31:06.295 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:31:06.295 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:31:06.295 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:31:06.295 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:31:06.295 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:31:06.295 Removing: /var/run/dpdk/spdk2/hugepage_info 00:31:06.295 Removing: /var/run/dpdk/spdk3/config 00:31:06.295 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:31:06.295 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:31:06.295 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:31:06.295 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:31:06.295 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:31:06.295 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:31:06.295 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:31:06.295 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:31:06.295 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:31:06.295 Removing: /var/run/dpdk/spdk3/hugepage_info 00:31:06.295 Removing: /var/run/dpdk/spdk4/config 00:31:06.295 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:31:06.295 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:31:06.295 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:31:06.295 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:31:06.295 
Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:31:06.295 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:31:06.295 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:31:06.295 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:31:06.296 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:31:06.296 Removing: /var/run/dpdk/spdk4/hugepage_info 00:31:06.296 Removing: /dev/shm/bdev_svc_trace.1 00:31:06.296 Removing: /dev/shm/nvmf_trace.0 00:31:06.296 Removing: /dev/shm/spdk_tgt_trace.pid1371563 00:31:06.296 Removing: /var/run/dpdk/spdk0 00:31:06.296 Removing: /var/run/dpdk/spdk1 00:31:06.296 Removing: /var/run/dpdk/spdk2 00:31:06.296 Removing: /var/run/dpdk/spdk3 00:31:06.296 Removing: /var/run/dpdk/spdk4 00:31:06.296 Removing: /var/run/dpdk/spdk_pid1369841 00:31:06.296 Removing: /var/run/dpdk/spdk_pid1370597 00:31:06.296 Removing: /var/run/dpdk/spdk_pid1371563 00:31:06.296 Removing: /var/run/dpdk/spdk_pid1372052 00:31:06.296 Removing: /var/run/dpdk/spdk_pid1372753 00:31:06.296 Removing: /var/run/dpdk/spdk_pid1372892 00:31:06.296 Removing: /var/run/dpdk/spdk_pid1373623 00:31:06.296 Removing: /var/run/dpdk/spdk_pid1373743 00:31:06.296 Removing: /var/run/dpdk/spdk_pid1374003 00:31:06.296 Removing: /var/run/dpdk/spdk_pid1375200 00:31:06.296 Removing: /var/run/dpdk/spdk_pid1376117 00:31:06.296 Removing: /var/run/dpdk/spdk_pid1376438 00:31:06.296 Removing: /var/run/dpdk/spdk_pid1376638 00:31:06.296 Removing: /var/run/dpdk/spdk_pid1376845 00:31:06.296 Removing: /var/run/dpdk/spdk_pid1377167 00:31:06.296 Removing: /var/run/dpdk/spdk_pid1377343 00:31:06.296 Removing: /var/run/dpdk/spdk_pid1377503 00:31:06.296 Removing: /var/run/dpdk/spdk_pid1377694 00:31:06.296 Removing: /var/run/dpdk/spdk_pid1378287 00:31:06.296 Removing: /var/run/dpdk/spdk_pid1381042 00:31:06.296 Removing: /var/run/dpdk/spdk_pid1381518 00:31:06.296 Removing: /var/run/dpdk/spdk_pid1381772 00:31:06.296 Removing: /var/run/dpdk/spdk_pid1381872 00:31:06.296 Removing: 
/var/run/dpdk/spdk_pid1382190 00:31:06.296 Removing: /var/run/dpdk/spdk_pid1382313 00:31:06.296 Removing: /var/run/dpdk/spdk_pid1382752 00:31:06.296 Removing: /var/run/dpdk/spdk_pid1382792 00:31:06.296 Removing: /var/run/dpdk/spdk_pid1383065 00:31:06.296 Removing: /var/run/dpdk/spdk_pid1383074 00:31:06.296 Removing: /var/run/dpdk/spdk_pid1383370 00:31:06.296 Removing: /var/run/dpdk/spdk_pid1383376 00:31:06.296 Removing: /var/run/dpdk/spdk_pid1383796 00:31:06.296 Removing: /var/run/dpdk/spdk_pid1384041 00:31:06.296 Removing: /var/run/dpdk/spdk_pid1384251 00:31:06.296 Removing: /var/run/dpdk/spdk_pid1384431 00:31:06.296 Removing: /var/run/dpdk/spdk_pid1384584 00:31:06.296 Removing: /var/run/dpdk/spdk_pid1384712 00:31:06.296 Removing: /var/run/dpdk/spdk_pid1384958 00:31:06.296 Removing: /var/run/dpdk/spdk_pid1385124 00:31:06.296 Removing: /var/run/dpdk/spdk_pid1385402 00:31:06.296 Removing: /var/run/dpdk/spdk_pid1385575 00:31:06.296 Removing: /var/run/dpdk/spdk_pid1385748 00:31:06.296 Removing: /var/run/dpdk/spdk_pid1386021 00:31:06.296 Removing: /var/run/dpdk/spdk_pid1386179 00:31:06.296 Removing: /var/run/dpdk/spdk_pid1386461 00:31:06.296 Removing: /var/run/dpdk/spdk_pid1386631 00:31:06.296 Removing: /var/run/dpdk/spdk_pid1386808 00:31:06.296 Removing: /var/run/dpdk/spdk_pid1387078 00:31:06.296 Removing: /var/run/dpdk/spdk_pid1387242 00:31:06.296 Removing: /var/run/dpdk/spdk_pid1387524 00:31:06.296 Removing: /var/run/dpdk/spdk_pid1387684 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1387921 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1388140 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1388302 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1388590 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1388753 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1389036 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1389121 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1389453 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1391651 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1445550 
00:31:06.555 Removing: /var/run/dpdk/spdk_pid1448189 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1453800 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1457089 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1459708 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1460106 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1468839 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1469117 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1471638 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1475463 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1478019 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1484435 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1489758 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1491066 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1491725 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1501823 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1504104 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1506962 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1508143 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1509349 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1509482 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1509622 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1509758 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1510307 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1512134 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1512877 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1513301 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1514913 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1515403 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1515914 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1518441 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1521963 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1525492 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1548902 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1551659 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1555447 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1556400 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1557630 00:31:06.555 Removing: 
/var/run/dpdk/spdk_pid1560312 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1562559 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1566910 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1566917 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1569823 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1569957 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1570094 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1570366 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1570400 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1571561 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1573358 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1574535 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1575714 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1576899 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1578076 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1581752 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1582084 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1583221 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1583693 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1587169 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1589125 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1592670 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1595852 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1600320 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1600322 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1613256 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1613792 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1614201 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1614611 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1615324 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1615734 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1616266 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1616677 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1619178 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1619342 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1623241 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1623296 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1625020 
00:31:06.555 Removing: /var/run/dpdk/spdk_pid1630009 00:31:06.555 Removing: /var/run/dpdk/spdk_pid1630079 00:31:06.556 Removing: /var/run/dpdk/spdk_pid1632984 00:31:06.556 Removing: /var/run/dpdk/spdk_pid1634393 00:31:06.556 Removing: /var/run/dpdk/spdk_pid1635911 00:31:06.556 Removing: /var/run/dpdk/spdk_pid1637242 00:31:06.556 Removing: /var/run/dpdk/spdk_pid1638569 00:31:06.556 Removing: /var/run/dpdk/spdk_pid1639445 00:31:06.556 Removing: /var/run/dpdk/spdk_pid1644855 00:31:06.556 Removing: /var/run/dpdk/spdk_pid1645249 00:31:06.556 Removing: /var/run/dpdk/spdk_pid1645640 00:31:06.556 Removing: /var/run/dpdk/spdk_pid1647086 00:31:06.556 Removing: /var/run/dpdk/spdk_pid1647480 00:31:06.556 Removing: /var/run/dpdk/spdk_pid1647880 00:31:06.556 Removing: /var/run/dpdk/spdk_pid1650323 00:31:06.556 Removing: /var/run/dpdk/spdk_pid1650333 00:31:06.556 Removing: /var/run/dpdk/spdk_pid1651785 00:31:06.556 Clean 00:31:06.814 03:31:41 -- common/autotest_common.sh@1437 -- # return 0 00:31:06.814 03:31:41 -- spdk/autotest.sh@382 -- # timing_exit post_cleanup 00:31:06.814 03:31:41 -- common/autotest_common.sh@716 -- # xtrace_disable 00:31:06.814 03:31:41 -- common/autotest_common.sh@10 -- # set +x 00:31:06.814 03:31:41 -- spdk/autotest.sh@384 -- # timing_exit autotest 00:31:06.814 03:31:41 -- common/autotest_common.sh@716 -- # xtrace_disable 00:31:06.814 03:31:41 -- common/autotest_common.sh@10 -- # set +x 00:31:06.814 03:31:41 -- spdk/autotest.sh@385 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:31:06.814 03:31:41 -- spdk/autotest.sh@387 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:31:06.814 03:31:41 -- spdk/autotest.sh@387 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:31:06.814 03:31:41 -- spdk/autotest.sh@389 -- # hash lcov 00:31:06.814 03:31:41 -- spdk/autotest.sh@389 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:31:06.814 03:31:41 -- spdk/autotest.sh@391 
-- # hostname 00:31:06.814 03:31:41 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:31:07.072 geninfo: WARNING: invalid characters removed from testname! 00:31:33.615 03:32:06 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:31:36.148 03:32:10 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:31:38.687 03:32:13 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:31:42.887 03:32:16 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 
--no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:31:45.428 03:32:19 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:31:48.726 03:32:22 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:31:51.266 03:32:25 -- spdk/autotest.sh@398 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:31:51.266 03:32:25 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:51.266 03:32:25 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:31:51.266 03:32:25 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:51.266 03:32:25 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:51.266 03:32:25 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:51.266 03:32:25 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:51.266 03:32:25 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:51.266 03:32:25 -- paths/export.sh@5 -- $ export PATH 00:31:51.266 03:32:25 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:51.266 03:32:25 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:31:51.266 03:32:25 -- common/autobuild_common.sh@435 -- $ date +%s 00:31:51.266 03:32:25 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1714008745.XXXXXX 00:31:51.266 03:32:25 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1714008745.YQgYpj 00:31:51.266 03:32:25 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:31:51.266 03:32:25 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:31:51.266 03:32:25 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 
00:31:51.266 03:32:25 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:31:51.266 03:32:25 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:31:51.266 03:32:25 -- common/autobuild_common.sh@451 -- $ get_config_params 00:31:51.266 03:32:25 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:31:51.266 03:32:25 -- common/autotest_common.sh@10 -- $ set +x 00:31:51.266 03:32:25 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk' 00:31:51.266 03:32:25 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:31:51.266 03:32:25 -- pm/common@17 -- $ local monitor 00:31:51.266 03:32:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:31:51.266 03:32:25 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=1661684 00:31:51.266 03:32:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:31:51.266 03:32:25 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=1661686 00:31:51.266 03:32:25 -- pm/common@21 -- $ date +%s 00:31:51.266 03:32:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:31:51.266 03:32:25 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=1661688 00:31:51.266 03:32:25 -- pm/common@21 -- $ date +%s 00:31:51.266 03:32:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:31:51.266 03:32:25 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=1661692 00:31:51.266 03:32:25 -- pm/common@26 -- $ sleep 1 00:31:51.266 03:32:25 -- pm/common@21 -- $ date +%s 00:31:51.266 
03:32:25 -- pm/common@21 -- $ date +%s 00:31:51.266 03:32:25 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1714008745 00:31:51.266 03:32:25 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1714008745 00:31:51.266 03:32:25 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1714008745 00:31:51.266 03:32:25 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1714008745 00:31:51.266 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1714008745_collect-vmstat.pm.log 00:31:51.266 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1714008745_collect-bmc-pm.bmc.pm.log 00:31:51.266 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1714008745_collect-cpu-load.pm.log 00:31:51.266 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1714008745_collect-cpu-temp.pm.log 00:31:52.221 03:32:26 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:31:52.221 03:32:26 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:31:52.221 03:32:26 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:31:52.221 03:32:26 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:31:52.221 03:32:26 -- spdk/autopackage.sh@18 
-- $ [[ 1 -eq 0 ]] 00:31:52.221 03:32:26 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:31:52.221 03:32:26 -- spdk/autopackage.sh@19 -- $ timing_finish 00:31:52.221 03:32:26 -- common/autotest_common.sh@722 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:31:52.221 03:32:26 -- common/autotest_common.sh@723 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:31:52.221 03:32:26 -- common/autotest_common.sh@725 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:31:52.221 03:32:26 -- spdk/autopackage.sh@20 -- $ exit 0 00:31:52.221 03:32:26 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:31:52.221 03:32:26 -- pm/common@30 -- $ signal_monitor_resources TERM 00:31:52.221 03:32:26 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:31:52.221 03:32:26 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:31:52.221 03:32:26 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:31:52.221 03:32:26 -- pm/common@45 -- $ pid=1661711 00:31:52.221 03:32:26 -- pm/common@52 -- $ sudo kill -TERM 1661711 00:31:52.482 03:32:26 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:31:52.482 03:32:26 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:31:52.482 03:32:26 -- pm/common@45 -- $ pid=1661714 00:31:52.482 03:32:26 -- pm/common@52 -- $ sudo kill -TERM 1661714 00:31:52.482 03:32:26 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:31:52.482 03:32:26 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:31:52.482 03:32:26 -- pm/common@45 -- $ pid=1661712 00:31:52.482 03:32:26 -- pm/common@52 -- $ sudo kill -TERM 1661712 00:31:52.482 03:32:26 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 
00:31:52.482 03:32:26 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:31:52.482 03:32:26 -- pm/common@45 -- $ pid=1661713 00:31:52.482 03:32:26 -- pm/common@52 -- $ sudo kill -TERM 1661713 00:31:52.482 + [[ -n 1286927 ]] 00:31:52.482 + sudo kill 1286927 00:31:52.493 [Pipeline] } 00:31:52.516 [Pipeline] // stage 00:31:52.522 [Pipeline] } 00:31:52.538 [Pipeline] // timeout 00:31:52.543 [Pipeline] } 00:31:52.559 [Pipeline] // catchError 00:31:52.563 [Pipeline] } 00:31:52.579 [Pipeline] // wrap 00:31:52.585 [Pipeline] } 00:31:52.600 [Pipeline] // catchError 00:31:52.608 [Pipeline] stage 00:31:52.610 [Pipeline] { (Epilogue) 00:31:52.624 [Pipeline] catchError 00:31:52.626 [Pipeline] { 00:31:52.640 [Pipeline] echo 00:31:52.641 Cleanup processes 00:31:52.647 [Pipeline] sh 00:31:52.933 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:31:52.933 1661829 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:31:52.933 1661979 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:31:52.947 [Pipeline] sh 00:31:53.232 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:31:53.232 ++ grep -v 'sudo pgrep' 00:31:53.232 ++ awk '{print $1}' 00:31:53.232 + sudo kill -9 1661829 00:31:53.244 [Pipeline] sh 00:31:53.528 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:32:01.653 [Pipeline] sh 00:32:01.940 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:32:01.940 Artifacts sizes are good 00:32:01.955 [Pipeline] archiveArtifacts 00:32:01.963 Archiving artifacts 00:32:02.199 [Pipeline] sh 00:32:02.478 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:32:02.493 [Pipeline] cleanWs 00:32:02.504 [WS-CLEANUP] Deleting project workspace... 00:32:02.504 [WS-CLEANUP] Deferred wipeout is used... 
00:32:02.511 [WS-CLEANUP] done 00:32:02.513 [Pipeline] } 00:32:02.533 [Pipeline] // catchError 00:32:02.544 [Pipeline] sh 00:32:02.824 + logger -p user.info -t JENKINS-CI 00:32:02.832 [Pipeline] } 00:32:02.848 [Pipeline] // stage 00:32:02.854 [Pipeline] } 00:32:02.871 [Pipeline] // node 00:32:02.876 [Pipeline] End of Pipeline 00:32:02.914 Finished: SUCCESS